NIST Establishes AI Safety Consortium

The National Institute of Standards and Technology established the AI Safety Institute Consortium on Feb. 7 to help develop guidelines and standards for AI measurement and policy.

A notable omission from the list of consortium members is the Future of Life Institute, a global nonprofit backed by donors including Elon Musk and established to prevent AI from contributing to “extreme large-scale risks” such as global war.

The U.S. AI Safety Institute was created as part of the efforts set in motion by President Joe Biden’s October 2023 executive order on safe, secure and trustworthy AI.

In the U.S., AI safety guidance and standards at the federal level are handled by NIST and, now, the U.S. AI Safety Institute under NIST. The major U.S. AI companies have worked with the government to promote AI safety practices and the skills the AI industry needs to contribute to the economy.
