March 5, 2024

NIST establishes AI Safety Institute Consortium in response to Biden executive order

Documentation from NIST states the consortium will adopt a “broad human-centered focus” with “specific policies.”

The United States National Institute of Standards and Technology (NIST) and the Department of Commerce are soliciting members for the newly established Artificial Intelligence (AI) Safety Institute Consortium.

In a document published to the Federal Register on Nov. 2, NIST announced the formation of the new AI consortium along with an official notice expressing the office’s request for applicants with the relevant credentials.

Per the NIST document:

“This notice is the initial step for NIST in collaborating with non-profit organizations, universities, other government agencies, and technology companies to address challenges associated with the development and deployment of AI.”

The purpose of the collaboration is, according to the notice, to create and implement specific policies and measurements to ensure US lawmakers take a human-centered approach to AI safety and governance.

Collaborators will be required to contribute to a wide range of related functions, including the development of measurement and benchmarking tools, policy recommendations, red-teaming efforts, psychoanalysis, and environmental analysis.

These efforts come in response to a recent executive order from U.S. President Joe Biden. As Cointelegraph recently reported, the executive order established six new standards for AI safety and security, though none appear to have been legally enshrined.

Related: UK AI Safety Summit begins with global leaders in attendance, remarks from China and Musk

Many European and Asian states have begun instituting policies governing the development of AI systems with respect to user and citizen privacy, security, and the potential for unintended consequences. The U.S. has comparatively lagged in this arena.

President Biden’s executive order marks some progress toward the establishment of so-called “specific policies” to govern AI in the US, as does the formation of the Safety Institute Consortium.

However, there still doesn’t appear to be an actual timeline for the implementation of laws governing AI development or deployment in the U.S. beyond legacy policies governing businesses and technology. Many experts feel these current laws are inadequate when applied to the burgeoning AI sector.