The Uncertain Future of the U.S. AI Safety Institute

By Tanu Chahal

October 23, 2024


The U.S. AI Safety Institute (AISI), one of the few government bodies focused on evaluating the risks associated with artificial intelligence, faces an uncertain future. Established in November 2023 under President Joe Biden’s AI Executive Order, the AISI operates within the National Institute of Standards and Technology (NIST), a division of the Commerce Department. Its mission is to provide guidance on the safe deployment of AI technologies. However, the institute’s existence hinges on the executive order that created it, which a future administration could repeal with relative ease.

As it stands, the AISI has a director, a $10 million budget, and a research collaboration with its U.K. counterpart, the U.K. AI Safety Institute. Without formal authorization from Congress, however, the institute remains vulnerable. According to Chris MacKenzie of Americans for Responsible Innovation, the AISI would be dismantled if a future administration repealed the AI Executive Order; Donald Trump, for instance, has said he intends to revoke it if he returns to office. Congressional authorization, by contrast, would secure the institute’s future regardless of who holds the presidency.

Securing formal authorization would not only safeguard the AISI’s future but could also lead to more stable, long-term funding. The institute’s current budget is modest relative to the scale of AI development it is meant to oversee, particularly given the concentration of leading AI labs in Silicon Valley. As MacKenzie points out, Congress is more likely to allocate greater funding to entities that have long-term authorization and support from lawmakers.

A coalition of over 60 companies, nonprofits, and universities, including major AI players like OpenAI and Anthropic, has urged Congress to pass legislation formally authorizing the AISI before the end of the year. Doing so would ensure continued collaboration on AI research and testing, furthering the institute’s mission to establish AI safety benchmarks.

Bipartisan bills supporting the AISI’s activities have already made progress in both the Senate and the House, but they face opposition from conservative lawmakers, including Senator Ted Cruz, who has pushed for changes to certain provisions, such as those related to diversity programs.

Despite its voluntary standards and relatively limited enforcement power, the AISI is seen by many in the tech industry as a key player in shaping future AI regulations. Companies like Microsoft, Google, Amazon, and IBM view the institute as a promising step toward establishing industry-wide AI benchmarks that could influence future policies.

There is also concern that dissolving the AISI could put the U.S. at a disadvantage in the global race for AI leadership. In May 2024, international leaders at an AI summit in Seoul agreed to form a global network of AI safety institutes, including nations like Japan, France, Germany, and South Korea. The U.S. is part of this initiative, but without the AISI, its position could weaken.

Jason Oxman, president and CEO of the Information Technology Industry Council, emphasized the importance of the AISI in maintaining U.S. leadership in AI development. He urged Congress to pass bipartisan legislation to solidify the institute’s role in advancing AI innovation and adoption in the U.S., ensuring the nation does not fall behind on the global stage.