OpenAI and Anthropic inked agreements with the US government to allow authorities to research, test and evaluate new AI models prior to public release.  

Each company’s Memorandum of Understanding (MoU) establishes a framework for the US AI Safety Institute to receive early access to major new models and to continue studying them after release.

The MoUs also cover collaborative research to evaluate the models’ capabilities and potential risks, along with methods to mitigate those risks.

AI companies are facing increased regulatory scrutiny over the ethical use of large language models and how they are trained.  

“These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI,” said Elizabeth Kelly, director of the US AI Safety Institute.

The institute plans to provide feedback to Anthropic and OpenAI on potential safety improvements to their models, in collaboration with its partners at the UK AI Safety Institute.

The US AI Safety Institute is located within the US Department of Commerce’s National Institute of Standards and Technology.