QUICK BITE
- US AI Safety Institute partners with Anthropic and OpenAI for pre-release access to new AI models, focusing on safety research and risk reduction.
- The collaboration aims to advance AI safety science, with the institute providing feedback to improve model safety and supporting NIST’s AI work across various risk areas.
- The agreement aligns with the Biden-Harris administration’s AI Executive Order, promoting safe and trustworthy AI development through third-party testing and safety checks.
The U.S. Artificial Intelligence Safety Institute, housed within the National Institute of Standards and Technology (NIST) at the Department of Commerce, has signed agreements with Anthropic and OpenAI to collaborate on AI safety research.
Under the agreements, the Institute gains access to major new AI models from both companies, both before and after their public release. The collaboration will focus on evaluating the capabilities and safety risks of these models and developing methods to mitigate those risks.
“Safety is essential to fueling breakthrough technological innovation. With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science of AI safety,” said Elizabeth Kelly, director of the U.S. AI Safety Institute.
“These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI.”
The U.S. AI Safety Institute, working closely with its U.K. counterpart, will also provide Anthropic and OpenAI with feedback on potential safety improvements to their models. The work continues NIST's long tradition of advancing technology, standards, and related tools, and the evaluations conducted under these agreements will support NIST's broader AI efforts through collaborative research on advanced AI systems across a range of risk areas.
These evaluations will also help promote the safe and trustworthy development of AI, building on the Biden-Harris administration’s Executive Order on AI and the voluntary commitments made by top AI developers.
The Institute was established after the Biden-Harris administration issued the U.S. government's first executive order on artificial intelligence in October 2023. That order called for new safety checks, guidance on equity and civil rights, and research on AI's impact on jobs.
“We are happy to have reached an agreement with the US AI Safety Institute for pre-release testing of our future models,” said OpenAI CEO Sam Altman.
“Looking forward to doing a pre-deployment test on our next model with the US AISI! Third-party testing is a really important part of the AI ecosystem and it’s been amazing to see governments stand up safety institutes to facilitate this,” said Jack Clark, Co-Founder of Anthropic.
The news follows reports that OpenAI is in talks to raise funding that could value the company at more than $100 billion. Thrive Capital is expected to lead the round with a $1 billion investment, according to a source who requested anonymity because the details are confidential.
Anthropic, founded by former OpenAI researchers, was recently valued at $18.4 billion. Amazon is a major investor in Anthropic, while Microsoft is a major backer of OpenAI.