Nations to Jointly Develop Evaluation Mechanisms and Risk Mitigation Guides
The United States and the United Kingdom signed a landmark artificial intelligence agreement to work together to develop tests for the most advanced AI models and share research capabilities.
Beginning Monday, the partnership will allow the U.S. and the U.K. AI Safety Institutes to share information and expert personnel and to jointly develop safety evaluation mechanisms and guidance for emerging risks.
The pact makes good on the commitment made at the Bletchley Park AI Safety Summit last November, at a time when think tanks and governments were pushing for a shared global approach to AI safety.
The countries also committed to developing similar partnerships with other nations to promote AI safety across the globe. Several intergovernmental guidelines already exist, such as those agreed to by member countries of the OECD and the G7; they aim to establish guardrails that can curb the risks brought on by the rapid proliferation of AI, including misinformation and election fraud.
U.S. Secretary of Commerce Gina Raimondo has called AI the “defining technology of our generation.” She said the partnership will accelerate work across the full spectrum of risks to national security and the broader society: “We aren’t running away from these concerns – we’re running at them.”
Raimondo said she expects the collaboration to lead to a better understanding of AI systems, robust risk evaluations and rigorous development and implementation guidance.
The partnership will enable both countries to align their scientific approaches and iterate on evaluation mechanisms for AI models, systems and agents, the announcement says. The U.S. and U.K. AI Safety Institutes have already laid out plans to build a common approach to AI safety testing, to jointly address identified risks, and to share fundamental technical research on AI safety and security. They also intend to perform at least one joint testing exercise on a publicly accessible model.
Countries globally have doubled down on building guardrails to foster the development and implementation of safe and responsible AI. While the White House’s AI executive order calls for federal agencies to determine and address AI use and risks within stipulated deadlines, the European Union adopted the most comprehensive AI legislation yet with the AI Act.