US Launches AI Safety Institute
In a significant move, the United States is set to create an AI safety institute dedicated to assessing the risks posed by cutting-edge artificial intelligence models. The initiative was announced by US Secretary of Commerce Gina Raimondo.
Secretary Raimondo emphasized the collaborative nature of the venture. In her address at the AI Safety Summit in Britain, she called on experts from academia and industry to join a consortium supporting the institute, stressing that addressing AI risks is a collective effort in which the private sector must play a vital role.
As part of the commitment to strengthening AI safety efforts, Secretary Raimondo expressed the intention to establish a formal partnership between the newly formed US AI safety institute and the United Kingdom's AI Safety Institute. This international collaboration underscores the global significance of addressing AI safety challenges.
Key Responsibilities of the Institute
The AI safety institute, which will operate under the National Institute of Standards and Technology (NIST), will lead the US government's work on AI safety. Its primary focus will be evaluating advanced AI models, with particular emphasis on known and emerging risks. The institute's mandate includes developing safety, security, and testing standards for AI models, creating authentication standards for AI-generated content, and providing research environments for evaluating emerging AI risks and mitigating known impacts.
In Raimondo's words: "The institute will facilitate the development of standards for safety, security, and testing of AI models, develop standards for authenticating AI-generated content, and provide testing environments for researchers to evaluate emerging AI risks and address known impacts."
Biden’s AI Executive Order
This initiative aligns with President Joe Biden's recent AI executive order, which requires developers of AI systems posing risks to national security, the economy, public health, or safety to share the results of safety tests with the US government. The order invokes the Defense Production Act to compel that disclosure and calls for regulatory oversight and standards-setting for AI testing. It also addresses risk categories spanning the chemical, biological, radiological, nuclear, and cybersecurity domains.