First-of-its-kind international agreement on AI Safety introduced by the US and allies
- In collaboration with Britain and 16 other nations, the US has unveiled the first comprehensive international agreement on AI safety, albeit a non-binding one.
- The 20-page document stresses the importance of AI systems being “secure by design.”
- The guidelines came after the UK’s Bletchley Declaration and an AI ‘Safety Testing’ agreement.
In the swiftly advancing realm of AI, prioritizing safety and responsible development has become increasingly imperative. As the UK’s National Cyber Security Centre (NCSC) CEO, Lindy Cameron, says, “We cannot rely on our ability to retrofit security into the technology in the years to come nor expect individual users to carry the burden of risk solely. As we develop the technology, we must build security as a core requirement.”
Her point is that for people, organizations, and governments alike, security frequently becomes a secondary concern when development moves quickly. In her keynote address at Chatham House’s Cyber 2023 conference on June 14, Cameron highlighted how a substantial portion of today’s digital architecture lacks a foundational focus on security, having been constructed on inherently flawed and vulnerable foundations.
“And unless we act now, we risk building a similarly flawed ecosystem for AI,” she said, adding that AI developers must predict possible attacks and identify ways to mitigate them. “Failure to do so will risk designing vulnerabilities into future AI systems.”
Fast forward five months, and the UK published the first global guidelines for ensuring the secure development of AI technology.
Following the UK’s leadership on AI safety, 17 other countries have pledged to jointly endorse the new guidelines, released on Sunday: Australia, Canada, Chile, Czechia, Estonia, France, Germany, Israel, Italy, Japan, New Zealand, Nigeria, Norway, Poland, the Republic of Korea, Singapore, and the US.
Who came up with the AI ‘secure by design’ guidelines?
The 20-page document is a global first. The guidelines are designed to help developers of AI-based systems make well-informed cybersecurity decisions throughout development, and they apply whether those systems are built from the ground up or constructed using tools and services provided by external sources.
The guidelines were collaboratively crafted by the UK’s NCSC, a division of Government Communications Headquarters (GCHQ), and the US Cybersecurity and Infrastructure Security Agency (CISA), working alongside industry experts and 21 other international agencies and ministries worldwide. The inclusive effort also involved participants from G7 member nations and representatives from the Global South.
“The guidelines help developers ensure that cybersecurity is both an essential pre-condition of AI system safety and integral to the development process from the outset and throughout, known as a ‘secure by design’ approach,” the NCSC said in a blog post. Cameron also emphasized that these guidelines mark “a significant step in shaping a truly global, common understanding of the cyber-risks and mitigation strategies around AI to ensure that security is not a postscript to development but a core requirement throughout.”
The agreement is not legally binding and consists mainly of broad recommendations, such as monitoring AI systems for potential misuse, safeguarding data against tampering, and vetting software suppliers. Even so, according to Jen Easterly, director of CISA, it is significant that so many countries have endorsed the principle of prioritizing safety in AI systems.
“This is the first time we have seen an affirmation that these capabilities should not just be about cool features and how quickly we can get them to market or how we can compete to drive down costs,” Easterly told Reuters, saying the guidelines represent “an agreement that the most important thing that needs to be done at the design phase is security.”
‘Bletchley Declaration’ on AI safety
Earlier this month, the UK hosted the world’s first international summit on AI safety, the ‘AI Safety Summit’ at Bletchley Park. Attended by representatives of governments, leading multinational technology companies, and industry experts, the summit produced one significant outcome: the endorsement of the ‘Bletchley Declaration,’ an international agreement signed by 28 governments and the European Union.
The declaration affirmed the signatories’ joint commitment to developing AI in a manner that prioritizes safety, human-centric principles, trustworthiness, and responsibility. Noteworthy signatories included the UK, the US, China, the EU, and major European member states such as France, Germany, Italy, and Spain.
An essential element of the declaration centers on “frontier AI”: powerful general-purpose AI models that could present significant risks, especially in cybersecurity and biotechnology. The agreement underscores the growing need to understand and mitigate these risks through global cooperation.
It advocates for creating “risk-based policies” and developing “appropriate evaluation metrics, safety testing tools, and relevant public sector capabilities and scientific research.” Alongside the Bletchley Declaration, another summit outcome was an agreement on AI ‘Safety Testing,’ formalized in a policy paper.
That agreement was endorsed by ten countries, including the UK, the US, and key European member states, along with prominent technology companies. The policy paper outlines a comprehensive framework for government agencies to test next-generation AI models, fostering international collaboration.
It also encourages government agencies to enhance their public sector testing capacity and formulate their own approaches to AI safety regulation.