US, UK, Other Countries Sign Agreement To Make AI 'Secure By Design'

On Sunday, the United States, the United Kingdom, and more than a dozen other countries unveiled what a senior US official described as the first detailed international agreement on keeping artificial intelligence safe from rogue actors, urging companies to develop AI systems that are "secure by design."

In a 20-page statement released the same day, the 18 countries agreed that firms building and using AI must develop and deploy it in a way that protects consumers and the general public from misuse.

The agreement is non-binding and consists largely of broad advice such as monitoring AI systems for misuse, securing data from manipulation, and evaluating software vendors.

Still, Jen Easterly, head of the United States Cybersecurity and Infrastructure Security Agency, said it was vital that so many governments signed on to the premise that AI systems must prioritize safety.

"This is the first time that we have seen an affirmation that these capabilities should not just be about cool features and how quickly we can get them to market or how we can compete to drive down costs," Easterly said in a Reuters interview, adding that the recommendations represent "an agreement that the most important thing that needs to be done at the design phase is security."

The pact is the latest in a series of moves by governments around the world to shape the development of AI, whose weight is increasingly felt in industry and society at large.

Germany, Italy, the Czech Republic, Estonia, Poland, Australia, Chile, Israel, Nigeria, and Singapore are among the 18 nations that have signed on to the new standards, in addition to the United States and the United Kingdom.