Marc Warner, CEO of London’s Faculty AI, views the new organization as an international standard setter

The UK should focus on setting global standards for artificial intelligence testing instead of attempting to conduct all evaluations itself, according to the chief executive of a company assisting the government’s AI Safety Institute.

Marc Warner, CEO of Faculty AI, said the newly established institute could end up responsible for scrutinizing a wide range of AI models, such as the technology behind chatbots like ChatGPT, because of the government’s leading work in AI safety.

Rishi Sunak announced the formation of the AI Safety Institute (AISI) last year ahead of the global AI safety summit, where major tech companies committed to working with the EU and 10 countries, including the UK, US, France, and Japan, on testing advanced AI models before and after deployment.

The UK plays a significant role in the agreement because of its advanced work in AI safety, demonstrated by the establishment of the institute.

Warner, whose London-based company holds contracts with the UK institute to help test that AI models comply with safety guidelines, said the institute should lead the world in setting testing standards.

“I believe it’s crucial for the institute to establish standards for the broader world, rather than attempting to handle everything internally,” he stated.

Warner, whose company also works with the NHS on Covid and the Home Office on countering extremism, commended the institute for its “excellent start” and noted, “I have never seen government initiatives progress as quickly as this.”

He also noted that “the technology is advancing rapidly” and suggested that, rather than doing all the work itself, the institute should establish standards that other governments and companies can adopt, such as “red teaming,” in which experts simulate the misuse of an AI model.

Warner expressed concern that the government might end up “red teaming everything,” creating a backlog in which it lacks the capacity to assess all models promptly.

Speaking about the institute’s potential to establish international standards, he remarked, “They can establish excellent standards that other governments, other companies… can use for red teaming. So, it’s a much more scalable, long-term vision for ensuring the safety of these technologies.”

Warner spoke to the Guardian shortly before the AISI gave an update on its testing program last week, in which the institute acknowledged that it did not have the capacity to test “all released models” and would concentrate only on the most advanced systems.

The Financial Times reported last week that major AI companies are urging the UK government to accelerate its safety testing of AI systems. Signatories to the voluntary testing agreement include Google, OpenAI (the developer of ChatGPT), Microsoft, and Mark Zuckerberg’s Meta.

The US has also launched an AI safety institute that will take part in the testing program unveiled at the summit at Bletchley Park. The Biden administration recently announced a consortium to support the White House in meeting the goals of its October executive order on AI safety, which include developing guidelines for watermarking AI-generated content. Members of the consortium, which will be housed under the US institute, include Meta, Google, Apple, and OpenAI.

The UK’s Department for Science, Innovation and Technology said that governments worldwide “need to play a key role” in testing AI models.

“The UK is leading this effort with the world’s first AI Safety Institute, which is conducting assessments, research, and information sharing, advancing the collective understanding of AI safety globally,” a spokesperson stated. “The institute’s ongoing work will further guide policymakers worldwide on AI safety.”
