A man passes by a large banner for the AI Seoul summit in Seoul, South Korea, on Monday. Sunak will also co-chair a virtual meeting of world leaders on ‘innovation and inclusivity’ in AI with the Korean president. Photograph: Lee Jin-man/AP

First companies sign up to AI safety standards on eve of Seoul summit

Rishi Sunak says 16 international firms have committed, but standards have been criticised for lacking teeth

The first 16 companies have signed up to voluntary artificial intelligence safety standards introduced at the Bletchley Park summit, Rishi Sunak has said on the eve of the follow-up event in Seoul.

The standards, however, have been criticised for lacking teeth, with signatories committing only to work toward information sharing, invest in cybersecurity and prioritise research into societal risks.

“These commitments ensure the world’s leading AI companies will provide transparency and accountability on their plans to develop safe AI,” Sunak said. “It sets a precedent for global standards on AI safety that will unlock the benefits of this transformative technology.”

Among the 16 are Zhipu.ai from China and the Technology Innovation Institute from the United Arab Emirates. The government says the presence of signatories from countries that have been less willing to bind their national champions to safety regulation is a benefit of the lighter-touch approach.

The UK’s technology secretary, Michelle Donelan, said the Seoul event “really does build on the work that we did at Bletchley and the ‘Bletchley effect’ that we created afterwards. It really had the ripple effect of moving AI and AI safety on to the agenda of many nations. We saw that with nations coming forward with plans to create their own AI safety institutes, for instance.

“And what we’ve achieved in Seoul is we’ve really broadened out the conversation. We’ve got a collection from across the globe, highlighting that this process is really galvanising companies, not just in certain countries but in all areas of the globe to really tackle this issue.”

The longer the codes remained voluntary, however, the greater the risk was that AI companies would simply ignore them, said Fran Bennett, the interim director of the Ada Lovelace Institute.

“People thinking and talking about safety and security, that’s all good stuff. So is securing commitments from companies in other nations, particularly China and the UAE. But companies determining what is safe and what is dangerous, and voluntarily choosing what to do about that, that’s problematic.

“It’s great to be thinking about safety and establishing norms, but now you need some teeth to it: you need regulation, and you need some institutions which are able to draw the line from the perspective of the people affected, not of the companies building the things.”

Bennett also criticised the lack of transparency over training data. Even under the safety standards, companies are free to keep the data on which their models are trained entirely secret, despite the known risks of biased or incomplete sources.

Donelan argued that AI safety institutes such as the one in the UK have enough access to make data transparency unnecessary. “If the argument is that training data can present risks, then what the institute can do is go through the model and see if that model can itself present a risk,” she said. “That’s a million times more than what we had just over six months ago, when it was all down to the company.”

OpenAI, another of the signatories to the standards, said they represented “an important step toward promoting broader implementation of safety practices for advanced AI systems”.


Anna Makanju, the company’s vice-president for global affairs, said “the field of AI safety is quickly evolving and we are particularly glad to endorse the commitments’ emphasis on refining approaches alongside the science”.

The presence of Chinese and Emirati signatories on the list is seen as vindicating Britain’s leadership in AI safety, the Guardian understands, because a US-led effort would have had little chance of being seen as neutral enough to attract support.

On Tuesday evening, Sunak co-chaired a closed virtual meeting of world and business leaders with the South Korean president, Yoon Suk Yeol. Attendees, including Kamala Harris, Emmanuel Macron, Meta’s Nick Clegg and Twitter owner Elon Musk, agreed on further co-operation to progress AI safety science, a Whitehall source said.

The full list of the companies to have signed up to the safety standards:

  • Amazon

  • Anthropic

  • Cohere

  • Google / Google DeepMind

  • G42

  • IBM

  • Inflection AI

  • Meta

  • Microsoft

  • Mistral AI

  • Naver

  • OpenAI

  • Samsung Electronics

  • Technology Innovation Institute

  • xAI

  • Zhipu.ai
