“Sputnik Moment” … “Frontier AI” … At the AI Safety Summit hosted by the UK, which opened yesterday, nearly 30 countries have begun discussing the growing concerns about the safety of AI, but also its opportunities, and the steps that governments and businesses are taking to mitigate the risks. The summit focuses on three key areas: developing technical solutions to AI safety problems, building public trust in AI, and ensuring that AI is used for good.
The UK Secretary of State for Science, Innovation and Technology, Michelle Donelan, kicked off the opening plenary session, stressing that humans are at the origin of AI and should stay in control of it while unleashing its incredible potential. Then all regions, including the US, China, India and the EU, had the occasion to express their views on the development and safety of AI. In their speeches, they all seemed aligned on the approach; in practice, we know it is a different story. Elon Musk was spotted in the audience.
The venue, Bletchley Park, is a symbol in itself: it was the principal centre of Allied code-breaking during the Second World War (cf. A. Turing). As the US Secretary of Commerce, Gina Raimondo, reminded the audience, at one point two-thirds of the code breakers were women, which echoes today's challenge of inclusivity and of avoiding bias in AI.
Similarly to the UK, the US has announced that it will create an AI Safety Institute, funded by the National Science Foundation, which will focus on research into the safety of large language models (LLMs). It will work with academia and leading AI companies, paying specific attention to red teaming, safety and cybersecurity measures. NIST will be in charge of developing guidelines, tools, methods, protocols and best practices to facilitate the evolution of industry standards for developing and deploying AI in safe, secure and trustworthy ways.
In a video message, King Charles III closed the session, describing AI as no less important than the discovery of electricity or the splitting of the atom. He stressed the imperative for nations, and for the public and private sectors, to work together on shaping a desirable and safe AI future, with a sense of urgency, unity and collective strength.
On the second day of the summit, the UK Prime Minister, Rishi Sunak, announced the launch of the AI Safety Institute, based at the University of Oxford. The institute will be tasked with testing the safety of new AI technologies. The UK government will provide £25 million in funding for the institute, which will be led by Ian Hogarth; Yoshua Bengio will also play a leading role. The institute will focus on three key areas of research:
- The development of new methods for testing the safety of AI systems.
- The study of the ethical implications of AI.
- The development of educational programs to raise awareness of AI safety issues.
Sunak confirmed that AI companies have agreed to give governments early access to their models to perform safety evaluations. These include leading US enterprises such as Microsoft/OpenAI, Google/DeepMind and AWS/Anthropic. It will be very interesting to see how this will be put in place in practice, and how the confidentiality of source code will be handled.
These high-level discussions among leading personalities were a first step in laying the foundation stone: public institutions are waking up to the extraordinary nature of AI and its ability to disrupt our models of society around the world.
At the end of the summit came an astonishing 50-minute conversation between Rishi Sunak, who for once asked the questions, and Elon Musk, which we advise you to watch: