The world’s first AI Safety Institute has launched in the UK, tasked with testing the safety of emerging types of artificial intelligence.

The new global hub was backed by leading AI companies and nations as the first global AI Safety Summit concluded at Bletchley Park.

The UK government said it had spent four months building a team that can evaluate the risks of ‘frontier’ AI models. The Frontier AI Taskforce will now evolve into the AI Safety Institute, with Ian Hogarth continuing as its chair.

The external advisory board for the Taskforce, made up of industry heavyweights from national security to computer science, will now advise the new global hub.


The Institute will carefully test new types of frontier AI both before and after release to address their potentially harmful capabilities, exploring the full range of risks, from social harms such as bias and misinformation to unlikely but extreme risks, such as humanity losing control of AI entirely.

In undertaking this research, the AI Safety Institute will look to work closely with the Alan Turing Institute, the UK’s national institute for data science and AI.

World leaders and major AI companies expressed their support for the Institute. Leading researchers at the Alan Turing Institute and Imperial College London have also welcomed the Institute’s launch, alongside representatives of the tech sector in techUK and the Startup Coalition.

The UK has already agreed two partnerships to collaborate on AI safety testing: one with the US AI Safety Institute and one with the government of Singapore.

“Our AI Safety Institute will act as a global hub on AI safety, leading on vital research into the capabilities and risks of this fast-moving technology,” said Prime Minister Rishi Sunak.

“It is fantastic to see such support from global partners and the AI companies themselves to work together so we can ensure AI develops safely for the benefit of all our people. This is the right approach for the long-term interests of the UK.”

With powerful new models, whose capabilities may not be fully understood, expected to be released next year, the Institute’s first task will be to quickly put in place the processes and systems to test them before they launch, including open-source models.

From research informing UK and international policymaking to technical tools for governance and regulation, such as the ability to analyse the data used to train these systems for bias, the government said it was taking action to make sure AI developers are not ‘marking their own homework’ when it comes to safety.

In a statement on testing, governments and AI companies recognised that both have a crucial role to play in testing the next generation of AI models.


The countries represented at Bletchley have also agreed to support Professor Yoshua Bengio, a Turing Award-winning AI academic and member of the UN’s Scientific Advisory Board, in leading the first-ever frontier AI ‘State of the Science’ report. This will provide a scientific assessment of existing research on the risks and capabilities of frontier AI and set out priority areas for further research to inform future work on AI safety.

The findings of the report will support future AI Safety Summits, plans for which are already in motion. The Republic of Korea has agreed to co-host a mini virtual summit on AI in the next six months, and France will host the next in-person summit a year from now.

AI Safety Institute chair Hogarth said: “The support of international governments and companies is an important validation of the work we’ll be carrying out to advance AI safety and ensure its responsible development.

“Through the AI Safety Institute, we will play an important role in rallying the global community to address the challenges of this fast-moving technology.”

Bengio added: “The safe and responsible development of AI is an issue which concerns every one of us. We have seen massive investment into improving AI capabilities, but not nearly enough investment into protecting the public, whether in terms of AI safety research or in terms of governance to make sure that AI is developed for the benefit of all. 

“I am pleased to support the much-needed international coordination of managing AI safety, by working with colleagues from around the world to present the very latest evidence on this vitally important issue.”
