The UK’s AI Safety Institute will open a base in San Francisco this summer.
The institute, founded a year ago with the promise of an initial £100 million in funding, currently has 30 staff dedicated to ‘minimising surprise to the UK and humanity from rapid and unexpected advances in artificial intelligence’.
As well as launching its first overseas base – which Technology Secretary Michelle Donelan said would tap into Silicon Valley’s tech talent – it will expand its London headquarters.
“This expansion represents British leadership in AI in action,” said Donelan. “It is a pivotal moment in the UK’s ability to study both the risks and potential of AI from a global lens, strengthening our partnership with the US and paving the way for other countries to tap into our expertise as we continue to lead the world on AI safety.
“Since the Prime Minister and I founded the AI Safety Institute, it has grown from strength to strength and in just over a year, here in London, we have built the world’s leading Government AI research team, attracting top talent from the UK and beyond.
“Opening our doors overseas and building on our alliance with the US is central to my plan to set new, international standards on AI safety which we will discuss at the Seoul Summit this week.”
The AI Safety Institute has already agreed an alliance with the United States on AI safety.
Following the first AI Safety Summit at Bletchley Park last year, the second two-day summit begins today in South Korea and is co-hosted by the UK.
Research backed by more than 30 nations, as well as representatives from the EU and the UN, was published last week, setting out the impact AI could have if governments and wider society fail to deepen their collaboration on AI safety.
The first iteration of the International Scientific Report on the Safety of Advanced AI was one of the key commitments to emerge from the Bletchley Park discussions. It aims to give policymakers across the globe a single source of information to inform their approaches to AI safety.
The report recognises that advanced AI can be used to boost wellbeing, prosperity and new scientific breakthroughs, but notes that current and future developments could result in harm.
It also highlights a lack of universal agreement among AI experts on a range of topics, including both the state of current AI capabilities and how these could evolve over time. The report further explores differing opinions on the likelihood of extreme risks to society, such as large-scale unemployment, AI-enabled terrorism, and a loss of control over the technology.
The interim publication focuses on advanced ‘general-purpose’ AI, including state-of-the-art systems that can generate text and images and make automated decisions. The final report is expected to be published in time for the AI Action Summit, due to be hosted by France, and will now broaden its evidence base to include industry, civil society, and a wide range of representatives from the AI community.
Professor Yoshua Bengio, chair of the International Scientific Report on the Safety of Advanced AI, said: “This report summarises the existing scientific evidence on AI safety to date, and the work led by a broad swath of scientists and panel members from 30 nations, the EU and the UN over the past six months will now help inform the next chapter of discussions of policy makers at the AI Seoul Summit and beyond.
“When used, developed and regulated responsibly, AI has incredible potential to be a force for positive transformative change in almost every aspect of our lives. However, because of the magnitude of impacts, the dual use and the uncertainty of future trajectories, it is incumbent on all of us to work together to mitigate the associated risks in order to be able to fully reap these benefits.
“Governments, academia, and the wider society need to continue to advance the AI safety agenda to ensure we can all harness AI safely, responsibly, and successfully.”