The UK government has signed a deal with American AI firm Anthropic to help grow the economy.
Anthropic is a public-benefit startup founded in 2021. The government said the partnership is the work of the UK’s new Sovereign AI unit, and will see both sides working closely together to realise the technology’s opportunities.
It added that there would be a continued focus on the responsible development and deployment of AI systems.
This week the UK joined the US in refusing to sign an international declaration – signed by 60 countries including France, China and India – which promotes the inclusive and sustainable development of AI.
It said the statement “did not reflect the UK’s policy positions on opportunity and security”.
The partnership with Anthropic will include sharing insights on how AI can transform public services, improve the lives of citizens and drive scientific breakthroughs.
“We look forward to exploring how Anthropic’s AI assistant Claude could help UK government agencies enhance public services, with the goal of discovering new ways to make vital information and services more efficient and accessible to UK residents,” said Dario Amodei, CEO and co-founder of Anthropic.
The government said the UK will also look to secure further agreements with leading AI companies as a key step towards turbocharging productivity and sparking fresh economic growth.
It also announced that the AI Safety Institute will become the UK AI Security Institute and seek to strengthen protections against the risks AI poses to national security, as well as its use in crime.
Speaking at the Munich Security Conference, Technology Secretary Peter Kyle said the focus will be on tackling issues such as how AI might be used to develop chemical and biological weapons, carry out cyber-attacks and enable crimes such as fraud and child sexual abuse.
The Institute will also partner across government, including with the Defence Science and Technology Laboratory, the Ministry of Defence’s science and technology organisation, to assess the risks posed by frontier AI.
The Institute will also launch a new criminal misuse team which will work jointly with the Home Office to conduct research on a range of crime and security issues which threaten to harm British citizens.
One such area of focus will be the use of AI to generate child sexual abuse images, with the new team exploring methods to prevent abusers from harnessing the technology to carry out their appalling crimes. This will support work announced earlier this month to make it illegal to own AI tools which have been optimised to make images of child sexual abuse.
The announcement comes weeks after the government set out a blueprint for AI ‘to deliver a decade of national renewal’.
“The changes I’m announcing today represent the logical next step in how we approach responsible AI development – helping us to unleash AI and grow the economy as part of our Plan for Change,” said Kyle.
“The work of the AI Security Institute won’t change, but this renewed focus will ensure our citizens – and those of our allies – are protected from those who would look to use AI against our institutions, democratic values, and way of life.
“The main job of any government is ensuring its citizens are safe and protected, and I’m confident the expertise our Institute will be able to bring to bear will ensure the UK is in a stronger position than ever to tackle the threat of those who would look to use this technology against us.”
Chair of the AI Security Institute Ian Hogarth said: “The Institute’s focus from the start has been on security and we’ve built a team of scientists focused on evaluating serious risks to the public.
“Our new criminal misuse team and deepening partnership with the national security community mark the next stage of tackling those risks.”