The UK government has pledged £8.5 million of grant funding for technologies which protect society against threats from artificial intelligence.

At the AI Seoul Summit, which is co-hosted by the UK and the Republic of Korea, Technology Secretary Michelle Donelan said researchers will be backed financially to study how to protect society from AI risks such as deepfakes and cyberattacks, as well as how to harness the technology's benefits.

The most promising proposals will be developed into longer-term projects and could receive further funding.

The programme will be led within the UK government's AI Safety Institute by Shahar Avin, an AI safety researcher joining the Institute on secondment, and Christopher Summerfield, the Institute's Research Director.

The research programme will be delivered in partnership with UK Research and Innovation and The Alan Turing Institute, and the UK AI Safety Institute will aim to collaborate with its counterpart institutes internationally. Applicants will need to be based in the UK but will be encouraged to work with researchers from around the world.

The UK government’s pioneering AI Safety Institute is leading the world in the testing and evaluation of AI models, advancing the cause of safe and trustworthy AI. Earlier this week, the AI Safety Institute released its first set of public results from tests of AI models. It also announced a new office in the US and a partnership with the Canadian AI Safety Institute – building on a landmark agreement with the US earlier this year.


The new grants programme is designed to broaden the Institute’s remit to include the emerging field of ‘systemic AI safety’, which aims to understand how to mitigate the impacts of AI at a societal level and study how our institutions, systems and infrastructure can adapt to the transformations this technology has brought about.   

Examples of proposals within scope would include ideas on how to curb the spread of fake images and misinformation by intervening on the platforms that spread them, rather than on the AI models that generate them.

“When the UK launched the world’s first AI Safety Institute last year, we committed to achieving an ambitious yet urgent mission to reap the positive benefits of AI by advancing the cause of AI safety,” said Donelan.

“With evaluation systems for AI models now in place, Phase 2 of my plan to safely harness the opportunities of AI needs to be about making AI safe across the whole of society. 

“This is exactly what we are making possible with this funding, which will allow our Institute to partner with academia and industry to ensure we continue to be proactive in developing new approaches that can help us ensure AI continues to be a transformative force for good. 

“I am acutely aware that we can only achieve this momentous challenge by tapping into a broad and diverse pool of talent and disciplines, and forging ahead with new approaches that push the limit of existing knowledge and methodologies.”

Summerfield added: “This new programme of grants is a major step towards ensuring that AI is deployed safely into society.

“We need to think carefully about how to adapt our infrastructure and systems for a new world in which AI is embedded in everything we do. This programme is designed to generate a huge body of ideas for how to tackle this problem, and to help make sure great ideas can be put into practice.”
