The UK government’s AI Safety Summit raised the profile of AI in the UK and internationally. The primary focus was on biosecurity and cybersecurity as the existential threats posed by GenAI, particularly the frontier models developed by large technology firms, primarily in the US. 

In the same week, Kamala Harris’s London speech on AI safety clearly affirmed the US definition of a second, and significantly wider, category of existential risks, one that explicitly included bias, discrimination, fake news and misinformation. With over 2 billion people voting in the next 12 months, misinformation poses a significant and urgent threat, particularly for the top three democracies going to the polls this year.

However, there is a third category of AI existential risk that is particularly important for the UK, given its lack of productivity growth over the last decade, a stagnation that stems in part from a lack of concerted focus on AI adoption.

Government action needed

The UK now ranks fourth globally in AI, largely on the strength of its talent pool, R&D capabilities and AI startup community. However, it ranks much lower on operating environment, infrastructure and government strategy.

The government has taken some noteworthy steps recently, with a renewed emphasis on hiring private sector talent from AI, data and digital backgrounds, appointing an AI Minister in every government department, hiring an AI ‘hit squad’ of 30 people and holding AI-focused hackathons, including one aimed at improving call centre efficiency and getting better value from public contracts. Some businesses have seen reductions of 25% or more in call centre times from AI initiatives.

However, these are small steps in the context of the potential opportunity across government departments and the wider UK economy. Further actions should be considered, including the appointment of a national head of AI adoption to scale AI programmatically, mirroring the temporary appointment of the government’s head of AI safety.

Bigger prize

It is also critical to embrace a broader definition of AI value, one that extends beyond the current narrow focus on AI startups and tech company valuations. There is a much larger prize, for the public and private sectors alike, in driving faster adoption across the health service, the education sector, policing, HMRC, and UK plc as a whole.

Some enlightened FTSE chairs – Pets at Home, for example – have recognised the importance of diversifying their boards and have specifically targeted non-executive directors (NEDs) with applied AI expertise. More UK private and public sector boards need to follow suit and pay greater attention to AI adoption, in addition to the commendable focus on AI safety.


SMEs

This is also highly relevant to small and medium-sized enterprises (SMEs). The G7 Productive Business Index, published in summer 2023, compares the productivity of SMEs across G7 countries and placed the UK second from the bottom. It found that “UK businesses are under-indexing on performance, and investment and improvement in capabilities linked to productivity… If every small employing business were able to maintain 1% improvements over a five-year period, this would add £94 billion to the UK economy annually. That’s the equivalent of over half the annual budget for NHS England”.

In the healthcare sector, some estimates suggest a mounting shortage of over 350,000 healthcare professionals in the next few years. AI presents opportunities in multiple areas, such as augmenting existing tasks like note-taking and summarisation, developing self-care capabilities, or assisting in radiology, where staff shortages are acute. Startups like Suvera are also helping to improve productivity across GP practices by managing diabetes and blood pressure patients on their behalf using AI-powered platforms.

AI acceptance

AI literacy is another key ingredient and is positively correlated with AI acceptance: people who have a better understanding of AI are more likely to accept it. Including all members of civil society will accelerate adoption and minimise the risks of bias and discrimination.

One of the interesting observations about AI is that once it becomes accepted by the public, it is no longer viewed as AI: think of how spam filters, recommendation engines, search engines and other AI-driven products such as transcription (subtitles) and translation tools are no longer thought of as AI.

Whilst caution is necessary with brittle GenAI, which has garnered the most attention recently and is prone to hallucinations, many other AI technologies are safely driving major productivity gains, and many of these have been in existence for a considerable time. GenAI is also increasingly being safely deployed across many sectors and will drive enormous long-term value.

Hopefully the next UK AI Summit will also focus on productivity, growth, value delivery and how AI can augment, rather than replace, roles to improve the UK’s competitiveness for years to come.
