The government’s AI whitepaper contains a lot of political statements.
The first page talks about AI helping ‘NHS heroes’; by the second, AI is promising more bobbies on the beat. Including page titles, the term ‘pro-innovation’ appears 129 times. But there is, as yet, not enough thought or substance behind the headlines.
The whitepaper says that there will not be an independent AI regulator; this, it is said, would ‘introduce complexity and confusion’. Instead, existing regulators, such as the Information Commissioner’s Office, the Equality and Human Rights Commission and the Financial Conduct Authority, will be tasked with applying five principles: safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress.
As principles these are unobjectionable, but without detail they are so vague as to be meaningless: each is as certain and definitive as the length of a piece of string.
The whitepaper also says that a central risk function will be established ‘to bring coherence to the way regulators and industry think about AI risk’. However, no new statute is currently envisaged. Businesses thrive on certainty; to encourage UK companies to adopt AI, we need more clarity about the rules.
To illustrate this, let us take an example from the whitepaper of an InsurTech wanting to launch AI-driven insurance products. The whitepaper says that the current requirements are a ‘patchwork’, but that in the future, ‘we could expect to see joint guidance produced collaboratively’ by various regulators.
In our experience, advising on current laws is possible. New technologies always pose new questions, but that is nothing new. The bigger problem is that everyone expects the current laws to evolve, and nobody knows how.
All this paper says is that regulators will work together, on the basis of vague principles, to produce new guidance on AI. But advising a client that in the future it will need to be ‘fair’ or show ‘appropriate transparency’ when using AI offers no practical help.
Businesses in the EU are getting greater clarity. The EU’s draft AI Act will not be perfect, but it sets out obligations on AI providers and users in much greater detail. Most obligations fall only on ‘high-risk’ AI: so, for example, risk assessment and pricing of life and health insurance is classified as high-risk, while other insurance products may not be.
The obligations relating to ‘high-risk’ AI are set out in some detail. The classifications and obligations will evolve, and some requirements will require judgment calls; but at least the broad principles are there.
The UK has been slow about updating laws relating to AI. AIs are trained on data; GPT-3 was trained on 45 terabytes of data, scraped from the Internet, Wikipedia, books and elsewhere. Training will require data to be copied; and copying a copyright work without a licence or defence is infringement.
Various jurisdictions, including the EU, have implemented a ‘text and data mining’ exception to allow AIs to be trained without infringing. The UK does not have one, so an AI lab wanting to train a model may currently choose to do so in Europe rather than in the UK. Last summer, the government said that it would implement a text and data mining exception more permissive than the EU’s. It has since rowed back, and the latest whitepaper does not mention it. Some clarity on this soon would help.
As another example, the UK is doing some brilliant things with autonomous vehicles – we have some genuinely innovative businesses. The Law Commission published a report last year which called for a new Automated Vehicle Act. This was welcomed by industry. However, no bill has yet been brought to Parliament, apparently due to a shortage of parliamentary time.
AI is changing the world, and it raises complex and fundamental questions. Sector experts like Yoshua Bengio and Gary Marcus say that AI research should be paused whilst risks are assessed; Yann LeCun and Bill Gates disagree. Goldman Sachs warns that 300m jobs are at risk; Christopher Pissarides, a Nobel Prize winner, says that it may allow a four-day week.
There are a host of complex challenges and opportunities to come. We welcome the government’s commitment to engaging with and promoting this sector and the potential it brings. But we need new wisdom for this new age; amid such radical change, careful but rapid investment of thought, time and expertise is required.
A genuinely pro-innovation approach requires new laws and guidance to allow clarity whilst protecting consumers.