Leading figures in artificial intelligence have called for a six-month pause on the development of powerful AI systems, citing concerns over “profound risks to society and humanity”.
A letter issued by the non-profit Future of Life Institute has been signed by more than 1,300 people, including Tesla CEO Elon Musk, Apple co-founder Steve Wozniak and AI research giants Yoshua Bengio and Stuart Russell, plus engineers and researchers at Microsoft, Google, DeepMind, Amazon and Meta.
Following the spectacular rise of OpenAI’s ChatGPT – launched late last year and capable of writing impressive copy and code in response to simple prompts – Google is promoting its own tool, Bard, while other companies race to gain a foothold as more businesses look to harness the power of automation.
The technology’s increasing adoption has raised concerns that AI trained on data drawn from a narrow cross-section of society could embed biases – for example, in assessing loan or mortgage applications – as well as concerns over the future risks it could pose to people’s privacy, human rights and safety.
The letter from the Future of Life Institute – funded by the Musk Foundation, London-based altruism group Founders Pledge and the Silicon Valley Community Foundation – calls for a halt to the “dangerous race” of AI system development.
“AI systems with human-competitive intelligence can pose profound risks to society and humanity,” it read. “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.
“AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts.
“These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.”
No dedicated regulator or legislation
A UK government whitepaper, published yesterday, set out a new approach to regulating AI to drive ‘responsible innovation’ and ‘maintain public trust in the technology’.
The whitepaper empowers existing regulators – such as the Health and Safety Executive, the Equality and Human Rights Commission and the Competition and Markets Authority – to come up with tailored, context-specific approaches suited to the way AI is used in their sectors, and sets out five principles: safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress.
Technology lawyer Ashley Williams, partner at Mishcon de Reya, criticised the whitepaper for not introducing new legislation or a dedicated regulator.
“It articulates the headache of allocating responsibility across existing supply chain actors within the AI lifecycle, and therefore proposes not to intervene at this stage,” he reflected. “Contracts will need to continue to do the heavy lifting of allocating responsibility.
“For many, this will stand in stark contrast to the EU’s rule-based approach. The proposed UK approach has some upsides, such as flexibility balanced with a pragmatic approach, but several downsides, most notably the continuing lack of certainty.
“For the UK approach to really work, it is important to acknowledge that some regulators will be under-resourced and lack AI experience to really deliver. Others may be too heavy-handed in their approach without a clear steer on how they should implement the framework.
“Supporting regulators will be critical in making this approach workable and ensuring specific sector guidance is issued in a timely manner with real cooperation across the regulators. Regulators will be supported by a centralised function which will require substantive investment in terms of resource and expertise.”
Edward Machin, a senior lawyer in the data, privacy and cybersecurity practice at Ropes & Gray, said he was unsurprised by the “pro-business” approach, which he sees as consistent with the government’s “post-Brexit policy-making”.
“The government’s decision to rely on a regulatory framework rather than introducing new legislation governing the use of AI means that the UK risks swimming against the tide of global sentiment, as Europe, China and the United States all start to put in place strict laws that assess AI technologies based on the risks they pose to individuals rather than their business benefits,” he said.
Innovation boost
Andreas Rindler, managing director of private equity at BCG, was more positive.
“There is a clear need for continued investment into specific emerging technology areas such as artificial intelligence. This is critical if the UK’s tech sector wants to compete as these nascent technologies will hold the answers to many of the future’s challenges,” he said.
“It is exciting to see the UK push for growth. Avoiding giving responsibility for AI governance to a single regulator means businesses can really push for innovation and take a better approach – enabling the sector to develop at pace with fair checks and balances – which will also maintain public trust.
“Technology innovations like ChatGPT or Bard AI can radically transform business models and whole industry sectors, and the UK needs to have a seat at the table to co-shape the future of our industries.”
Iván de Prado Alonso, head of AI at image bank Freepik, added: “Rather than imposing stringent regulation too soon, the initiative will provide the government with more flexibility to safeguard a technology delivering real social and economic benefits to people across the UK.
“ChatGPT and other AI tools dominating current headlines are tremendously powerful, with their widespread integration into a range of other technologies inevitable. In the creative sphere, we are seeing boundless opportunities for people to easily generate original images. For small businesses and aspiring entrepreneurs, their work becomes cheaper and easier.
“It is therefore not yet time for a top-down approach to AI regulation while its true benefits are still being realised.”