UK Information Commissioner John Edwards has warned that 2024 could be the year people lose trust in artificial intelligence.

Edwards, who leads the independent regulator for data protection and privacy, called on tech developers to embed privacy into their products from the very start.

Referring to research showing that people are growing increasingly wary of AI, Edwards set out the steps the ICO has taken to support businesses using the technology and made it clear that there are no excuses for ‘bad actors’ who do not comply with data protection laws.

Delivering the keynote address at techUK’s Digital Ethics Summit 2023, he warned: “If people don’t trust AI, then they’re less likely to use it, resulting in reduced benefits and less growth or innovation in society as a whole. 

“This needs addressing. 2024 cannot be the year that consumers lose trust in AI.”

Edwards acknowledged the important role AI plays for business, from driving innovation and improving customer service to resolving common technical issues more quickly, but said these benefits cannot come at the expense of people’s privacy.

Where the ICO finds errors, it will take action, he warned: “Our existing regulatory framework allows for firm and robust regulatory intervention as well as innovation.”


He added: “We know there are bad actors out there who aren’t respecting people’s information and who are using technology like AI to gain an unfair advantage over their competitors. Our message to those organisations is clear – non-compliance with data protection will not be profitable. 

“Persistent misuse of customers’ information, or misuse of AI in these situations, in order to gain a commercial advantage over others will always be viewed negatively by my office. Where appropriate, we will seek to impose fines commensurate with the ill-gotten gains achieved through non-compliance.”

Edwards also set out his expectations of the industry and highlighted the help already available from his office, including AI guidance, an innovation advice service and a sandbox.

“Privacy and AI go hand in hand – there is no either/or here. You cannot expect to utilise AI in your products or services without considering privacy, data protection and how you will safeguard people’s rights. 

“There are no excuses for not ensuring that people’s personal information is protected if you are using AI systems, products or services.”
