Cybersecurity

The European Union is pressing ahead with its flagship AI Act governing the safe and transparent application of artificial intelligence.

The EU is proposing a full ban on the use of AI for biometric surveillance, emotion recognition and predictive policing, while generative AI systems such as ChatGPT would have to disclose that the content they produce is AI-generated.

AI systems used to influence voters in elections are also considered to be high-risk.

The rules aim to promote the uptake of human-centric and trustworthy AI and to protect health, safety, fundamental rights and democracy from its harmful effects, the European Parliament said.

It added that MEPs had adopted this negotiating position with 499 votes in favour, 28 against and 93 abstentions, ahead of talks with EU member states on the final shape of the law.

“The rules would ensure that AI developed and used in Europe is fully in line with EU rights and values including human oversight, safety, privacy, transparency, non-discrimination and social and environmental wellbeing,” it said.

“AI systems with an unacceptable level of risk to people’s safety would therefore be prohibited, such as those used for social scoring (classifying people based on their social behaviour or personal characteristics).”


MEPs are proposing bans on what they deem to be intrusive and discriminatory uses of AI, including:

- ‘real-time’ remote biometric identification systems in publicly accessible spaces;
- ‘post’ remote biometric identification systems, with the only exception being law enforcement for the prosecution of serious crimes, and only after judicial authorisation;
- biometric categorisation systems using sensitive characteristics (e.g. gender, race, ethnicity, citizenship status, religion, political orientation);
- predictive policing systems (based on profiling, location or past criminal behaviour);
- emotion recognition systems in law enforcement, border management, the workplace and educational institutions; and
- untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases, which violates human rights and the right to privacy.

AI systems used to influence voters and the outcome of elections, as well as the recommender systems used by social media platforms with more than 45 million users, were added to the high-risk list.

Generative AI providers would also have to help distinguish ‘deep-fake’ images from real ones and ensure safeguards against generating illegal content. 
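The Act does not prescribe a technical mechanism for this, but one common way to meet a disclosure duty of this kind is to attach a machine-readable label to generated output. The following is a minimal illustrative sketch in Python; the function and field names are hypothetical, not taken from the Act or any named standard.

```python
# Illustrative sketch only: attach a machine-readable "AI-generated"
# disclosure to a piece of generated content. Field names are
# hypothetical, not drawn from the AI Act or any specific standard.
import json
from datetime import datetime, timezone

def label_ai_content(text: str, model_name: str) -> dict:
    """Wrap generated text in a record carrying an explicit disclosure."""
    return {
        "content": text,
        "disclosure": {
            "ai_generated": True,          # explicit AI-generated flag
            "generator": model_name,       # which system produced the content
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }

if __name__ == "__main__":
    record = label_ai_content("Example model output.", "example-llm-1")
    print(json.dumps(record, indent=2))
```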

To boost AI innovation and support SMEs, MEPs added exemptions for research activities and AI components provided under open-source licences. 

The new law promotes so-called regulatory sandboxes: controlled real-life environments established by public authorities to test AI before it is deployed.

Francesca Porter, general counsel at identification tech provider Onfido, welcomed the approach.

“The decision to exclude secure and fraud preventative biometric solutions from the ‘high risk’ category will enable the deployment of AI systems that do not pose a threat to citizens and businesses but instead make life easier,” she explained.

Meanwhile, the UK regulator, the Information Commissioner’s Office, is calling on businesses to address the privacy risks generative AI can bring before rushing to adopt the technology, and is promising tougher checks on whether organisations comply with data protection laws.
