Experts have warned of the devastating potential for artificial intelligence to be misused by rogue states, criminals and lone-wolf attackers.
Twenty-five technical and public policy researchers from Cambridge, Oxford and Yale universities, alongside privacy and military experts, said it is plausible that within five years AI will amplify threats to digital, physical and political security by enabling large-scale, finely targeted, highly efficient attacks.
“We all agree there are a lot of positive applications of AI,” said Miles Brundage, a research fellow at Oxford’s Future of Humanity Institute. “There was a gap in the literature around the issue of malicious use.”
The report also highlights AI’s ability to learn to impersonate human faces and speech, a capability that could be used to inflict more political damage than any single physical attack by car or drone.
There is a growing body of academic research on the security risks that AI poses. The report calls on governments, policymakers and technical experts to collaborate to defuse these imminent political and physical dangers.
Since the researchers began the paper in 2017, some of their predictions have already come true: they warned that AI could be used to create realistic fake audio and video of public officials for propaganda purposes, a technique since demonstrated by University of Washington researchers, who produced a highly realistic fake video of President Obama.
“We ultimately ended up with a lot more questions than answers,” said Brundage.