I was recently asked whether artificial intelligence poses a cybersecurity threat, or whether AI is instead a solution to cybersecurity vulnerabilities.
The truth is that AI is not merely one or the other. AI is, in fact, both a threat to and a solution for today’s cybersecurity concerns.
AI commonly appears in various forms, including the algorithms that power Google searches, Netflix recommendations and Amazon purchases. But AI can also be used for evil. It is the driving force and technology behind cyberattacks such as ransomware.
Unfortunately, many organisations don’t appreciate the vulnerabilities that come with leveraging AI, failing to adequately consider these risks or to integrate cybersecurity into their overall strategies. For many, cybersecurity is simply an afterthought rather than a well-planned defensive strategy.
To protect their organisations, CEOs and other key leaders need to understand the risks associated with using AI, as well as learn how to leverage the technology for good. This can be accomplished by taking the five steps outlined below.
Employ ‘Human-Machine Teaming’
AI is a powerful tool, but it cannot solve every problem, nor can it completely replace humans. My number one recommendation for organisations leveraging AI is to employ ‘human-machine teaming’ to get the most out of each.
Humans outperform computers and machines on some tasks such as judgement, common sense, and leadership. However, machines outperform humans on other tasks such as digesting large quantities of data, rapid computation and completing boring repetitive tasks. The key to successfully adopting AI is to combine humans and machines in a way that leverages the respective strengths of each.
Build multi-disciplinary AI teams
Leaders should also build multi-disciplinary AI teams. These teams should have executive-level support and include technical experts such as coders and data scientists, as well as lawyers and AI ethicists, to effectively integrate AI into their organisations.
Such teams would enable all parties to provide input from their respective perspectives at all stages in the adoption process. This approach has been leveraged by several large companies such as Microsoft and is a useful model for other organisations adopting AI. It is recommended that CEOs and other leaders empower their Chief Information Security Officers (CISOs) to leverage their cyber expertise.
For further support to companies looking to benefit from AI technology, the UK government’s Centre for Data Ethics and Innovation (CDEI) has published a ‘roadmap’ designed to create an AI assurance industry supporting the introduction of automated analysis, decision-making, and processes. The roadmap also offers companies support and assurance that they are protected should harm occur while they are using AI.
Plan for the worst
Following Russia’s invasion of Ukraine, the UK Secretary of State for Defence recently warned all organisations to strengthen their cyber defences, because evolving intelligence indicates that the Russian Government is exploring options for potential cyberattacks. Every organisation – large and small – must be prepared to respond to disruptive cyber incidents.
Organisations must have a cyber response plan on the shelf, ready to go; if they wait until a cyberattack occurs to develop one, it will be too late. What should this plan include? The answer differs for each company, but the UK’s National Cyber Security Centre (NCSC) provides resources that can help companies prepare for the worst, including significantly expanded services to protect the UK from a record number of online scams. One such scam claimed a British energy company executive in 2019, when he received a phone call from the CEO of his parent company.
The executive recognised the CEO’s voice, and the CEO told him to immediately wire €220,000 to their main supplier. The executive complied. Unfortunately, it wasn’t the CEO on the phone: it was a criminal using deepfake voice impersonation software, and the money was wired to cyber criminals.
Natalie A. Pierce, chair of the labour and employment practice at Gunderson Dettmer, who made a significant contribution to this article, commented that an effective cybersecurity plan requires cybersecurity software powered by AI. Cybercriminals use AI to rapidly identify and exploit vulnerabilities. Law-abiding organisations should use AI-powered cybersecurity tools because only AI can detect vulnerabilities and then design and deploy patches at the speed required to counter these threats.
Popular cybersecurity companies such as SparkCognition, Tenable and UK-based Darktrace all advertise their ability to leverage AI to counter and stay ahead of emerging cyber threats.
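The machine-speed pattern detection such tools advertise can be sketched in miniature. The snippet below is a purely illustrative toy, not any vendor’s method: it flags hosts whose login-failure counts deviate sharply from the fleet-wide baseline using a robust, median-based score. All hostnames and figures are invented.

```python
# Toy illustration of AI-style anomaly detection on security telemetry.
# Real products apply far richer models, but the principle is the same:
# learn a baseline, then flag large deviations faster than a human could.
from statistics import median

def flag_anomalies(counts: dict[str, int], threshold: float = 3.5) -> list[str]:
    """Return hosts whose count deviates from the fleet median by more
    than `threshold` robust (MAD-based) standard scores."""
    values = list(counts.values())
    med = median(values)
    mad = median(abs(v - med) for v in values)  # median absolute deviation
    if mad == 0:  # no spread at all -> nothing stands out
        return []
    # 0.6745 rescales MAD to be comparable to a standard deviation
    return [host for host, c in counts.items()
            if 0.6745 * (c - med) / mad > threshold]

# Hypothetical hourly login-failure counts per host
telemetry = {"web-01": 4, "web-02": 6, "db-01": 5, "vpn-01": 480, "mail-01": 3}
print(flag_anomalies(telemetry))  # ['vpn-01']
```

A median-based score is used here rather than mean and standard deviation because, with only a handful of hosts, a single extreme outlier inflates the standard deviation enough to hide itself.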
Test your cyber response plans – before an incident
Developing a cyber response plan is a great first step, but organisations also need to test the plan. Testing allows the team to identify potential issues and refine the plan before a cyber incident occurs. The test should cover protecting the organisation’s most critical assets in the event of an intrusion, including disconnecting high-impact parts of the network if necessary.
The NCSC recommends that senior management – including the C-Suite and board members – participate in a tabletop exercise to ensure familiarity with how your organisation would manage a major cyber incident affecting not only your company but also companies within your supply chain. These key individuals need to understand the plan and their respective roles within it. Remember the old adage: ‘practice makes perfect.’
Leverage government resources
The NCSC has many resources to help organisations prepare for and avoid cyberattacks, and government agencies can also provide tremendous support in responding to attacks. Assistant Secretary of US Homeland Security Rob Silvers was a keynote speaker at the 2022 Cybersecurity Summit hosted by the Association of Corporate Counsel (ACC) Foundation. He repeatedly emphasised the importance of leveraging government resources, particularly law enforcement, to respond to cyberattacks. The NCSC likewise urges organisations to lower their reporting thresholds and to consult law enforcement about possible decryptors, as security researchers have already broken the encryption algorithms of some ransomware variants. This should encourage companies to overcome their reluctance – perhaps driven by embarrassment or fear of bad publicity – to seek help from law enforcement. In our global economy, companies should leverage this valuable help.
Artificial intelligence is a serious threat in today’s cybersecurity setting, but it can also be used to help resolve cybersecurity concerns. Responsible organisations should leverage both AI and cybersecurity as part of their strategy to modernise and protect themselves. They should develop and rehearse response plans for potential cyberattacks. These daunting tasks are made much easier by assembling in-house experts and leveraging external cybersecurity resources, such as those provided by the NCSC.
Unfortunately, there is no foolproof method to avoid a cyber incident. But careful, proactive planning with a multidisciplinary team can substantially reduce the risk.