The launch of ChatGPT in November 2022 sparked imaginations, bringing with it new possibilities and challenges. Such has been the rapid onset of generative AI tools that businesses and users alike are still grappling with how to use them correctly, ethically and safely.

While productivity gains are already being realised, businesses face the challenge of setting sensible usage rules and enforcing them. In the gap between adoption and governance sits ‘shadow AI’.

Shadow IT is the use of hardware or software within an organisation without the organisation’s knowledge or permission. For example, when cloud technology first emerged, employees began using cloud-based services before IT departments understood their benefits and limitations, leading to data privacy concerns, data loss and security breaches.

Fast-forward to today, and there is a clear parallel in employees’ adoption of AI and GenAI before internal rules and industry-wide regulations are in place. This Wild West mentality has resulted in shadow AI – the AI-era successor to shadow IT, whereby IT departments have no visibility or control over the AI applications and services employees use.

A recent study from Cyberhaven indicates that between March 2023 and March 2024, the amount of corporate data employees put into AI tools increased by 485%. More than a quarter of that data (27%) was classified as sensitive, putting it at risk and creating yet more IT headaches.

Sensitive data includes personally identifiable information (PII), financial data, intellectual property, business operations data, customer data, employee data, and legal documents. Shadow AI is difficult for IT departments to spot or track because many employees access AI tools through personal accounts that IT has never approved.
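One practical starting point for visibility is to mine the network telemetry most organisations already collect. Below is a minimal sketch that counts requests to well-known public GenAI endpoints in a web proxy log; the CSV layout, column names and domain watchlist are illustrative assumptions, not a definitive detection method.

```python
"""Minimal sketch: flag potential shadow-AI usage in a web proxy log.

Assumes (illustratively) a CSV log with 'user' and 'domain' columns;
the watchlist below is a small sample, not an exhaustive list.
"""
import csv
from collections import Counter

# Illustrative watchlist of public GenAI endpoints; maintain your own.
GENAI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "gemini.google.com",
    "claude.ai",
    "copilot.microsoft.com",
}

def flag_genai_usage(log_path: str) -> Counter:
    """Count requests per user to domains on the GenAI watchlist."""
    hits: Counter = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["domain"].strip().lower()
            if domain in GENAI_DOMAINS:
                hits[row["user"]] += 1
    return hits

if __name__ == "__main__":
    for user, count in flag_genai_usage("proxy_log.csv").most_common():
        print(f"{user}: {count} requests to GenAI services")
```

A real deployment would feed results into a SIEM and keep the watchlist current, but even a crude count like this surfaces who is using which services.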

The risks of shadow AI

Businesses and employees have been quick to highlight the benefits of AI, including content creation, data analysis, task automation and customer interactions. But unsanctioned AI and GenAI usage within an organisation carries significant risks.

For example, tools leveraging GenAI may lack robust access controls and encryption, leaving sensitive data vulnerable to breaches and malware attacks. Furthermore, using unapproved AI technologies can lead to the misuse of personal data, breaching regulations and standards such as the EU AI Act, GDPR, HIPAA and PCI DSS, and potentially incurring hefty fines or legal repercussions. Operationally, fragmented data management and poor resource allocation breed inefficiency, resulting in data inconsistencies, inaccurate information and duplicated effort.
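To make the data-leakage risk concrete, here is a minimal sketch of a pre-flight check that screens a prompt for obvious PII before it is forwarded to an external GenAI service. The regex patterns and the find_pii helper are illustrative assumptions; they are no substitute for a proper data loss prevention (DLP) tool.

```python
"""Minimal sketch: screen text for obvious PII before it leaves the
organisation for an external GenAI service. The patterns below are
illustrative only; production systems need a proper DLP product.
"""
import re

# Illustrative patterns: email addresses, UK National Insurance
# numbers, and 16-digit card numbers. Real DLP covers far more.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ni_number": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){16}\b"),
}

def find_pii(text: str) -> dict[str, list[str]]:
    """Return matches per category; an empty dict means no hits."""
    found = {}
    for label, pattern in PII_PATTERNS.items():
        matches = pattern.findall(text)
        if matches:
            found[label] = matches
    return found

prompt = "Summarise: contact jane.doe@example.com, card 4111 1111 1111 1111"
hits = find_pii(prompt)
if hits:
    print("Blocked: prompt contains possible PII:", hits)
else:
    print("Prompt appears clean; safe to forward.")
```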

How to protect yourself from ‘shadow AI’

Action & education

The best way for businesses to protect themselves from the misuse of AI and the exposure of sensitive data is to be proactive: create policies and practices, provide IT-approved AI tools, train employees, and develop a culture of compliance, responsibility and awareness. The UK government recently introduced a framework for regulating AI based on the tenets of safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress. This leaves no grey area for the use of shadow AI.

A key way to achieve this is to run educational campaigns that raise awareness of the risks of unauthorised AI technologies. Awareness campaigns, training workshops and information sessions in which IT experts walk through the consequences of unapproved AI tools can tackle these issues, and should include employees at every level of the organisation. With AI legislation moving slowly, good AI practice is ultimately a matter of culture.

To mitigate these risks, organisations need clear and accessible AI policies and guidelines, effectively communicated to employees through various channels like e-learning modules, newsletters, and intranet posts. These materials should include tutorials and FAQs that specifically address the dangers of shadow AI, using real-world examples and case studies – perhaps even highlighting internal incidents where shadow AI caused problems – to drive the point home. Furthermore, upper management should actively endorse these policies and best practices, leading by example and fostering a culture of responsible AI usage.

Shadow AI is a new challenge for business leaders, but the fundamental best practices remain the same as those for tackling shadow IT. By taking proactive measures to put AI usage policies in place, businesses ensure employees understand the issues associated with shadow AI, and the IT department can empower and protect the company and its staff, fostering a culture of security and compliance.
