Partner content

Businesses across the UK are adopting AI tools to improve productivity, optimise efficiency, and lower costs. But not all AI tools are created equal, and some are more trustworthy than others.

According to Microsoft research, 71% of employees in the UK have used unapproved consumer AI tools in the workplace. Worse still, more than half of those employees use them every week, and only a third express concern about data privacy.

Using unapproved AI tools, a practice known as ‘shadow AI’, can expose businesses to risks such as critical data leaks, confidentiality breaches, compliance failures, and malware infections. Creating and enforcing an official AI use policy will help keep your business safe.

What Should You Include in Your Business’s AI Use Policy?

Your AI use policy must be relevant to your business and your employees’ needs.

Approved and Banned Tools

Determine the tools that employees can and cannot use at work. For teams new to AI tools, stick to the most popular options (e.g., Google Gemini or Microsoft Copilot).

Large companies have more to lose if their AI products cause problems or controversy, so major AI tools typically include audit logs and data processing agreements. If a tool doesn’t tell you where your data goes, that’s a strong reason to keep it off your approved list.
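One lightweight way to operationalise an approved-tools list is a shared register that gets checked before any new tool is rolled out. The sketch below is a minimal, hypothetical example in Python; the tool names and policy fields are illustrative placeholders, not recommendations:

```python
# Hypothetical approved-tools register. Each entry records the checks your
# review process cares about -- here, audit logs and a signed data
# processing agreement (DPA). All entries are illustrative examples.
APPROVED_TOOLS = {
    "microsoft copilot": {"audit_logs": True, "dpa_signed": True},
    "google gemini": {"audit_logs": True, "dpa_signed": True},
}

def check_tool(name: str) -> str:
    """Return a policy verdict for a requested AI tool."""
    entry = APPROVED_TOOLS.get(name.strip().lower())
    if entry is None:
        return "blocked: not on the approved list"
    if not (entry["audit_logs"] and entry["dpa_signed"]):
        return "blocked: missing audit logs or a data processing agreement"
    return "approved"

print(check_tool("Google Gemini"))  # approved
print(check_tool("SomeChatBot"))    # blocked: not on the approved list
```

Even a simple check like this gives employees a fast, unambiguous answer, which reduces the temptation to quietly use an unvetted tool instead.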

Suitable Use Cases

Specify when employees should and shouldn’t use AI tools. You may not want AI creating lead-generation emails or investor pitches. But you might be happy for employees to draft post-meeting summaries or document outlines instead.

A useful way to frame this for your team is to ask whether the output could cause harm if it were wrong. If the answer is yes, AI is a starting point at best.

Data Privacy and Protection

Employees must never use sensitive data when working with AI tools. Prohibit the use of all information related to intellectual property, client records, financial activity, or other business-critical areas.
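A prohibition like this is easier to follow when there is a simple pre-submission filter employees can run over a prompt before pasting it into any AI tool. The sketch below is a minimal, assumed example; the regex patterns cover only a few obvious identifiers and a real policy would define its own list of sensitive categories:

```python
import re

# Hypothetical pre-submission filter: masks obvious identifiers before a
# prompt is sent to an external AI tool. These patterns are illustrative,
# not exhaustive -- extend them to match whatever your policy treats as
# sensitive (client IDs, project codenames, and so on).
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ACCOUNT_NUMBER": re.compile(r"\b\d{8,16}\b"),
    "UK_PHONE": re.compile(r"\b(?:\+44\s?|0)\d{4}\s?\d{6}\b"),
}

def redact(prompt: str) -> str:
    """Replace each pattern match with a [LABEL] placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact("Invoice query from jane@example.co.uk, account 12345678"))
# Invoice query from [EMAIL], account [ACCOUNT_NUMBER]
```

A filter like this is a safety net, not a substitute for judgement: employees should still ask whether the prompt needs the sensitive detail at all.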

Ethical Use and Fact Checking

Employees should always review AI-generated content before publication to catch inaccurate or harmful wording.

AI may produce text that shows a bias or uses discriminatory language. One study found that certain AI tools used in healthcare appeared to downplay women’s needs more often than men’s.

Education and Management

Train your employees to use AI tools safely and effectively. Your AI use policy should include guidance on using each approved tool or point employees to reliable resources.

How to Stay Safe with AI Tools

Implementing an AI policy lays the foundation for responsible use in your workplace. But you can’t simply create a policy and forget about it; you need to manage AI use actively.

Approve AI Tools with Care

Set up an official routine for choosing and approving AI tools. Use a step-by-step process for researching, testing, and introducing software on a trial basis.

Check AI Tool Usage Regularly

Review employees’ use of AI tools at scheduled times (e.g., once per quarter). Make sure that your team is following your policy, complying with privacy regulations, and not over-relying on AI.

Use Security Software

Do some or all of your employees work remotely? Accessing AI tools over unsecured connections, such as their local coffee shop’s Wi-Fi network, can be dangerous.

The best VPN app for remote teams encrypts traffic and masks your team’s IP addresses, keeping business data private on any network. For example, NordVPN uses AES-256 encryption and the NordLynx protocol (built on WireGuard®) across servers in 111 countries.

Additionally, installing reliable security software will help your employees use AI tools safely. Security applications will monitor system activities for potential threats and vulnerabilities.

Check Each AI Tool’s Privacy Features

Before you integrate an AI tool into your processes, review its privacy features. You should be aware of how information is used and stored, and what measures you can take to avoid leaks.

Getting Started Fast: AI Use Policy Checklist

Even before you have an official policy in place, you can reduce employee reliance on shadow AI and prevent data leakage.

  • Audit: Find out what AI tools your employees currently use. To encourage honest answers, keep the exercise transparent and judgement-free.
  • Prohibit: Tell employees which tools they can’t use and why. Clarify the repercussions of flouting the rules.
  • Verify: If you want to use AI tools before your policy is in place, choose one or two with solid reviews. Check them carefully to verify their reliability.
  • Manage: Only employees who need to use AI tools should have access to them, at least initially.
  • Learn: Talk to department heads or all team members to assess how AI tools can make their work more efficient. Use their feedback to build a relevant and fair policy.

Putting Your AI Use Policy into Action

Creating your AI use policy can transform how your employees work. The best AI software can help teams work smarter and achieve more in less time, so research AI tools in detail to determine which options will suit your business best. The integration process may take time, but a clear policy will make for a smoother, safer transition.