If you use ChatGPT or other artificial intelligence (AI) tools for work-related tasks, does your boss know? If the answer is ‘no’, you’re not alone.

The potential impact of AI on our lives is one of the most discussed and debated topics of our time. Since the launch of ChatGPT in November 2022, the development of AI tools for business and personal use has accelerated.

Companies are examining the resulting wave of new and emerging AI services to determine which can help cut costs, for example by automating more arduous tasks.

With more and more AI tools and applications coming to market, businesses need to move quickly to ensure they have the policies and procedures in place to benefit from this technology while minimising the risks it brings.

Shadow AI

Unsanctioned use of AI tools at work, known as shadow AI, is a growing issue. Essentially unauthorised technology implemented without any controls in place, shadow AI could pose security threats through potential data breaches, undermine the quality or consistency of work delivered, create inconsistencies in operations and even violate industry regulations.

To address this challenge, company rules and procedures need to keep pace with the rise of AI, and employees need to be educated about what is and is not permissible. Key to this is the development and communication of clear policies and guidelines concerning the use of AI within business operations. This starts with an AI use policy.

An AI use policy is designed to ensure that any AI technology is used within your business in a safe, reliable and appropriate manner that minimises risk. It should be developed to inform and guide your employees on how AI can be used in your business.

It would be impractical to list all potential rules for using AI in the workplace here, but there are a few bases that any AI use policy must cover. 

Purpose and scope

In the AI use policy’s introduction and purpose section(s), it is always helpful to set the scene. 

Define the overall context, purpose and scope of the policy, including which staff and tasks it applies to. 

Are there any related company policies that could be referenced?

Approval process

List any pre-approved AI tools (e.g. OpenAI’s ChatGPT, Google Gemini) and consider including tools built on them, such as Microsoft Copilot, which is powered by OpenAI’s GPT models.

What is the process for approving other or new AI tools? Consider setting out the relevant evaluation criteria in the policy, for example a high-level minimum standard such as ‘the AI tool should be legally compliant, transparent, accountable, trustworthy, safe, secure and ethical’.

Other things to consider include evaluating vendors, reviewing terms and conditions, and conducting a risk-benefit analysis.


Rules of use

This is perhaps the most important part of the policy for the majority of your employees: set out specific dos and don’ts for inputs and outputs to ensure compliance with data security, privacy and ethical standards.

For example, ‘Don’t input any company-confidential, commercially sensitive or proprietary information’, ‘Don’t use AI tools in a way that could inadvertently perpetuate or reinforce bias’ and ‘Don’t input any customer’s or co-worker’s personal data’.

For outputs, guidance can remind staff of the potential for misinformation or ‘hallucinations’ in AI-generated content. Consider rules such as ‘Clearly label any AI-generated content’, ‘Don’t share any output without careful fact-checking’ or ‘Make sure a human has the final say when AI is used to help make a decision that could affect any living person (for example, employees, applicants or customers)’.

Developing an AI use policy will help mitigate the risks of shadow AI, ensuring your business can benefit from the rich rewards of AI while remaining suitably protected and operating within legal and regulatory boundaries.
