AI’s takeover is well underway, with a recent survey showing that nearly half (46%) of knowledge workers use personal AI tools in their roles. Some do so because their IT team doesn’t offer AI tools, others because they want their own choice of tools, but for many it’s because their company has banned external AI.
This isn’t uncommon. Just recently, a UK law firm blocked general access to AI tools because “much of the usage was not in line with its AI policy”.
Modern AI covers a wide range of tools and uses, many of which are low risk and do not need strict policing in a one-size-fits-all policy approach.
Clamping down on AI without nuance can trigger significant risks for employers, including a breakdown of trust with employees, restricted productivity and falling behind industry competitors. It could also leave the workforce under-equipped to meet skill demands: a recent survey found that just 33% of UK business leaders feel confident in their organisation’s level of AI proficiency, compared with countries such as India (49%) and France (55%).
To combat potential skill shortages and ensure AI is implemented successfully, the emphasis must be on establishing ‘what’ AI is used for, ‘why’, and ‘who’ is responsible for its creation and delivery.
AI’s industry takeover
Industries are leveraging AI across a variety of functions, with implementation of AI in business typically falling into four distinct categories, each with its own considerations and impact.
The first category involves AI as a finishing tool for human endeavours. Businesses are increasingly using AI to assist in researching, drafting, refining, and proofreading communications and deliverables, which allows employees to focus on higher-value work.
Secondly, using AI to handle routine processes such as document processing or data analysis enables businesses to reduce human error, and free up valuable human resources. For example, in healthcare, AI is being used to collect and segment clinical trial data and patient records, which provides insights for personalised treatment plans.
The third category involves employee management applications. AI tools are being deployed to optimise workforce allocation, evaluate performance, monitor time and attendance, and match employees to projects based on skills and availability. These systems offer the promise of more objective, data-driven management decisions.
The fourth and perhaps most ambitious application is the creation of entirely new products or services powered by AI. Businesses are leveraging machine learning, natural language processing, and predictive analytics to develop innovative offerings that would be impossible without artificial intelligence capabilities.
From decision to deployment
Despite its transformative potential, successful AI implementation requires careful consideration of governance, ethics and practicalities. The question of who should be responsible for AI policy is paramount and varies based on the specific use case.
For AI used to assist in deliverable production, responsibility should lie with a cross-functional business strategy group. While the CTO or CIO has an important voice, technology leaders shouldn’t be the only decision-makers. As demonstrated by law firm Hill Dickinson’s approach, delivery, technology and data protection functions need to reach a unified position to balance competitive advantage with the protection of valuable intellectual property.
When AI is deployed to automate tasks involving personal information, policy should be co-owned by C-suite executives and the organisation’s Data Privacy Officer, alongside the cross-functional strategy team. Businesses must establish a legitimate basis for processing data with AI and implement safeguards against the weaknesses inherent in automated decision-making.
For workforce management applications, employee representation becomes crucial. AI policy in this domain should be consultative, with impacts transparently communicated to the workforce – this is crucial for maintaining workforce culture.
When developing new AI-powered products or services, the technical expertise of the CIO or CTO must be balanced with strong involvement from compliance and safety officers. This includes data protection officers and legal advisors who can ensure that AI products comply with industry regulations and broader legal frameworks.
Beyond governance considerations, businesses must justify their decision to use AI based on outcomes and uses rather than ‘doing it because everyone else is’.
The selection of appropriate tools is equally important. Organisations should conduct a thorough cost-benefit analysis that considers not just potential efficiencies but also new risks that AI tools introduce.
AI tool security requirements
When selecting the best tools for an organisation, it’s important to also carefully categorise AI tools based on security requirements.
For example, tools handling sensitive data such as financial records, customer information, or trade secrets demand rigorous security protocols. AI systems used in data analysis, business intelligence, and content creation require comprehensive safeguards including data encryption, strictly defined and applied access controls, content validation, and continuous monitoring for potential bias or information leaks.
Additionally, cybersecurity AI tools, which protect against threats and analyse internal security logs, necessitate strict access controls and integration with security operations centres. Similarly, AI applications in human resources, legal workflows, and financial risk management must adhere to stringent privacy laws, regulatory guidelines, and ethical AI principles to prevent unauthorised data access and ensure compliance.
On the other hand, some AI tools require lighter security controls due to their less sensitive nature. Productivity tools such as text summarisers and grammar correction services, internal knowledge management systems, and code assistance platforms typically need moderate security measures.
These tools, while valuable for enhancing organisational efficiency, pose lower risks compared to systems dealing with personal identifiable information (PII) or critical business decisions.
However, even these lower-risk AI tools should implement basic access controls and ethical considerations to prevent potential vulnerabilities. Therefore, organisations should still conduct thorough risk assessments, understanding that the security approach should be tailored to the specific use case and the potential impact of each AI tool within their company.
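The tiered approach described above can be made concrete in an internal tooling catalogue. The sketch below is a minimal, hypothetical illustration in Python; the tier names and control labels are assumptions for illustration, not a prescribed standard:

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    HIGH = "high"          # sensitive, proprietary, or regulated data
    MODERATE = "moderate"  # internal productivity, code assistance
    LOW = "low"            # general, non-sensitive tasks


@dataclass
class AIToolPolicy:
    name: str
    tier: RiskTier

    def required_controls(self) -> list[str]:
        # Baseline controls apply to every tool, even lower-risk ones.
        controls = ["access_controls", "usage_logging"]
        if self.tier is RiskTier.MODERATE:
            controls.append("sso_authentication")
        if self.tier is RiskTier.HIGH:
            controls += [
                "sso_authentication",
                "data_encryption",
                "content_validation",
                "continuous_monitoring",
            ]
        return controls


# Hypothetical example: a BI assistant handling financial records
bi_tool = AIToolPolicy("bi-assistant", RiskTier.HIGH)
print(bi_tool.required_controls())
```

Encoding the mapping from risk tier to required controls in one place makes the per-tool risk assessment auditable and keeps the baseline controls from being skipped for "low-risk" tools.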
Beyond the bots – cultivating an AI-ready culture
Integrating AI tools demands thoughtful assessment of their impact on existing cultural frameworks. Business leaders should ensure their strategies are created to address both current and future organisational needs by aligning with core mission, vision, and values.
To do this, it is essential that business leaders adopt a flexible leadership style that balances immediate challenges with future opportunities. Leaders must also redirect hiring and training from securing the skills AI is replacing, toward investing in the skills that can validate AI outputs and increase their value.
Empowering employees and fostering a culture of innovation and accountability are crucial steps in this process. Additionally, promoting a culture of continuous learning and development helps keep the organisation agile and adaptable.
Just as companies must justify their AI implementation decisions internally, they should clearly communicate these reasons to employees. Teams need a comprehensive understanding of what changes are occurring, the rationale behind them and their expected benefits to ensure successful adoption.
Final thoughts
The UK government has set out an ambitious roadmap for harnessing AI across industries, and employees are already leveraging personal AI tools in the workplace and reaping the benefits.
Ultimately, businesses should be helping to lead this takeover by implementing AI in a sustainable and strategic way.
While it’s not as simple as implementing the most cutting-edge AI solutions, business leaders can start by establishing specific business needs and leveraging the right technology.
Finally, maintaining cultural integrity and ethical standards – and striking while the iron is hot – will be crucial to harnessing the transformation AI can and will bring.