Generative AI has caused a stir in the tech world and is shaping up to be one of the most transformative technologies in modern history. It represents a leap in AI capabilities, with the potential to transform how we use data, how we work, and how we run businesses. However, the true consequences of the technology's wider adoption are still poorly understood, and that gap is creating significant risks.

Why we need to be cautious about generative AI

One of the fundamental issues with generative AI is the lack of 'common sense', or what AI researchers call 'coherent mental models of everyday things'. This lack of real-life experience and understanding of the everyday world can cause significant flaws in the 'reasoning' of AI algorithms, resulting in malfunction or misuse for dangerous ends. This issue was flagged in an open letter from the Future of Life Institute earlier in April, which gathered over 33,000 signatures from the science community. The signatories called for a six-month pause on the creation of more powerful generative AI models, so that experts and policymakers have time to examine the ramifications of deploying such models and to put proper regulatory oversight in place.

Without adequate regulation, generative AI can be misused in multiple ways: from creating misinformation that could destabilise society, to its weaponisation, to making humans too dependent on AI. The Centre for AI Safety has outlined these potential disaster scenarios in its guidance, as there is a growing realisation that generative AI will impact every industry and every job in some way.

The European Parliament has taken the first step towards regulating this space by formulating rules governing the safe and transparent use of generative AI. However, more needs to be done by all industry players to address the issue.

Responsible AI development 

To mitigate the risks, organisations need to enforce responsible AI practices and ensure they have a robust AI compliance framework in place. This includes controls for assessing the potential risk of generative AI use cases at the design stage and a means to embed responsible AI approaches throughout the business. 

For instance, emerging developments such as Constitutional AI have the potential to mitigate harmful uses of generative AI. This approach involves drawing up a "constitution" that defines proper behaviour for a chatbot. If the chatbot generates a response that violates the constitutional principles, the response is revised until it is acceptable before it is shared with the user. Start-ups like Anthropic are already pioneering this model and successfully using it to moderate AI behaviour.
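The critique-and-revise loop described above can be sketched in a few lines. This is a minimal illustration, not Anthropic's implementation: real Constitutional AI uses a second model pass to critique and rewrite a draft against natural-language principles, whereas the constitution, checks, and `revise` step here are simplified placeholders invented for the example.

```python
# Sketch of a Constitutional-AI-style moderation loop (illustrative only).
# The "constitution" is a list of (principle, check) pairs; a real system
# would use a model to judge and rewrite drafts, not keyword rules.

CONSTITUTION = [
    ("Do not reveal personal data",
     lambda text: "social security number" not in text.lower()),
    ("Do not give medication dosage advice",
     lambda text: "dosage" not in text.lower()),
]

def violations(draft: str) -> list[str]:
    """Return the principles the draft response violates."""
    return [principle for principle, check in CONSTITUTION if not check(draft)]

def revise(draft: str, broken: list[str]) -> str:
    """Placeholder revision step: a real system would ask the model to
    rewrite the draft in light of the violated principles."""
    return "I can't help with that, but I can suggest a safe alternative."

def moderate(draft: str, max_rounds: int = 3) -> str:
    """Revise the draft until it satisfies every principle, or withhold it."""
    for _ in range(max_rounds):
        broken = violations(draft)
        if not broken:
            return draft            # acceptable: share with the user
        draft = revise(draft, broken)
    return "Response withheld."     # could not be made compliant

print(moderate("The recommended dosage is 500mg."))
# prints "I can't help with that, but I can suggest a safe alternative."
```

The key property is that nothing reaches the user until it passes every principle, however many revision rounds that takes (up to a cap).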

3M's Health Information Systems business has adopted an approach that puts guardrails in place to ensure safe and effective use of generative AI. The guardrails include having a human review content before it is presented to caregivers, and always having verifiable explanations of the content generated. Such responsible practices need to be led from the top and disseminated to every part of the organisation.
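A release gate like the one described above can be expressed as a simple invariant: content is shown to a caregiver only if it both carries a verifiable explanation and has passed human review. The data class and function names below are hypothetical, invented for this sketch rather than taken from any 3M system.

```python
# Sketch of a "human in the loop" release gate for generated content.
from dataclasses import dataclass, field

@dataclass
class DraftContent:
    text: str
    sources: list[str] = field(default_factory=list)  # verifiable explanation
    human_approved: bool = False                      # set only by a reviewer

def releasable(item: DraftContent) -> bool:
    """Content reaches a caregiver only if it cites verifiable sources
    AND a human reviewer has approved it."""
    return bool(item.sources) and item.human_approved

draft = DraftContent("Suggested clinical note ...",
                     sources=["patient chart, section 3"])
print(releasable(draft))   # prints False: not yet human-approved
draft.human_approved = True
print(releasable(draft))   # prints True
```

Making the gate a single function means every delivery path must go through it, which is the point of a guardrail: it is enforced structurally rather than by convention.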

Regulatory oversight is key

However, responsible adoption of generative AI shouldn't rest with businesses alone. Governments and industry bodies should play a key role in enforcing ethical AI principles by establishing a clear regulatory framework and continuously evolving their understanding of the technology. Achieving this will require a balance between establishing strong AI policies and giving organisations enough flexibility to innovate and grow within the parameters of those policies.

Generative AI is evolving rapidly, so it's imperative that organisations and policymakers embed strong values, transparency, and integrity models into its development and governance. It's also important for customers and other stakeholders to be aware of how generative AI and other types of AI use their data to drive decision-making. Educating the wider business community about the power of AI is therefore an important step towards identifying and addressing ethical concerns. Ultimately, AI ethics should sit at the heart of everything we do with generative AI, and we should never lose sight of the risks associated with misuse of the technology.
