The release of ChatGPT was a watershed moment: generative AI went from a relatively unknown term to being a well-understood tool positioned as a game-changer for almost every industry.
Unsurprisingly, GenAI topped Gartner’s Hype Cycle for Emerging Technologies in 2023, as research into and awareness of the technology skyrocketed. GenAI solutions were suddenly being developed for every industry and business, from more intelligent customer-service chatbots to the launch of DrugGPT, a new AI tool that can help doctors prescribe medicines.
It’s fantastic to see innovative developments being launched, but, with all eyes on the potential of large language models (LLMs) and GenAI, data and technology leaders may be overlooking the potential applications of other, emerging forms of the technology.
Essentially, businesses have so far been looking at AI through a narrow tube; now that the world has done its initial exploration, it’s time to widen our gaze to the whole of AI and its potential capabilities.
Taking off the AI blinkers
Once the GenAI blinkers have been removed, businesses can review the whole landscape, specifically other useful forms of AI.
Take Causal AI, for example, which is designed to identify and understand cause-and-effect relationships across data. Causal AI operates much more like human reasoning, going beyond correlations and pattern recognition to understand the ‘why’ and address the underlying causes, prescribing actionable steps. For example, it could analyse the data and identify the root cause of supply chain disruptions, such as machine breakdowns or increased demand. So, rather than just predicting outcomes, this data can be used to optimise operations.
This is a vastly different approach from LLMs, which consume a lot of data, learn the patterns and predict the next pattern. That can make it hard to explain how a particular decision was made, especially a critical one, and using these tools could be increasingly challenging in regulated industries or those subject to strict legislation.
In contrast, Causal AI models are inherently explainable due to the way in which they are constructed. This aspect of AI has real, practical applications, allowing data leaders to get to the root of a problem and really start to untangle the correlation-versus-causation issue. It can help identify the cause of a problem in an organisation, so the team can go straight to the source and fix it, without needing three or four different attempts.
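To make the correlation-versus-causation distinction concrete, here is a minimal, hypothetical sketch in Python. It is not taken from any Causal AI product: the variables and probabilities are invented purely to illustrate how intervening on a suspected root cause (machine breakdowns) differs from merely observing correlated outcomes.

```python
import random

random.seed(0)

def simulate(n=10_000, fix_breakdown=None):
    """Toy structural causal model of a supply chain:
    machine_age -> breakdown -> disruption <- demand_spike.
    Setting fix_breakdown mimics an intervention, do(breakdown = value)."""
    disruptions = 0
    for _ in range(n):
        machine_age = random.random()  # hidden driver of breakdowns
        if fix_breakdown is None:
            breakdown = random.random() < machine_age  # older machines fail more
        else:
            breakdown = fix_breakdown                  # forced by intervention
        demand_spike = random.random() < 0.2
        disruption = breakdown or (demand_spike and random.random() < 0.5)
        disruptions += disruption
    return disruptions / n

baseline = simulate()                       # observed disruption rate
no_breakdowns = simulate(fix_breakdown=False)  # rate under do(breakdown = False)
print(baseline, no_breakdowns)  # intervening on the root cause cuts disruptions sharply
```

The point of the sketch: a pattern-learning model could tell you breakdowns and disruptions co-occur, but only the interventional comparison tells you what happens if you actually fix the machines.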
Enhancing customer experience through emotional intelligence
Another form of AI completely different to LLMs – but one that provides a lot of opportunity for organisations – is emotive AI, a subset of AI that analyses, reacts to and simulates human emotions.
Emotive AI could be used in the health and care sector to allow people who are older, unwell, or generally in need of counselling or companionship to live more fulfilling lives. This would be a fairly linear development of the technology already on offer.
For example, AI personal assistants that already exist were partly designed to reduce feelings of loneliness and isolation, as well as to provide other services. Conversational AI tech is also already available, where rather than just answering questions, the AI can become a friend or companion who talks to you – out loud – via your computer or phone’s speakers. This can all be accelerated and evolved by integrating developing emotive AI technologies.
Emotive AI could also be combined with GenAI to create chatbots that really understand not just the language people are using but signals such as how fast they are typing, and then modify their behaviour to suit the emotions of the individual. This could be useful, as the technology could more easily recognise when the person it is speaking to is upset.
For example, a chatbot for a debt management company could flag that the person contacting it is clearly distressed, and revise the usual course of action – for example, passing them on to a different team that is better equipped to help them.
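As an illustration only – the phrase list, typing-speed metric and threshold below are all invented for this sketch, not drawn from any real product – such escalation logic might look like:

```python
from dataclasses import dataclass

# Hypothetical distress phrases; a real system would use a trained classifier.
DISTRESS_PHRASES = {"desperate", "can't cope", "scared", "help me"}

@dataclass
class ChatSignal:
    text: str
    chars_per_second: float  # assumed typing-speed signal from the chat client

def route(signal: ChatSignal) -> str:
    """Escalate clearly distressed customers to a human team
    instead of keeping them in the default bot flow."""
    text = signal.text.lower()
    distressed = any(phrase in text for phrase in DISTRESS_PHRASES)
    agitated = signal.chars_per_second > 8.0  # fast, frantic typing (assumed threshold)
    if distressed or agitated:
        return "specialist_support_team"
    return "standard_bot_flow"

print(route(ChatSignal("I'm desperate, I can't pay this", 3.0)))  # -> specialist_support_team
print(route(ChatSignal("What's my balance?", 3.0)))               # -> standard_bot_flow
```

The design choice worth noting is that the AI's output is a routing decision, not a response: detecting distress hands the conversation to a human rather than attempting to manage it.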
It should be noted that, while this is a great business opportunity to improve efficiency and customer service, emotive AI shouldn’t be replacing humans. If someone is traumatised, the AI’s role needs to be getting that person to the right human, to help them deal with it, as quickly as possible. It cannot be assumed that emotive AI will ever be suited to taking on human roles when it comes to dealing with emotional issues.
Climbing the AI mountain
AI is exciting! But before businesses jump into it, it’s important to have a strategy. With that must come a recognition that there are multiple types of AI out there, some of which may be better suited to certain tasks than others.
To determine the best data and AI strategy, organisations need to know what problem they are trying to solve, where they are now, and where they want to go. It’s important that teams spend time getting their heads around what AI is and whether it is the right thing for their enterprise. The key is not to rush creating the right strategy. That means focusing on the data that is fuelling the AI and determining your purpose: what are you going to use the technology for? What are you hoping to get out of it? What opportunities are you trying to take advantage of?
I like to imagine AI as a giant mountain that we can stand on top of so we can see further. It isn’t about replacing the human decision-making process, it’s about asking: how do we just be us, but better?
I think the key to answering this question, and to understanding the true potential of AI, is exploring the various versions, iterations and models of AI that exist and how they can be utilised.