There is no disputing that 2023 was the year of artificial intelligence.
After ChatGPT and other large language models brought generative AI to the masses, a world of opportunities seemed to unlock for businesses. Some added LLM features to their platforms, while others used them to assist with their day-to-day operations – for example, generating copy and code.
Meanwhile, governments scrambled to create safeguards, with an historic international agreement signed at the AI Safety Summit at Bletchley Park recognising the significant risks posed by AI.
“In terms of governance and regulation, pretty much all countries across the world are involved with this at a governmental level, at an industrial level, and even at an academic level,” James Ramsden, data science capability lead at Salford-based Naimuri, tells BusinessCloud.
“Early attempts at tackling the dangers of AI have been fairly tentative, but I think in 2024 we will see this being much, much more clearly formulated with specific legislation – not necessarily separate regulatory bodies, but certainly much more well-founded common regulatory practices.
“And external standards and international standards around AI will likely grow.”
Legitimate businesses are not alone in utilising the fledgling technology: threat actors have begun to leverage AI to launch more sophisticated attacks. Should we beware AI’s growing influence on the cyber threat landscape?
“Threat actors have started to utilise this technology to launch hyper-personalised social engineering attacks,” warns Paul Holland, CEO at Beyond Encryption. “With the help of AI, cyber criminals are able to harvest and analyse lots of data from various sources.
“This information can then be used to identify targets and analyse their communication patterns. Scammers can also leverage enhanced natural language processing programmes to mimic the writing style of an organisation or individual – which makes their attacks even more sophisticated.”
He says education is key to countering this threat: “Businesses must start educating their customers and employees while simultaneously investing in advanced AI-based cybersecurity tools. Leaders must consider how technological advancements, such as AI, will continue to impact the cyber threat landscape and respond accordingly.
“All strong cybersecurity strategies for 2024 will allow businesses to stay on top of not only current threats but also developing ones. Any businesses that fail to do this will fall victim to the evolving cyber threat landscape and put both their team and customers at risk.”
Frank Krieger, CISO at Mambu, agrees that in 2024 we will see a rise in AI-powered cyber attacks, but equally an increase in ways AI can help to prevent them.
“We’ve already seen neobanks like Monzo start implementing AI technology to help their customers recognise in real-time if someone from Monzo is really calling them,” he says. “This in-app feature neutralises the risk of deep fake phone calls we’ve seen consumers previously fall victim to when being asked to give up sensitive information.
“In 2024 we can expect more tools utilising AI entering the market, and an increasing number of current players incorporating AI into their products, making them more secure and ultimately safer for the end user.”
Impact on legal sector
Just this month, it was reported that judges will be allowed to use ChatGPT to help write legal rulings – despite warnings that AI can invent cases that never happened.
The Judicial Office has issued official guidance to thousands of judges in England and Wales saying AI can be useful for summarising large amounts of text or in administrative tasks. However, it said that chatbots are a “poor way of conducting research” and are prone to making up fictitious cases or legal texts.
Ed Boal, head of legal at ShieldPay, told us that human intervention is extremely important in getting the most out of AI. “The legal sector’s role is to inform and challenge, as well as think about ethics and standards which together will help create a predictable, and therefore effective, legal framework,” he says.
“While AI can automate legal tasks such as processing simple agreements, maintaining the human touch is crucial as it adds value and is vital to understanding clients’ needs. AI can be used to take care of some of the more monotonous tasks, and surface knowledge in areas which aren’t especially context-specific, but society isn’t ready to completely abandon the human element – you only have to look at the pushback against high street banks closing branches when digital banking has been around for more than 20 years.”
Ramsden agrees. “I heard a legal professional say: ‘AI is not going to replace jobs in the legal profession; AI-empowered jobs will replace AI-agnostic jobs.’ That’s where the transition is going to be.”
Impact on PR and marketing
The PR and marketing industries have also begun to integrate AI into their operations. Jason Weekes, commercial director at CARMA, discussed how this technology is being used and the risks associated with it.
“AI will continue to develop at pace. From an analysis perspective, AI is increasingly valuable for PRs as a tool to crunch data but it is not yet able to understand content context or offer insights. That’s where the risk lies,” he says.
“With ChatGPT still making things up, media will have to pivot their content strategies with care and consideration to take advantage of the innovative technology whilst mitigating serious reputational consequences.”
Ramsden sounded a gloomy note here. “Human-authored content is being diluted rapidly by machine-authored content and AI-authored content. The really interesting problem is that it will get worse,” he warns.
“This is really important because of human biases around data: the way we tend to believe the first version we hear over subsequent versions and believe things that chime well with our viewpoint rather than credit things because they are well provided, cited, or argued.
“These powerful AI models will only become more powerful, and data on the internet will be dominated by machine-generated output.”
Impact on automotive industry
Ensuring that AI is used both safely and effectively has become more of a priority as the technology has advanced so quickly. One example of an AI advancement is ‘Embodied AI’, which Daniel Langkilde, CEO and founder of Kognic, predicts will make waves in 2024 – especially in the automotive space.
“We’ve seen the explosion of generative AI and LLMs this year; whilst at times showing hugely impressive capabilities, the technology is by no means faultless. When producing marketing copy, a mistake may be something you can overlook – for a legal document, less so – but in the context of autonomous vehicles such mistakes are unacceptable in any circumstance,” he says.
“As AI moves into more applications and more complex scenarios, the importance of AI alignment – ensuring AI systems achieve intended and desired outcomes – will be paramount over the next 12 months. As AI moves beyond the purely virtual setting of GenAI and enters our physical world, it will be a real game-changer. We can expect AI in 2024 to bridge the gap between mind and matter – get ready for ‘Embodied AI’.
“Automated and autonomous driving is one of the first, major applications of Embodied AI. But that is just the beginning. Robots are still mostly confined to fulfilment centres and manufacturing plants but we are on the brink of seeing them entering human space.
“With Embodied AI set to become more common in 2024, AI alignment will become even more integral to ensure the safe and ethical development of AI systems.”
Impact on financial services
It cannot be disputed that AI has changed the way businesses operate forever, and financial services is no stranger to this. The sector has expanded far beyond banks, and with emerging technologies like AI making waves in the industry, 2024 is set to be an exciting year for financial services.
“2024 is set to be a big bang for AI and machine learning. No industry will be left untouched. It will be the year of truly testing the extent of AI’s sophistication in the financial services rulebook,” said Ivo Gueorguiev, co-founder at Paynetics.
“Expect to see an increase in exploring different AI and machine learning models – whether that’s levelling up chatbots, speeding up data analytics or AI-powered product recommendations. Not only that, but businesses will continue to invest in streamlining processes with AI. That said, while AI is set to be a huge focus for 2024, the effects of tech developments will be felt later in the decade.”
Stéphie Ndinga, chief compliance officer at Swan, says FinTechs will gain a heightened ability to monitor all facets of their business and assess risk simultaneously.
“Savvy FinTechs will harness AI in this way to integrate compliance into all workflows, which will require the development of departments devoted solely to the implementation of compliance across the product,” she adds. “Too often, organisations separate compliance from their end products, rather than viewing them as a single intertwined entity.
“Alongside this, regulators will also increase their use of AI in the coming year, tapping it to conduct more efficient analyses of companies’ reputations and risk.”