The application of artificial intelligence is providing businesses with opportunities to improve sales and customer interaction, especially through voice analysis.
However, this new technology also raises questions around customer protection and bias. Artificial intelligence is still a fast-evolving technology, making it vital that businesses understand how AI systems work, the risk of bias within them, and how to adhere to regulatory guidance.
Nevertheless, AI voice profiling and analysis holds significant promise, offering businesses benefits that range from richer customer profiles to more personalised sales and service.
What is AI voice profiling, and how does it work?
AI voice profiling combines processes such as conversational AI, natural language processing and automatic speech recognition to detect speech characteristics such as tone of voice, mood and attitude.
These systems can analyse voice-based data, feed machine learning models and break down customer interactions to help businesses better understand customer needs and personalise how they are met. Through this technology, customers contacting call centres can be connected to the call handlers best able to address their individual needs and provide the most suitable style of interaction.
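As a rough illustration of that routing step, the Python sketch below matches a caller's detected tone to the best-suited handler. The keyword-based classify_tone rule stands in for a real tone-analysis model, the transcript is assumed to come from an upstream speech-recognition step, and names such as Handler and route_call are illustrative rather than part of any particular product.

```python
from dataclasses import dataclass

# Trivial keyword rule standing in for a real tone/sentiment model.
NEGATIVE_WORDS = {"angry", "cancel", "refund", "complaint", "frustrated"}

def classify_tone(transcript: str) -> str:
    words = set(transcript.lower().split())
    return "frustrated" if words & NEGATIVE_WORDS else "neutral"

@dataclass
class Handler:
    name: str
    strengths: set  # tones this agent is best suited to handle

def route_call(transcript: str, handlers: list) -> Handler:
    """Pick the handler best suited to the caller's detected tone."""
    tone = classify_tone(transcript)
    for handler in handlers:
        if tone in handler.strengths:
            return handler
    return handlers[0]  # no specialist available: fall back to the default handler

team = [
    Handler("default queue", {"neutral"}),
    Handler("retention specialist", {"frustrated"}),
]
print(route_call("I am frustrated and want to cancel my contract", team).name)
```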
Each customer's unique characteristics can be recorded in a comprehensive profile to personalise future interactions. This helps companies to create more genuine connections with their customers, encouraging successful sales and positive customer experiences. The application of AI also allows data to be used to its full potential, regardless of its format, through machine learning (constantly evolving algorithms that learn from historical data to improve their performance).
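To make the idea of such a profile concrete, the sketch below defines a hypothetical CustomerProfile that accumulates observed tones and topics across calls. The field names and the simple "most frequent tone" rule are assumptions made for illustration, not a description of any vendor's system.

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class CustomerProfile:
    customer_id: str
    observed_tones: Counter = field(default_factory=Counter)  # tone -> times observed
    topics_of_interest: set = field(default_factory=set)

    def record_interaction(self, tone: str, topics: list) -> None:
        """Fold one analysed call into the running profile."""
        self.observed_tones[tone] += 1
        self.topics_of_interest.update(topics)

    def dominant_tone(self) -> str:
        """Most frequently observed tone, used to shape the next interaction."""
        return self.observed_tones.most_common(1)[0][0] if self.observed_tones else "unknown"

profile = CustomerProfile(customer_id="C-1042")
profile.record_interaction(tone="frustrated", topics=["billing"])
profile.record_interaction(tone="frustrated", topics=["contract renewal"])
profile.record_interaction(tone="neutral", topics=["roaming"])
print(profile.dominant_tone())  # -> "frustrated"
```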
In sales, the use of detailed customer profiles can bring significant benefits, allowing for targeted advertising on a much more specific level. Customers can receive advice and recommendations tailored specifically towards them, based on data that has been collected through voice recognition AI.
What are the risks associated with AI profiling?
Voice profiling through AI is still a relatively new process, meaning that it has yet to be perfected. Customer information must also be kept accurate: for example, a system needs to establish whether there are multiple users of one device, so that a single profile is not compiled from the data of more than one individual.
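One common safeguard, assumed here rather than drawn from any specific product, is to compare a voice embedding from each new call against the embedding enrolled for the profile, and only attribute the call to that customer when the similarity clears a threshold. The sketch below shows just the comparison step; producing the embeddings is left to whichever speaker-recognition model is in use, and the 0.75 threshold is purely illustrative.

```python
import math

def cosine_similarity(a: list, b: list) -> float:
    """Cosine similarity between two voice embeddings (lists of floats)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def same_speaker(enrolled_embedding: list, call_embedding: list,
                 threshold: float = 0.75) -> bool:
    """Attribute the call to the profile only if it sounds like the enrolled speaker.

    The 0.75 threshold is illustrative; a real system would tune it on labelled data.
    """
    return cosine_similarity(enrolled_embedding, call_embedding) >= threshold

# Example: a shared household device - update the profile only for matching voices.
enrolled = [0.12, 0.88, 0.45]   # embedding captured when the profile was created
new_call = [0.10, 0.90, 0.40]   # embedding extracted from today's call
if same_speaker(enrolled, new_call):
    print("Update this customer's profile")
else:
    print("Different speaker detected: do not merge into the existing profile")
```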
Companies must also be aware of the risks associated with attempts to link physiological traits to voice analysis – racial profiling has been detected as an issue in recent attempts to connect voice recognition software to physical features such as weight, height, race, gender or health status.
The software used for voice recognition has also demonstrated significant race and gender biases, performing noticeably less accurately for individuals who are not white, and for women. These biases are never intentionally coded into voice recognition software; they reflect the underlying data used to train the models and assess their efficacy. Training data that under-represents the nuances of diverse vocal features can skew the resulting models, leading to incorrect profiling.
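One standard way to surface such bias, offered here as general practice rather than something described above, is to measure recognition accuracy separately for each demographic group in an evaluation set and compare the results. The sketch below computes word error rate (WER) per group from reference and machine-produced transcripts; the group labels and sample data are placeholders.

```python
from collections import defaultdict

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance over words.
    dist = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dist[i][0] = i
    for j in range(len(hyp) + 1):
        dist[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,         # deletion
                             dist[i][j - 1] + 1,         # insertion
                             dist[i - 1][j - 1] + cost)  # substitution
    return dist[len(ref)][len(hyp)] / max(len(ref), 1)

def wer_by_group(samples: list) -> dict:
    """samples: (group_label, reference_transcript, asr_output) tuples."""
    totals = defaultdict(list)
    for group, reference, hypothesis in samples:
        totals[group].append(word_error_rate(reference, hypothesis))
    return {group: sum(rates) / len(rates) for group, rates in totals.items()}

# Placeholder evaluation data: a real audit would use a large, representative set.
evaluation = [
    ("group_a", "i would like to change my plan", "i would like to change my plan"),
    ("group_b", "i would like to change my plan", "i would like a strange plan"),
]
print(wer_by_group(evaluation))  # a noticeably higher WER for one group signals bias
```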
How will the rise in voice recording affect businesses and customers?
Increased use of voice recording and analysis software – particularly software trained using AI – will help businesses adapt to customer needs and personalise the customer experience. However, they will need to be aware of the regulations and legal issues surrounding consent to record both customers and employees, ensuring, for example, that employees explicitly consent to the use of recording software (including the provision of training and signed consent forms) and that proper notices are provided to customers.
The GDPR in Europe (and increasing regulation around the world) places strict limits on what data can be held about customers, how it can be used and, in a growing challenge for US businesses operating in Europe, where it can be stored and processed.
This should not be seen as a brake on innovation, but rather as a clear message to the industry that the ‘black box’ needs to be less opaque than it is now, so that customers know they are being treated fairly whether they interact with a person or a machine.