AudioTelligence has raised £6.5 million in Series A funding.
The Cambridge company seeks to improve the accuracy of modern speech recognition systems used in smart speakers and other voice-activated technologies.
Its tech acts like ‘autofocus for sound’, using data-driven ‘blind audio signal separation’ to home in on the source of interest and separate it from interfering noise.
This enables microphones to focus on what users are saying, improving the audio quality for listeners, regardless of background noise.
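AudioTelligence’s exact method is proprietary and not described in the announcement, but the general idea of blind source separation can be illustrated with a classic technique such as independent component analysis (ICA). The sketch below is purely illustrative: the simulated signals, mixing matrix and use of scikit-learn’s FastICA are assumptions for demonstration, not the company’s implementation.

```python
# Illustrative sketch only: AudioTelligence's method is proprietary and not public.
# This shows the general principle of blind source separation, where a multi-
# microphone mixture is unmixed into its underlying sources without prior
# calibration, here using independent component analysis (ICA).
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 1, 16000)                        # 1 second at 16 kHz
speech_like = np.sin(2 * np.pi * 220 * t)           # stand-in for a voice
noise_like = np.sign(np.sin(2 * np.pi * 50 * t))    # stand-in for background noise

sources = np.c_[speech_like, noise_like]            # shape (samples, sources)
mixing = np.array([[1.0, 0.5],
                   [0.4, 1.0]])                     # each "microphone" hears both sources
mixtures = sources @ mixing.T                       # shape (samples, microphones)

ica = FastICA(n_components=2, random_state=0)
estimated_sources = ica.fit_transform(mixtures)     # unmixed estimates of the two sources
```

In practice, the separated “source of interest” channel would then be passed to a speech recogniser or a listener, while the interfering channel is discarded.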
The round was led by Octopus Ventures, with participation from existing investors Cambridge Innovation Capital, Cambridge Enterprise and CEDAR Audio.
The investment will support AudioTelligence’s plans to triple employee headcount over the next three years.
CEO and founder Ken Roberts said: “Voice command systems work reasonably well when the audio scene is quiet, but performance deteriorates rapidly once you have multiple people talking or when there’s background music.
“The number of applications where our technology is needed is enormous and still growing every day.
“We’ve already seen some great results from real-world testing, and this investment will fund further product development to ensure we can all communicate clearly with the next generation of smart consumer devices and each other.
“Our solution doesn’t need calibrating or training, and the code is production ready – which means existing devices can be easily upgraded to AudioTelligence with no more than a software update.”
The latest investment round follows £3.1m of seed funding in 2018 from Cambridge Innovation Capital and Cambridge Enterprise. The company was founded in 2017 as a spin-out from CEDAR Audio, which itself grew out of the University of Cambridge.
Zoe Chambers, Early Stage Investor at Octopus Ventures, commented: “In today’s hyper-connected world, voice-activated technologies are becoming increasingly prevalent, a trend we expect to continue. That’s why AudioTelligence’s technology is so exciting: it drastically improves the accuracy and user experience of human-machine interactions.
“We believe it has the potential to shape the future of sub-sectors such as smart assistants, VoIP and even the mobile phone itself, in a world where the device is no longer held to our ear, but at arm’s length from our faces.
“We are thrilled to be adding AudioTelligence to our growing Deep Tech portfolio and look forward to supporting the team on their growth journey as they continue to provide innovative solutions for their customers.”