Haiper, a London-based generative AI video platform, has raised a £10.8m ($13.8m) seed round led by Octopus Ventures.
This brings Haiper’s total funding to $19.2m (£15.1m) ahead of a planned Series A round in 2024.
Haiper was founded in late 2021 by CEO Dr Yishu Miao and CTO Dr Ziyu Wang, both of whom hold PhDs in machine learning from the University of Oxford and are former researchers at DeepMind.
Dr Wang was a key contributor to DeepMind programs AlphaStar and AlphaGo and also served as a staff research scientist at Google Brain, while Dr Miao previously founded and led the London ML team building large language models (LLMs) for global trust and safety at TikTok.
The company’s co-founders have experience working alongside AI pioneers, including Geoffrey Hinton, formerly of Google, and Phil Blunsom, currently chief scientist at Cohere.
Dr Yishu Miao said: “Our end goal is to build an AGI (artificial general intelligence) with full perceptual abilities, which has boundless potential to assist with creativity.
“Our visual foundation model will be a leap forward in AI’s ability to deeply understand the physics of the world and replicate the essence of reality in the videos it generates.
“Such advancements lay the groundwork for AI that can understand, embrace, and enhance human storytelling.
“As the barrier between ideation and implementation lowers, Haiper’s powerful foundation model will give many in the industry the opportunity to develop stunning content and visualise their ideas in ways that were previously impossible.
“Our time in stealth was spent building up crucial distributed data processing and model training infrastructure, which we’re excited to use this funding to scale.”
Rebecca Hunt, partner at Octopus Ventures, said: “At Octopus, we look to back exceptionally talented founders who have a unique insight into a market and a strong technical edge.
“Only resilient, experienced teams can build cutting-edge products and solutions, and with Haiper this is no different.
“The deeply technical foundations of Haiper have enabled them to innovate and make breakthroughs at a pace we haven’t yet seen in the AI video space; they are sure to become one of Europe’s big AI players. We look forward to continuing to back Yishu and Ziyu on their world-changing mission.”
Haiper has partnered with several top academic labs, including groups at the University of Oxford, the University of Cambridge, the University of British Columbia, Simon Fraser University and the Fashion Innovation Agency at London College of Fashion.
An all-in-one visual foundation model for publishers, studios and individuals, Haiper enables anyone, including those without technical training or experience, to easily generate high-quality video content.
Recent developments in visual generative AI, from models like OpenAI’s Sora, have indicated the technology’s transformative potential.
For video-generative AI to reach the next stage, companies will need to scale models and the data behind them to better understand the physics of the world.
Each frame of a video carries an array of minute visual information, including light, motion, texture, and interactions between objects. From a splash of water to a linen shirt moving in the wind, generative AI that can intuitively understand and replicate the emotional and physical elements of reality will be able to create content that is both visually stunning and true to life.
Haiper’s specialised team is training its perceptual foundation model with this particular aim in mind, with distributed data processing and model training infrastructure designed to be scaled up. With this, Haiper’s advancements in video content represent a step towards a form of artificial general intelligence (AGI), AI capable of internalising and reflecting human-like comprehension of the world.