The UK AI Safety Institute’s evaluations platform Inspect has been made available to the global AI community.
The UK government said the move paves the way for safe innovation of AI models.
After establishing the world’s first state-backed AI Safety Institute, the UK is continuing the drive towards greater global collaboration on AI safety evaluations.
By making Inspect available to the global community, the Institute is helping to accelerate AI safety evaluation work being carried out across the globe, leading to better safety testing and the development of more secure models, and allowing for a consistent approach to AI safety evaluations around the world.
Inspect is a software library which enables testers – from start-ups, academia, and AI developers to international governments – to assess specific capabilities of individual models and then produce a score based on their results.
Inspect can be used to evaluate models in a range of areas, including their core knowledge, ability to reason, and autonomous capabilities. Released under an open-source licence, Inspect is now freely available for the AI community to use.
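To make the workflow concrete, the sketch below shows the kind of evaluation loop a library like Inspect supports: run a model over a dataset of prompts with known targets and produce a score. This is a purely illustrative example, not Inspect's actual API – the names (`Sample`, `run_eval`, `toy_model`) are hypothetical, and a real harness would query an actual language model.

```python
# Illustrative sketch only: a minimal capability-evaluation loop of the kind
# a library like Inspect supports. All names here are hypothetical stand-ins,
# not Inspect's real API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Sample:
    prompt: str   # question put to the model
    target: str   # expected answer

def run_eval(model: Callable[[str], str], dataset: list[Sample]) -> float:
    """Query the model on each sample; score the fraction answered correctly."""
    correct = sum(1 for s in dataset if s.target in model(s.prompt))
    return correct / len(dataset)

# A stand-in "model" for demonstration; a real harness would call an LLM.
def toy_model(prompt: str) -> str:
    return "The answer is 4." if "2 + 2" in prompt else "I don't know."

dataset = [
    Sample(prompt="What is 2 + 2?", target="4"),
    Sample(prompt="What is the capital of France?", target="Paris"),
]

print(run_eval(toy_model, dataset))  # prints 0.5: one of two samples correct
```

In practice an evaluation suite bundles many such datasets and scorers, so different organisations can run the same tests against different models and compare scores on a common footing.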
Its release marks the first time that an AI safety testing platform spearheaded by a state-backed body has been made available for wider use.
Developed by some of the UK's leading AI minds, Inspect arrives at a crucial time in AI development, as more powerful models are expected to hit the market over the course of 2024, making the push for safe and responsible AI development more pressing than ever.
“As part of the constant drumbeat of UK leadership on AI safety, I have cleared the AI Safety Institute’s testing platform – called Inspect – to be open-sourced,” said Secretary of State for Science, Innovation, and Technology, Michelle Donelan.
“This puts UK ingenuity at the heart of the global effort to make AI safe, and cements our position as the world leader in this space.”
AI Safety Institute Chair Ian Hogarth (pictured) said: “I am proud that we are open-sourcing our Inspect platform.
“Successful collaboration on AI safety testing means having a shared, accessible approach to evaluations, and we hope Inspect can be a building block for AI Safety Institutes, research organisations, and academia.
“We have been inspired by some of the leading open source AI developers – most notably projects like GPT-NeoX, OLMo or Pythia which all have publicly available training data and OSI-licensed training and evaluation code, model weights, and partially trained checkpoints. This is our effort to contribute back.
“We hope to see the global AI community using Inspect to not only carry out their own model safety tests, but to help adapt and build upon the open source platform so we can produce high-quality evaluations across the board.”