Long gone are the days of paper CVs spread out across a hiring manager’s desk. 

Now, not only are applications digital – at many stages, the HR team is too. AI is increasingly integrated into recruitment, and the world’s biggest brands – such as McDonald’s, Unilever and JP Morgan – use AI in their applicant selection.

The appeal is obvious. The ability to screen and sort CVs in seconds allows companies to broaden the scope of their hiring search and access talent they might have missed. 

Added to this, using a machine instead of a human should theoretically eradicate the bias that creeps in the minute humans start making decisions. AI is the ethical, efficient hiring tool we’ve all been waiting for.

Except it isn’t always.

Whilst the AI itself might not be biased, the data it trains on and learns from often is. Across other AI applications, we’ve seen bias make tech ugly: facial recognition technologies put black people at a higher risk of misidentification, for example. It’s a mistake to think this algorithmic bias disappears in the hiring process. If a system isn’t specifically built using de-biased data sets, it can simply replicate the existing bias and discrimination found in the world around us.

As the world wakes up to the ways in which discrimination is experienced and perpetuated, many companies are looking to de-bias the way they hire. But AI tools are only part of the solution – and only if we take proactive steps to stop them becoming part of the problem.

Here’s how businesses can harness AI ethically and effectively in order to truly de-bias recruitment.

The problems with AI in hiring

AI isn’t inherently free from bias: an AI application is only as good as the data it’s trained on. And its raison d’être is that it learns, so if the data is biased, it will learn to become biased – just like a human.

For example, in 2015 Amazon, an early mover in AI in recruitment, realised its tech for rating candidates for software developer roles and technical positions had developed a gender bias. Its AI had been trained on patterns in CVs submitted to the company over a 10-year period and most came from men. 

This meant the AI learned to prefer male candidates, to the extent that it downgraded CVs that included the word ‘women’s’ and penalised candidates who had graduated from two all-women’s colleges. Amazon edited the software to correct these specific instances, but as there was no telling whether the AI had found – or would find – new forms of bias, the tool was eventually scrapped.

The only way to avoid this is to build AI that’s fit for purpose. At Applied, we’ve built our technology on the world’s first data set which can create ethical AI. All ethical tech solutions must begin with this commitment to de-biased data. 

AI also cannot undo unequal social dynamics. Many companies assume using AI in the hiring process will somehow wipe out centuries of unequal power dynamics. This isn’t the case. The experience a candidate has, the way they write about their skills and how they approach a job is inextricably linked to aspects of their identity or lived experience such as gender, ethnicity, age or economic background. Simply removing a name from a CV won’t change this.

For example, candidates from certain backgrounds have been seen to report the skills on their CVs at a lower level of proficiency than other, equally capable candidates. And treating graduation from, or experience at, certain institutions as a marker of talent means inheriting the bias of their admissions processes and the discrimination embedded within them.

Even if AI isn’t actively producing bias, it won’t necessarily remove it – it may just make it invisible.

And software is often shrouded in secrecy: many AI recruitment companies aren’t transparent about their methodology, which means their processes – and the biases they may contain – are difficult or impossible to scrutinise.

HR teams are often unclear as to what, specifically, they’re signing up for and continue to operate on the false assumption that anything tech-based is socially neutral.

The lack of structured, external third-party checking means that if a company doesn’t properly bias-test its tools itself, discriminatory AI can be released into the marketplace and start doing real damage.

So how can you hire better?

Ethical recruitment isn’t a mission we can afford to abandon. And tech does have a role to play in making hiring fairer, although not in the simplistic way we’ve recently seen. 

First of all, think beyond CVs: it’s time to let go of traditional hiring models that favour candidates with certain privileges.

CVs rely on flawed signifiers of talent – such as education, or years of experience – that are steeped in bias and are poor indicators of a candidate’s true ability to do a job. Businesses that want to make their hiring ethical need to assess candidates’ aptitude in ways that don’t rely on discriminatory ‘gut instincts’.

One method is to use skills-based questions as part of short-listing. This means the human – or AI – brain isn’t fooled by traditional, mistaken ideas of what ‘good’ looks like. Businesses should also apply a skills-based methodology from the very start of the hiring process, right from when they write the job description.

The language used is crucial: inclusive wording helps ensure the best candidates aren’t silently filtered out before their application even makes it through the door.

Next, upend order. While order helps us make sense of the world around us, it also introduces bias. Making recruitment more ethical means using technology to scramble it.

For example, software can be used to break up an application into pieces to help prevent confirmation bias. If a recruiter has both a candidate’s CV and assessment questions in front of them, they will subconsciously be making links between the two that reaffirm the beliefs they bring to the table.

For instance, if a recruiter knows a candidate went to a prestigious university – socially held as a marker of intelligence – they may assume the candidate has written a higher-quality answer. Anonymising and breaking up an application disrupts this kind of confirmation bias so recruiters can assess answers more neutrally.
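The chunk-and-anonymise idea above can be sketched in a few lines of Python. The application structure and field names here are purely illustrative – this is a sketch of the technique, not any vendor’s actual format:

```python
# Hypothetical application record – field names are illustrative only.
application = {
    "name": "Jane Doe",
    "university": "Example University",
    "answers": [
        "Answer to skills question 1...",
        "Answer to skills question 2...",
        "Answer to skills question 3...",
    ],
}

def chunk_and_anonymise(app):
    """Strip identifying fields and return each answer as an independent,
    anonymous item, so reviewers score it in isolation rather than
    alongside the candidate's name, CV or other answers."""
    return [
        {"answer_id": i, "text": text}  # no name, no university
        for i, text in enumerate(app["answers"])
    ]

chunks = chunk_and_anonymise(application)
```

Each chunk can then be routed to reviewers separately, so no single reviewer sees the whole application at once.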

Disrupting order in another way is also important. ‘Ordering effects’ come into play in hiring: when we make several judgements one after the other, we tend to make kinder judgements at the beginning, which progressively become harsher. In recruitment, this means candidates who are reviewed last have a lower chance of being successful simply due to the luck of the draw.

However, tech tools can help correct this by presenting each member of a hiring team with candidate answers in a different order. This takes ordering effects out of the equation so candidates aren’t penalised for factors outside their control.
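A minimal sketch of per-reviewer ordering, assuming each reviewer has a stable identifier that can seed the shuffle (the function and identifiers are illustrative):

```python
import random

def review_order(candidate_ids, reviewer_id):
    """Return the candidates in a reviewer-specific random order, so no
    candidate is always reviewed last. Seeding with the reviewer's ID
    makes each reviewer's order deterministic, and therefore auditable."""
    rng = random.Random(reviewer_id)
    shuffled = list(candidate_ids)  # copy – don't mutate the input
    rng.shuffle(shuffled)
    return shuffled

candidates = ["c1", "c2", "c3", "c4"]
alice_order = review_order(candidates, "reviewer-alice")
bob_order = review_order(candidates, "reviewer-bob")
```

Because every reviewer sees a different ordering, any harshness that creeps in towards the end of a review session is spread across different candidates rather than landing on the same unlucky few.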

Finally, you must recognise that bias training alone isn’t enough. It can be useful if handled properly and not treated as a one-off afternoon workshop to be checked off as ‘done’. However, the positive impacts of training to counter implicit bias have been shown to be temporary, typically fading away within eight weeks – and training to counter explicit bias can actually make things worse.

Instead of leaning on this spurious avenue, companies must actively de-bias the tools hiring managers use to make their decisions. Only by removing the scope for their own bias to creep in can we level the playing field. 

Look at the tools you’re using in your hiring processes: has the data they learn from been actively de-biased? Have the models been assessed for fairness? These foundational building blocks have to be watertight when it comes to de-biasing hiring.

If there are cracks, they’ll soon be filled with streams of conscious and unconscious bias – quickly undoing all your good intentions and baking in new forms of discrimination instead of removing them.