Boardrooms across every industry are putting AI at the top of their agenda, exploring how they can capitalise, invest and unleash a new wave of productivity. Despite the buzz, risk mitigation remains a prevailing concern, with open questions around governance and regulation. This is compounded by experts and influencers warning of the threat AI poses to human civilisation without the right guardrails.
Yet no organisation can afford to ignore AI; alongside discussions around privacy, security risks, ethics and job displacement, the risk of bias must be explored before organisations even think about implementation. Of course, humans have biases and make mistakes, but the consequences are limited to the volume of work they do before the error is realised. AI could handle millions of transactions daily, so an error in the source data is replicated a million times over. This could pose a serious operational threat and an even greater reputational one.
Data quality is key
AI is a product of the data used to train it, and there can be challenges with rooting out bias for organisations working with unstructured, siloed data from a myriad of sources. Getting your house in order first, by leveraging the power of analytics technologies to unite, contextualise and derive insights from data, provides better quality fuel for AI tools.
But it’s not a case of simply using one type of technology to maximise the impact of another. The role of the human in the loop is critical: creating unbiased algorithms, identifying where source data may be inherently biased or inaccurate, and then ratifying AI-led decision-making and outcomes.
When bias creeps in
Research carried out in the US earlier this year examined the civil rights implications of algorithms. It noted that in New York City, police stopped and frisked more than five million people over the last decade, and during that time black and Latino people were nine times more likely to be stopped and searched than white people. The report highlighted that predictive policing algorithms trained on data from that jurisdiction would then overpredict criminality in predominantly black and Latino neighbourhoods.
At an organisational level, biased AI could have similarly damaging consequences and outcomes. For example, it could deprioritise the CVs of those from a non-white ethnic background, reinforce unfair practices in fraud detection or even perpetuate discriminatory stereotypes in healthcare.
Diversity is the enemy of bias
The more perspectives and lived experiences that are reflected within the AI and data science workforce, the better-equipped organisations will be to actively spot even the most implicit biases that could result in unfair, reputationally damaging outcomes.
Yet the World Economic Forum has found that 78% of the global AI workforce is male – and gender diversity is only one part of a multifaceted picture.
It is a delicate balancing act for businesses facing skills shortages. Why shouldn’t they hire applicants who bring the right experience and capabilities to the table? But if that talent pool is homogenous and lacking in diversity, there will be repercussions – particularly as AI becomes an even more dominant force in the future.
Inclusive by design
There are clear ethical obligations, but also economic ones to be considered. Research has found that companies in the top quartile for gender diversity have been 21% more likely to experience above-average profitability, and ethnic and cultural diversity is linked to a 33% increase in performance.
The industry and the education system need to tackle gender diversity as a priority. Only 9% of female graduates studied a STEM subject at university in 2018, and while there have been encouraging increases since, numbers in some areas, such as mathematical science, have decreased.
Job advertisements must be more inclusive and avoid typically male language: describing an organisation’s ‘dominance’ in the workplace, for example, is likely to resonate with male candidates, whereas ‘excellence’ is a more neutral term.
Once workforces become more diverse, progression must be supported and facilitated. A report by McKinsey in 2022 found that only 52 women for every 100 men get promoted to manager in tech, and that perceptions of progression opportunities are significantly limited for women of colour. This is a systemic issue which organisations have a moral obligation to address with urgency.
Human oversight and inclusion in the development and deployment of AI technologies is not a nice-to-have – it is essential. If that human oversight comes from a workforce where diversity is not embraced and addressed, then the industry could be falling at the first hurdle. While the ramifications may not be immediately felt, they could put the brakes on the positive potential of AI in the long term. Bias must be identified and stamped out by a workforce that is truly representative of the world around us.