How an AI Recruiting Partner Can Avoid Unconscious Bias
Aug 23, 2019
AI-driven automation is transforming recruitment, and the first results haven’t always been positive. The idea that automation can help combat bias in recruitment might surprise you, considering the high-profile 2018 case of Amazon: the company tested a recruitment algorithm and found that it quickly taught itself that male candidates were preferable to female ones.
However, the case of Amazon isn’t evidence that AI will entrench biased hiring practices. All it shows is that AIs can only work with the datasets they have, and that if you give them flawed datasets, you’ll get flawed results. Amazon’s algorithm based its actions on the company’s ten-year hiring patterns, which had been heavily male-centric. Of course the AI concluded that the right approach was a sexist one!
In fact, prepared in the right way, and fed the correct data, an AI recruiting partner can actively combat bias. Why? Because bias is, sadly, a very human trait.
As we discussed in a previous post, humans are vulnerable to a whole range of “cognitive biases.” Some of these in-built errors in human thinking make unbiased hiring a major challenge. Confirmation bias makes us disproportionately value information that confirms our pre-existing beliefs. The similarity attraction effect makes us approve of people who are similar to ourselves. These mental tendencies are one reason why you often see recruiters reaching out to networks or alumni that are familiar to them, and why referrals remain a major source of hires across the industry.
Our cognitive flaws combine to create what psychologists call unconscious bias. As Forbes describes it: “We gather millions of bits of information and our brain processes that information in a certain way — unconsciously categorizing and formatting it into familiar patterns. Though most of us have difficulty accepting or acknowledging it, we all do it. Gender, ethnicity, disability, sexuality, body size, profession etc., all influence the assessments that we make of people and form the basis of our relationship with others, and the world at large.”
Evidence of this sort of unconscious bias is rife. In one Canadian study, researchers found that applicants with Chinese, Indian or Pakistani names were 40% less likely to get an interview call-back than applicants with European names. These resume studies have been repeated elsewhere, and it is highly unlikely that the people screening the applications were all consciously prejudiced. Yet the irrational, unconscious nature of human thinking made them act in prejudiced ways all the same.
In short, the nature of human cognition makes it very difficult for us to be truly objective in a hiring scenario, which means that marginalized groups often don’t receive fair treatment from human recruiters.
But an AI? An AI is far less burdened with cognitive biases. Programmed in the right way, an AI can guide the recruiting process in a consistent, unbiased way.
At the very beginning, an AI recruiting assistant can scan for gendered language and make sure all job ads are gender-neutral. Then, an AI can assess applicants using a system of data points that is free from any subjective judgement: it is taught which data points are relevant and how they should be weighted, then takes the raw details of what to look for (qualifications, experience, etc.) and carries out the search in an emotionless, data-processing fashion, free of assumptions, personal judgements, or self-interest.
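To make that first step concrete, here is a minimal sketch of what a gendered-language scan might look like. The term list and suggested replacements are illustrative assumptions for this example, not a real product’s lexicon; production tools draw on much larger, research-backed word lists.

```python
# A minimal sketch of a gendered-language scan for job ads.
# The term list below is a small illustrative sample (an assumption
# for this example), not a complete research-backed lexicon.

GENDERED_TERMS = {
    "ninja": "expert",
    "rockstar": "high performer",
    "aggressive": "proactive",
    "dominant": "leading",
    "guys": "team members",
}

def flag_gendered_language(ad_text: str) -> list[tuple[str, str]]:
    """Return (term, suggested_replacement) pairs found in the ad."""
    words = ad_text.lower().split()
    return [(term, swap) for term, swap in GENDERED_TERMS.items() if term in words]

ad = "We need a rockstar developer to join our guys on an aggressive roadmap."
for term, swap in flag_gendered_language(ad):
    print(f"Consider replacing '{term}' with '{swap}'")
```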
Furthermore, an AI can be programmed to specifically ignore demographic information, or factors like name or education details (exactly the things that trip humans up). Equally, an AI can be programmed to value data points that are pro-diversity: it could seek out various forms of cognitive diversity, or prioritize skills and experience over intangibles like verbal style or accent. And with perfect data processing, an AI can make sure that set quotas for candidate submission are being met at all times.
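Here is a similar sketch of what “blinding” an applicant record before scoring could look like. The field names and weights are assumptions made up for illustration; the point is that the scorer structurally never sees demographic fields.

```python
# A minimal sketch of blinding an applicant record before scoring.
# Field names and weights are illustrative assumptions, not a real schema.

DEMOGRAPHIC_FIELDS = {"name", "gender", "age", "photo_url", "school_name"}

def blind(applicant: dict) -> dict:
    """Drop fields the scorer should never see."""
    return {k: v for k, v in applicant.items() if k not in DEMOGRAPHIC_FIELDS}

def score(applicant: dict) -> float:
    """Score only on declared, job-relevant data points."""
    blinded = blind(applicant)
    return (
        2.0 * blinded.get("years_experience", 0)
        + 1.5 * len(blinded.get("matched_skills", []))
        + 1.0 * blinded.get("certifications", 0)
    )

applicant = {
    "name": "A. Candidate",            # removed before scoring
    "gender": "F",                     # removed before scoring
    "years_experience": 6,
    "matched_skills": ["python", "sql"],
    "certifications": 1,
}
print(score(applicant))  # 2.0*6 + 1.5*2 + 1.0*1 = 16.0
```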
The case of Amazon’s sexist algorithm shouldn’t be forgotten. If there is already bias in your process, simply setting an AI loose in your funnel will intensify that bias. For this reason, an AI recruiting assistant will always need human oversight, and should never be constructed as a black box that can’t be analyzed or optimized.
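One way to honor that “no black box” requirement is to make every automated decision explainable on demand. The sketch below, which reuses the illustrative weights from the scoring example above, records per-feature contributions so a human reviewer can see exactly why a candidate ranked where they did.

```python
# A minimal sketch of an auditable (non-black-box) scorer: record each
# candidate's per-feature contributions for human review. The weights
# reuse the illustrative assumptions from the scoring sketch above.

WEIGHTS = {"years_experience": 2.0, "matched_skills": 1.5, "certifications": 1.0}

def explain_score(applicant: dict) -> dict[str, float]:
    """Break a total score into per-feature contributions for audit."""
    return {
        "years_experience": WEIGHTS["years_experience"] * applicant.get("years_experience", 0),
        "matched_skills": WEIGHTS["matched_skills"] * len(applicant.get("matched_skills", [])),
        "certifications": WEIGHTS["certifications"] * applicant.get("certifications", 0),
    }

log_entry = explain_score(
    {"years_experience": 6, "matched_skills": ["python", "sql"], "certifications": 1}
)
print(log_entry)  # {'years_experience': 12.0, 'matched_skills': 3.0, 'certifications': 1.0}
```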
But calibrated right, an AI can act and analyze in a way that is far more rational and far less biased than the human mind. It can assess resumes using data, not subjective judgement. It can interview candidates in a setting that is pure fact-finding, without any interpersonal clouding.
Automation is here to stay. It’s up to the humans working in recruitment to make sure that it combats, rather than entrenches, injustice and bias.