Reduce Human Biases: How Wendy Conducts Conversations With Fewer Human Errors
Aug 02, 2019
In the early 1970s, the cognitive psychologists Amos Tversky and Daniel Kahneman coined the term cognitive bias. There are many cognitive biases, and each describes a way in which the human brain, in certain settings, makes judgements or decisions that are irrational. Cognitive biases are in-built errors in human thinking. They are endemic to the human mind; we all suffer from them, and it can take a great deal of willpower to resist or overcome them.
Cognitive biases pervade many areas of life, but they especially influence how we interact with people and how we conduct conversations. This extends to recruiting, which is founded on one-to-one interactions between people. In an ideal world, recruiters and candidates would enjoy a back-and-forth that was totally rational: a smooth exchange of the relevant information. Unfortunately, this isn’t how humans operate. Our thinking hardware is cluttered with biases that put us at risk of making irrational judgements, and of not processing information the way we should.
This is where an AI recruiter like Wendy can come to the rescue. Wendy automates top-of-funnel tasks at scale. She takes information about an open role, and she engages a range of both passive and active candidates. During the engagement, she conducts conversations with candidates that are patient, warm, highly personalized — and far less affected by the cognitive biases that can mar human conversation.
Here are a few cognitive biases that afflict humans, but which Wendy is less susceptible to:
Anchoring. Humans are susceptible to letting their first judgement colour all subsequent judgements of something. (Hence the importance we place on “first impressions.”) A human recruiter might place too much weight on the opening portion of an exchange with a candidate, and not adequately utilize information that arrives later. But the beauty of a machine is that it isn’t prone to this kind of human judgement; its interpretation won’t be marred by the anchoring effect.
Confirmation Bias. Humans seek out and disproportionately value information that confirms their pre-existing beliefs. Once a human recruiter establishes an early opinion of a candidate, they might let this colour the rest of the exchange, rather than staying open and flexible. Wendy will always consider all of the data she receives rationally and dispassionately, throughout the whole encounter.
Framing Effect. Human judgement of a person or situation is heavily influenced by delivery and semantics. Subtle, subconscious cues can nudge us in directions that aren’t rational. But an AI like Wendy will only ever judge a person on the information they communicate, not on how it happens to be framed.
Halo Effect. Humans are prone to thinking that people they like, or find attractive, are skilled and capable in unrelated areas of life. Being a software robot, Wendy can’t and won’t like (or find attractive!) humans, and won’t let such feelings muddy her assessment of them as candidates.
Availability Heuristic. Human judgements are often influenced most heavily by the things that spring most easily to mind, whether or not those things are the most useful pieces of information. Recent, powerful or unique associations get overweighted in terms of their usefulness and relevance. This can’t happen to Wendy; she operates using data and algorithmic information-processing, not emotive linkages.
Humans are great. But our minds aren’t perfect. When it comes to recruiting, machine intelligences like Wendy can help us conduct recruiter-candidate conversations that are far less affected by harmful cognitive biases.