Erring algorithms elicit harsher reactions than erring humans do. A new study published in the journal Computers in Human Behavior has uncovered some interesting features of this cognitive bias, popularly known as algorithm aversion.
More and more industries are automating jobs to minimize human error. Contrary to popular notions, however, algorithms also make mistakes, yet we hold them to very different performance standards than human workers: humans are expected to make mistakes (“to err is human”), but algorithms are supposed to be perfect. How do these deeply rooted notions influence our reactions to erring algorithms versus erring humans? And, more importantly, how do they affect hire-and-fire decisions?
In a new study by researchers Laetitia Renier, Marianne Schmid Mast, and Anely Bekbergenova, participants (N=880) read a story about a fictional victim, “John”. In the story, John is erroneously rejected for a job or a mortgage loan by either a human agent or an algorithm.
Gut reactions
To measure initial “gut” reactions, participants were asked to indicate to what extent they found the error by the decision maker acceptable. They were also asked to indicate the extent to which they felt negative emotions (i.e., anger, disgust, or hostility) towards the decision maker. All items were rated on a 5-point Likert scale (1 = strongly disagree to 5 = strongly agree). Participants found the error significantly less acceptable when it was made by an algorithm than when it was made by a human. They also reported stronger negative emotions towards the algorithm.
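The article does not spell out how these ratings were analyzed statistically. Purely as an illustration, a between-conditions comparison of acceptability ratings might look like the sketch below; the data, variable names, and the choice of Welch's t-test are assumptions for demonstration, not the authors' actual method.

```python
# Illustrative only: made-up 5-point Likert ratings for two between-subjects
# conditions (error made by a human vs. by an algorithm), compared with
# Welch's t-test. This is not the study's reported analysis.
import numpy as np
from scipy import stats

# Hypothetical acceptability ratings (1 = strongly disagree ... 5 = strongly agree)
acceptability_human = np.array([3, 4, 3, 5, 4, 3, 4, 2, 4, 3])
acceptability_algorithm = np.array([2, 3, 2, 1, 3, 2, 2, 3, 1, 2])

# Welch's t-test does not assume equal variances across conditions
t_stat, p_value = stats.ttest_ind(
    acceptability_human, acceptability_algorithm, equal_var=False
)

print(f"Human condition mean:     {acceptability_human.mean():.2f}")
print(f"Algorithm condition mean: {acceptability_algorithm.mean():.2f}")
print(f"Welch's t = {t_stat:.2f}, p = {p_value:.4f}")
```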
“If an algorithm makes an error, this is unexpected and contrary to its very nature of being perfect, which is why the reactions are more negative.”
Justice cognition
To measure subsequent “justice cognition”, participants were asked to indicate to what extent they blamed or forgave the decision maker. The researchers found that participants blamed the human significantly more than they blamed the algorithm, but they also forgave the human more than the algorithm. Moreover, participants thought that the human could be held accountable for the error, significantly more so than the algorithm.
Behavioral intentions
Finally, to determine what course of action participants would take to prevent the decision maker from erring again in the future, they were asked how likely they were to:
- Improve or train the decision maker,
- Stop using or fire the decision maker,
- Do nothing, and
- Keep using the company that uses/employs the decision maker
Participants felt more strongly about improving the erring algorithm than about training the erring human, even though the error and its consequences were the same. They also thought that erring once was grounds to stop using the algorithm, significantly more so than to fire the human, and they were more likely to say “nothing can be done” when the decision maker was a human. However, the researchers found no significant difference in how likely participants were to keep using the company, regardless of whether the error was made by an algorithm or a human.
“An erring algorithm does not just elicit a generalized negative reaction on all levels (instinctive, cognitive-moral, and behavioral), but a rather differentiated reaction pattern that is more in line with being perceived as non-human: negative gut reactions, no human-typical moral or justice cognitions, and a utilitarian, functional approach to behavioral intention.”
Read more about the study here.