Algorithm Aversion

This post is a short review of the paper "Algorithm Aversion: People Erroneously Avoid Algorithms After Seeing Them Err" by Berkeley J. Dietvorst, Joseph P. Simmons, and Cade Massey. I found this paper while searching for information about how people interact with, and choose to use, websites or other systems that have an intelligent algorithm as a core feature. This paper was a little outside the scope of that search, but the title caught my eye, so I decided to give it a read.

Algorithm aversion is the concept that people place less trust in algorithms than in humans, and avoid using algorithms even in domains where algorithms outperform humans. Before this paper I hadn't heard of algorithm aversion. That said, I also don't find it a hard idea to accept given my background and training in algorithm development. This paper may not have invented the term, but I feel it does an amazing job of beginning to understand what causes algorithm aversion.

Summary

This paper looks into the algorithm aversion phenomenon, trying to understand what actually causes it. The phenomenon has been observed in past work, but only anecdotal explanations have been offered for why it happens. This paper seeks to form an evidence-based understanding of which factors contribute to this bias against algorithms by performing five carefully controlled experiments based on one central methodology. First, people are introduced to one of two tasks: predicting the performance of MBA students, or predicting the number of airline passengers departing from different states. Subjects are then trained on a series of example data points. Depending on the condition, subjects may also see an algorithm's prediction, another person's prediction, and/or be asked to make their own prediction. After making and/or receiving predictions, the subjects are shown the actual outcomes, allowing them to judge how good the human and the algorithm are at the task. They are then asked to choose either the human or the algorithm for a series of paid trials, and are paid $1 for each trial in which their chosen predictor gets close enough to the true value.
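To make that incentive structure concrete, here is a minimal sketch of the payment scheme in Python. This is my own illustration, not the authors' materials; the trial count, error spreads, and tolerance below are hypothetical placeholders rather than values from the paper.

    import random

    # Minimal sketch of the paid-trials incentive described above.
    # All parameters (trial count, tolerance, error spreads) are
    # hypothetical placeholders, not values taken from the paper.
    def run_paid_trials(n_trials=15, tolerance=5.0,
                        algorithm_error_sd=4.0, human_error_sd=8.0,
                        chosen="algorithm", seed=0):
        """Pay $1 for every trial where the chosen predictor's estimate
        lands within `tolerance` of the true outcome."""
        rng = random.Random(seed)
        payout = 0
        for _ in range(n_trials):
            truth = rng.uniform(0, 100)  # unknown true outcome for this trial
            error_sd = algorithm_error_sd if chosen == "algorithm" else human_error_sd
            prediction = truth + rng.gauss(0, error_sd)  # chosen predictor's guess
            if abs(prediction - truth) <= tolerance:
                payout += 1  # $1 per close-enough trial
        return payout

    # Sticking with the (more accurate) algorithm tends to earn more,
    # which is what makes the observed aversion costly to subjects.
    print(run_paid_trials(chosen="algorithm"), run_paid_trials(chosen="human"))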

Using this methodology, the work isolates exposure to algorithm predictions (but not human predictions) as a significant factor in algorithm aversion. If subjects hadn't seen the algorithm in action during the "training" part of the experiment, around 60-70% of them chose the algorithm; if they had been shown the algorithm's predictions, they became much less likely to choose it (down to 20-40%). Exposure to human predictions didn't have a comparable effect. From this, the paper concludes that seeing an algorithm fail causes algorithm aversion, whereas seeing a human fail didn't seem to cause a significant human-aversion effect. This result holds across both domains investigated, is stronger when the task seems easier, and remains even when the subject isn't asked to use their own predictions.

If you want to understand this in more depth I recommend reading the paper. The paper also contains interesting results looking into the motivations and beliefs of their subjects about the relative performance of the algorithm and human predictors.

My thoughts

This work is very interesting. While the existence of algorithm aversion feels relatively obvious given my background, its cause is much less clear, so I feel the authors are doing important work looking into this phenomenon. As the paper points out, algorithms can already beat humans at some tasks, and the number of such tasks is only going to grow with time. As algorithms become a more dominant way of making predictions and informing decisions, we will need to tackle this aversion problem or society will not be able to fully benefit from advances in machine learning and artificial intelligence.

I liked their methodology; it led to a very simple experimental design that seemed easy to analyze and understand. I also liked that the paper reports on multiple instances of this experiment with various small changes, which suggests the result is general and robust (at least across two domains). By doing this, I feel the authors were able to conclusively link seeing an algorithm in action with algorithm aversion, and to lay out a methodology for future research to tease apart other confounding issues.

Questions and ideas for further exploration

  • Does this, in any way, interact with prospect theory or other mathematical models of how humans operate under uncertainty? - I wonder if pulling in such models would provide a framework for better understanding these results, and how exactly the situation changes when a prediction is labeled as human or non-human (see the sketch after this list).

  • How accurate were the algorithms? - Algorithms can perform at different degrees of accuracy. The paper says that the algorithms outperformed the subjects at these tasks, but makes it look like the advantage was small. I would love to see work digging into how algorithm accuracy affects algorithm aversion, and whether there is a threshold of accuracy necessary for human acceptance. To make this easier we could "cheat" by manipulating algorithm accuracy (or lying about the existence of an algorithm).

  • Can we cheat? - Can humans identify human or algorithm predictions without the label? If so, what properties of the predictions lead people to associate them with a human or with an algorithm?

  • How important is the labeling effect? - This paper begins to look into this question with an experiment where a stranger's predictions are used and the subject is given the choice between predictions labeled as human and algorithm. In this experiment, the only way the subject knows an algorithm from a human is the given label, and yet the aversion effects still held. Would algorithm aversion go away if we claimed the algorithm was a human? Would algorithm aversion affect human predictions if we claimed they were algorithms? What about other non-human intelligences, such as intelligent animals?
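On the prospect theory question above: as one possible starting point, here is a small sketch of the standard Tversky-Kahneman value function, using their commonly cited parameter estimates. The reviewed paper does not fit any such model; this is just one candidate framework for thinking about how subjects might weigh an algorithm's errors.

    # Illustrative sketch only: the Tversky-Kahneman prospect theory value
    # function, with their commonly cited parameter estimates. Nothing here
    # comes from the reviewed paper; it is one candidate modeling framework.
    def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
        """Subjective value of a gain or loss x relative to a reference point."""
        if x >= 0:
            return x ** alpha            # diminishing sensitivity to gains
        return -lam * ((-x) ** beta)     # losses loom larger than gains

    # Loss aversion: a $1 loss feels roughly twice as bad as a $1 gain feels good.
    print(prospect_value(1.0), prospect_value(-1.0))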