Human Centered Algorithm Research

There's a lot of talk in the HCI community about human centered approaches, such as human centered design methodologies. Human centered approaches make understanding the needs and wants of humans a core part of the research. The insight behind this is that to make things that truly matter to people, those people, and their contexts, need to be truly understood. By putting in the effort to understand the people who will interact with a design, a better, more meaningful design can be created.

When people talk about human centered design, they don't normally think about human centered algorithm design. In fact, some researchers might think that research improving intelligent algorithms and algorithm driven systems cannot be human centered. There is a (perhaps well deserved) stereotype in the community that algorithm driven research is focused only on making a number go up. Despite this, I think that human centered algorithms research is possible. Not only that, but I think that there are researchers out there making human centered algorithms research their primary goal who are doing important work. Most importantly, I think that as a community of researchers we need to talk about and better understand what it means to do human centered algorithms research.

One way to think about human centered algorithm research is as a response to failings in traditional metric centered algorithms research. In this regard, I'm not the first person to want to see more meaningful work from our brightest algorithmic thinkers. Without doing a long literature search, two papers that take this type of perspective are Machine Learning That Matters (Kiri L. Wagstaff) and Making recommendations better: an analytic model for human-recommender interaction (Sean M. McNee, John Riedl, Joseph A. Konstan). Both papers advocate for a more human centered approach to algorithms driven research. In Machine Learning That Matters, Kiri calls out the machine learning field for its hyper focus on a small number of abstract metrics and its lack of follow through and domain involvement in evaluation. Making Recommendations Better is one of several papers in the recommender systems field that call for more holistic evaluations of recommender systems. I recommend both reads (depending on your field), or looking for similar papers in your own problem domain that lay out concrete recommendations for how to approach algorithm work while keeping it relevant to the humans it serves. If you can't find such a paper, maybe you can be the one to write it.

What is human centered algorithm research?

All algorithm research has the goal of improving our understanding of how to design algorithms for a given task. Traditional algorithms work approaches this by trying to deeply understand the task and then formally optimizing some model against some, usually arbitrary, measure of success at that task. Human centered algorithm work goes beyond this and focuses on the context where an algorithm is used: both the system surrounding it and the humans who interact with it. By taking this focus, the measure of success for an algorithm changes from an arbitrary measurement to the algorithm's ability to serve the humans that interact with it.

Traditional algorithm research is great at designing algorithms for a given well defined set of inputs, outputs, and metrics. Human centered algorithm work is great at designing algorithms when the inputs, outputs, or metrics of the system are not well defined. By taking a human centered approach, an algorithm designer can find:

  • New properties in the world that are important to understanding the problem the algorithm is trying to address, but have been historically ignored.
  • Fundamentally new algorithm problems that, when solved, would offer greater support to the users than common approaches.
  • New properties of the algorithm or its outputs that are important to how humans interact with the algorithm.
  • Flaws with how we understand the inputs to our algorithms, or flaws in how we expect users to use the outputs.
  • Better evaluation metrics that align more closely with actual human needs (a small sketch of one such metric follows this list).
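
To make that last point a little more concrete, here is a minimal sketch, in Python, of one such metric: intra-list diversity, the average pairwise distance between the items in a recommendation list. The feature vectors and names here are hypothetical placeholders of my own, not taken from any specific paper; the point is only that this measures something about a list that accuracy metrics cannot see.

    import numpy as np

    def intra_list_diversity(recommended_items, item_features):
        """Average pairwise cosine distance between recommended items.

        recommended_items: list of item ids.
        item_features: dict mapping item id -> numpy feature vector
                       (assumed to be nonzero vectors).
        Higher values mean the list covers more varied content.
        """
        vectors = [item_features[i] for i in recommended_items]
        distances = []
        for a in range(len(vectors)):
            for b in range(a + 1, len(vectors)):
                u, v = vectors[a], vectors[b]
                cosine = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
                distances.append(1.0 - cosine)
        return float(np.mean(distances)) if distances else 0.0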

An example of this might be instructive. If you are a regular reader of this blog you are probably aware that I am a recommender systems researcher. In the recommender systems field we have been studying how to make automatic recommendations for longer than I've been alive (patents for early collaborative filtering algorithms were filed before I was born). In this time our approaches have grown in complexity from relatively straightforward user based algorithms (you and Bob like the same things; Bob liked this, so I will recommend it to you), through elaborate latent factor models, to increasingly complex learning to rank algorithms. All this development has tended to share one common feature, however: all of these algorithms have been developed explicitly to maximize some invented metric or another that we think will improve recommendations. To be fair, this development has been quite successful. However, we have long feared that we are reaching the limits of the improvements this type of work can make.
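
For readers outside the field, the "you and Bob like the same things" idea can be sketched in a few lines. This is a toy illustration rather than any production algorithm; the data layout and parameter names are my own:

    import numpy as np

    def recommend_user_based(ratings, target_user, k=5, n_recs=10):
        """Toy user-based collaborative filtering.

        ratings: 2D numpy array (users x items), 0 where unrated.
        Finds the k users most similar to target_user (cosine similarity),
        then scores the target's unrated items by a similarity-weighted
        average of the neighbors' ratings.
        """
        target = ratings[target_user]
        sims = []
        for u in range(ratings.shape[0]):
            if u == target_user:
                continue
            denom = np.linalg.norm(ratings[u]) * np.linalg.norm(target)
            sims.append((np.dot(ratings[u], target) / denom if denom else 0.0, u))
        neighbors = sorted(sims, reverse=True)[:k]

        scores = {}
        for item in range(ratings.shape[1]):
            if target[item] != 0:  # skip items the user has already rated
                continue
            num = sum(sim * ratings[u, item] for sim, u in neighbors)
            den = sum(abs(sim) for sim, u in neighbors if ratings[u, item] != 0)
            if den > 0:
                scores[item] = num / den
        return sorted(scores, key=scores.get, reverse=True)[:n_recs]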

Some researchers, however, have begun looking at other aspects of recommendations than just their ability to recall good items. In particular, I want to call out the early work studying the effect of recommendation diversity on user satisfaction. Two good papers for this are Improving recommendation lists through topic diversification (Cai-Nicolas Ziegler, Sean M. McNee, Joseph A. Konstan, Georg Lausen) and Using latent features diversification to reduce choice difficulty in recommendation lists (Martijn C. Willemsen, Bart P. Knijnenburg, Mark P. Graus, Linda C.M. Velter-Bremmers, Kai Fu). Both papers, I would say, take a human centered approach to understanding an algorithmic issue: the effect of intentionally introducing diversity into a recommender. Improving recommendation lists through topic diversification shows that deliberately introducing diversity into recommendations harms the quality of those recommendations according to traditional metrics. It goes on, however, to show that for moderate levels of added diversity users prefer the diversified lists. Using latent features diversification expands on this, using carefully analyzed survey methodologies to understand how the diversity of recommendations can help users more easily choose items from a recommended set. These works identified core aspects of the recommendation problem, as it relates to serving human needs, that are not captured by traditional evaluations.
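
To give a flavor of what "deliberately introducing diversity" can look like, here is a rough sketch of a greedy re-ranking step. It is loosely in the spirit of the topic diversification idea rather than a reproduction of either paper's method; the trade-off parameter and similarity function are placeholders of my own:

    import numpy as np

    def diversify(candidates, relevance, item_features, list_size=10, trade_off=0.3):
        """Greedily build a list that balances predicted relevance against
        similarity to the items already chosen.

        candidates: list of candidate item ids.
        relevance: dict item id -> predicted relevance score.
        item_features: dict item id -> numpy feature vector.
        trade_off: 0.0 = pure relevance, 1.0 = pure diversity.
        """
        def similarity(a, b):
            u, v = item_features[a], item_features[b]
            return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

        selected, remaining = [], list(candidates)
        while remaining and len(selected) < list_size:
            def value(item):
                # penalize items too similar to what is already in the list
                closest = max((similarity(item, s) for s in selected), default=0.0)
                return (1 - trade_off) * relevance[item] - trade_off * closest
            best = max(remaining, key=value)
            selected.append(best)
            remaining.remove(best)
        return selected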

How to perform human centered algorithms work

At this point I want to go into more detail about human centered algorithm work. I want to give you a step-by-step guide to doing human centered algorithm work. I want to give you a list of human centered techniques for algorithm improvement and design.

I can't give you these things. I'm still trying to figure out the boundaries of this "human centered algorithms research" idea, and until we have a discussion as a field I don't think anyone can really put together concrete guidelines.

What I can say to algorithms researchers and developers is: don't focus on the algorithm, focus on the system. Without thinking about the context of the algorithm, where its inputs come from, and where its outputs go, you will never find the humans in the algorithm you are building. Once you have found those humans, think about them, think about what their actions mean, and what they need from the system that the algorithm provides. Hopefully this will help you do more impactful algorithms work.

For everyone else, I hope you find this idea as compelling as I do. I would love to hear what you think about the idea of classifying human centered algorithms work as its own type of work. I would love to hear your stories and examples of what human centered algorithms work means to you. As always, feel free to reach out with any comments by email or through twitter.