Academic HCI over-emphasizes evaluation

In academic human-computer interaction, it’s often difficult to publish a systems paper without a quantitative evaluation. What’s silly about this is that the implementations of these systems are often so unpolished that, as Bret Victor points out, the expected difference from simply building a well-designed version of the same idea would exceed the measurement variance.

This field is trying too hard to be a science. Perhaps it can become a design science (per Herb Simon), but these evaluations rarely make sense as natural science.

References

  • Lieberman, H. The Tyranny of Evaluation
  • Pike, R. (2000). Systems Software Research is Irrelevant
    • “Too much phenomenology: invention has been replaced by observation. Today we see papers comparing interrupt latency on Linux vs. Windows. They may be interesting, they may even be relevant, but they aren’t research. In a misguided attempt to seem scientific, there’s too much measurement: performance minutiae and bad charts. … The art is gone. But art is not science, and that’s part of the point. Systems research cannot be just science; there must be engineering, design, and art.”
  • Lu Wilson, in “Academia from the outside,” suggests we might want something “academ-ish” between papers and blog posts
  • Brooks, R. (2018). A Brave, Creative, and Happy HRI. ACM Transactions on Human-Robot Interaction, 7(1), 1–3. https://doi.org/10.1145/3209540
    • “There are two approaches that seem to be accepted by the community. In one, some ten or twenty real live people are recruited and brought into a lab and are asked to interact with either a lab demonstration robot or a Wizard of Oz simulation of a robot. In the second, a few hundred workers from Mechanical Turk are used to fill out questionnaires on something that they can see through a web interface. No robots are even present in this second case.”

Queue

  • Greenberg, S., & Buxton, B. (2008). Usability evaluation considered harmful (some of the time). Proceedings of the Twenty-Sixth Annual CHI Conference on Human Factors in Computing Systems - CHI ’08, 111. https://doi.org/10.1145/1357054.1357074
  • Olsen, D. R. (2007). Evaluating user interface systems research. Proceedings of the 20th Annual ACM Symposium on User Interface Software and Technology - UIST ’07, 251. https://doi.org/10.1145/1294211.1294256
Last updated 2024-07-05.