Evaluation Strategies for HCI Toolkit Research

Research output: Contribution in Book/Report/Proceedings › Conference contribution

Publication status: Forthcoming
  • David Ledo
  • Steven Houben
  • Jo Vermeulen
  • Nicolai Marquardt
  • Lora Oehlberg
  • Saul Greenberg
Publication date: 7/01/2018
Host publication: CHI '18 Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems
Place of publication: New York
Original language: English

Abstract

Toolkit research plays an important role in the field of HCI, as it can heavily influence both the design and implementation of interactive systems. For publication, the HCI community typically expects toolkit research to include an evaluation component. The problem is that toolkit evaluation is challenging, as it is often unclear what ‘evaluating’ a toolkit means and which methods are appropriate. To address this problem, we analyzed 68 published toolkit papers. From our analysis, we provide an overview of and reflection on evaluation methods for toolkit contributions. We identify four toolkit evaluation strategies, along with the techniques each employs, and offer this categorization to toolkit researchers together with a discussion of the value, potential limitations, and trade-offs associated with each strategy.