TY - GEN
T1 - Evaluation Strategies for HCI Toolkit Research
AU - Ledo, David
AU - Houben, Steven
AU - Vermeulen, Jo
AU - Marquardt, Nicolai
AU - Oehlberg, Lora
AU - Greenberg, Saul
N1 - © Owner/Author, 2018. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in CHI '18 Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems http://dx.doi.org/10.1145/3173574.3173610
PY - 2018/4/21
Y1 - 2018/4/21
N2 - Toolkit research plays an important role in the field of HCI, as it can heavily influence both the design and implementation of interactive systems. For publication, the HCI community typically expects toolkit research to include an evaluation component. The problem is that toolkit evaluation is challenging, as it is often unclear what ‘evaluating’ a toolkit means and what methods are appropriate. To address this problem, we analyzed 68 published toolkit papers. From our analysis, we provide an overview of, reflection on, and discussion of evaluation methods for toolkit contributions. We identify and discuss the value of four toolkit evaluation strategies, including the associated techniques that each employs. We offer a categorization of evaluation strategies for toolkit researchers, along with a discussion of the value, potential limitations, and trade-offs associated with each strategy.
AB - Toolkit research plays an important role in the field of HCI, as it can heavily influence both the design and implementation of interactive systems. For publication, the HCI community typically expects toolkit research to include an evaluation component. The problem is that toolkit evaluation is challenging, as it is often unclear what ‘evaluating’ a toolkit means and what methods are appropriate. To address this problem, we analyzed 68 published toolkit papers. From our analysis, we provide an overview of, reflection on, and discussion of evaluation methods for toolkit contributions. We identify and discuss the value of four toolkit evaluation strategies, including the associated techniques that each employs. We offer a categorization of evaluation strategies for toolkit researchers, along with a discussion of the value, potential limitations, and trade-offs associated with each strategy.
KW - Toolkits
KW - user interfaces
KW - prototyping
KW - design
KW - evaluation
U2 - 10.1145/3173574.3173610
DO - 10.1145/3173574.3173610
M3 - Conference contribution/Paper
SN - 9781450356206
SP - 1
EP - 17
BT - CHI '18 Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems
PB - ACM
CY - New York
ER -