
Electronic data

  • Evaluation Strategies in HCI Toolkit Research - FINAL

    Rights statement: © Owner/Author, 2018. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in CHI '18 Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems http://dx.doi.org/10.1145/3173574.3173610

    Accepted author manuscript, 1.45 MB, PDF document


Evaluation Strategies for HCI Toolkit Research

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published

Standard

Evaluation Strategies for HCI Toolkit Research. / Ledo, David; Houben, Steven; Vermeulen, Jo et al.
CHI '18 Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. New York: ACM, 2018. pp. 1-17, Article 36.


Harvard

Ledo, D, Houben, S, Vermeulen, J, Marquardt, N, Oehlberg, L & Greenberg, S 2018, Evaluation Strategies for HCI Toolkit Research. in CHI '18 Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems., 36, ACM, New York, pp. 1-17. https://doi.org/10.1145/3173574.3173610

APA

Ledo, D., Houben, S., Vermeulen, J., Marquardt, N., Oehlberg, L., & Greenberg, S. (2018). Evaluation Strategies for HCI Toolkit Research. In CHI '18 Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (pp. 1-17). Article 36. ACM. https://doi.org/10.1145/3173574.3173610

Vancouver

Ledo D, Houben S, Vermeulen J, Marquardt N, Oehlberg L, Greenberg S. Evaluation Strategies for HCI Toolkit Research. In CHI '18 Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. New York: ACM. 2018. p. 1-17. Article 36. doi: 10.1145/3173574.3173610

Author

Ledo, David ; Houben, Steven ; Vermeulen, Jo et al. / Evaluation Strategies for HCI Toolkit Research. CHI '18 Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. New York : ACM, 2018. pp. 1-17

Bibtex

@inproceedings{2fb5b0b9036d4c5bbcdc478bb48e509e,
title = "Evaluation Strategies for HCI Toolkit Research",
abstract = "Toolkit research plays an important role in the field of HCI, as it can heavily influence both the design and implementation of interactive systems. For publication, the HCI community typically expects toolkit research to include an evaluation component. The problem is that toolkit evaluation is challenging, as it is often unclear what {\textquoteleft}evaluating{\textquoteright} a toolkit means and what methods are appropriate. To address this problem, we analyzed 68 published toolkit papers. From our analysis, we provide an overview of, reflection on, and discussion of evaluation methods for toolkit contributions. We identify and discuss the value of four toolkit evaluation strategies, including the associated techniques that each employs. We offer a categorization of evaluation strategies for toolkit researchers, along with a discussion of the value, potential limitations, and trade-offs associated with each strategy.",
keywords = "Toolkits, user interfaces, prototyping, design, evaluation",
author = "David Ledo and Steven Houben and Jo Vermeulen and Nicolai Marquardt and Lora Oehlberg and Saul Greenberg",
note = "{\textcopyright} Owner/Author, 2018. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in CHI '18 Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems http://dx.doi.org/10.1145/3173574.3173610",
year = "2018",
month = apr,
day = "21",
doi = "10.1145/3173574.3173610",
language = "English",
isbn = "9781450356206",
pages = "1--17",
booktitle = "CHI '18 Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems",
publisher = "ACM",
address = "New York",
}
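The BibTeX entry above can be cited from a LaTeX document as follows. This is a minimal sketch assuming the entry has been saved to a file named `references.bib` (the filename is illustrative; the citation key is the one generated in the entry above):

```latex
\documentclass{article}
\begin{document}

% Cite the entry by its generated BibTeX key
Ledo et al.~\cite{2fb5b0b9036d4c5bbcdc478bb48e509e} analyze the
evaluation strategies used in published HCI toolkit papers.

\bibliographystyle{plain}
\bibliography{references} % expects references.bib alongside this file

\end{document}
```

Running `pdflatex`, then `bibtex`, then `pdflatex` twice resolves the citation and builds the reference list.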

RIS

TY - GEN

T1 - Evaluation Strategies for HCI Toolkit Research

AU - Ledo, David

AU - Houben, Steven

AU - Vermeulen, Jo

AU - Marquardt, Nicolai

AU - Oehlberg, Lora

AU - Greenberg, Saul

N1 - © Owner/Author, 2018. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in CHI '18 Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems http://dx.doi.org/10.1145/3173574.3173610

PY - 2018/4/21

Y1 - 2018/4/21

N2 - Toolkit research plays an important role in the field of HCI, as it can heavily influence both the design and implementation of interactive systems. For publication, the HCI community typically expects toolkit research to include an evaluation component. The problem is that toolkit evaluation is challenging, as it is often unclear what ‘evaluating’ a toolkit means and what methods are appropriate. To address this problem, we analyzed 68 published toolkit papers. From our analysis, we provide an overview of, reflection on, and discussion of evaluation methods for toolkit contributions. We identify and discuss the value of four toolkit evaluation strategies, including the associated techniques that each employs. We offer a categorization of evaluation strategies for toolkit researchers, along with a discussion of the value, potential limitations, and trade-offs associated with each strategy.

AB - Toolkit research plays an important role in the field of HCI, as it can heavily influence both the design and implementation of interactive systems. For publication, the HCI community typically expects toolkit research to include an evaluation component. The problem is that toolkit evaluation is challenging, as it is often unclear what ‘evaluating’ a toolkit means and what methods are appropriate. To address this problem, we analyzed 68 published toolkit papers. From our analysis, we provide an overview of, reflection on, and discussion of evaluation methods for toolkit contributions. We identify and discuss the value of four toolkit evaluation strategies, including the associated techniques that each employs. We offer a categorization of evaluation strategies for toolkit researchers, along with a discussion of the value, potential limitations, and trade-offs associated with each strategy.

KW - Toolkits

KW - user interfaces

KW - prototyping

KW - design

KW - evaluation

U2 - 10.1145/3173574.3173610

DO - 10.1145/3173574.3173610

M3 - Conference contribution/Paper

SN - 9781450356206

SP - 1

EP - 17

BT - CHI '18 Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems

PB - ACM

CY - New York

ER -