Rights statement: © ACM, 2018. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in CHI 2018 - Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems, 2018 http://doi.acm.org/10.1145/3170427.3185371
Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License
Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review
TY - GEN
T1 - Evaluating HCI research beyond usability
AU - Remy, Christian
AU - Bates, Oliver
AU - Mankoff, Jennifer
AU - Friday, Adrian
N1 - © ACM, 2018. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in CHI 2018 - Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems, 2018 http://doi.acm.org/10.1145/3170427.3185371
PY - 2018/4/20
Y1 - 2018/4/20
AB - Evaluating research artefacts is an important step to showcase the validity of a chosen approach. The CHI community has developed and agreed upon a large variety of evaluation methods for HCI research; however, sometimes those methods are not applicable or not sufficient. This is especially the case when the contribution lies within the context of the application area, such as for research in sustainable HCI, HCI for development, or design fiction and futures studies. In this SIG, we invite the CHI community to share their insights from projects that encountered problems in evaluating research and aim to discuss solutions for this difficult topic. We invite researchers from all areas of HCI research who are interested to engage in a debate of issues in the process of validating research artefacts.
KW - Design Fiction
KW - Evaluation
KW - Futures Studies
KW - HCI4D
KW - Research Methods
KW - Sustainable HCI
KW - Validation
U2 - 10.1145/3170427.3185371
DO - 10.1145/3170427.3185371
M3 - Conference contribution/Paper
AN - SCOPUS:85052025182
T3 - Conference on Human Factors in Computing Systems - Proceedings
BT - CHI 2018 - Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems
PB - Association for Computing Machinery (ACM)
T2 - 2018 CHI Conference on Human Factors in Computing Systems, CHI EA 2018
Y2 - 21 April 2018 through 26 April 2018
ER -