
Explanation Strategies as an Empirical-Analytical Lens for Socio-Technical Contextualization of Machine Learning Interpretability

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

Explanation Strategies as an Empirical-Analytical Lens for Socio-Technical Contextualization of Machine Learning Interpretability. / Benjamin, Jesse; Kinkeldey, Christoph; Müller-Birn, Claudia et al.
In: Proceedings of the ACM on Human-Computer Interaction, Vol. 6, 14.01.2022, p. 39:1-39:25.

Harvard

Benjamin, J, Kinkeldey, C, Müller-Birn, C, Korjakow, T & Herbst, E-M 2022, 'Explanation Strategies as an Empirical-Analytical Lens for Socio-Technical Contextualization of Machine Learning Interpretability', Proceedings of the ACM on Human-Computer Interaction, vol. 6, pp. 39:1-39:25. https://doi.org/10.1145/3492858

APA

Benjamin, J., Kinkeldey, C., Müller-Birn, C., Korjakow, T., & Herbst, E.-M. (2022). Explanation Strategies as an Empirical-Analytical Lens for Socio-Technical Contextualization of Machine Learning Interpretability. Proceedings of the ACM on Human-Computer Interaction, 6, 39:1-39:25. https://doi.org/10.1145/3492858

Vancouver

Benjamin J, Kinkeldey C, Müller-Birn C, Korjakow T, Herbst EM. Explanation Strategies as an Empirical-Analytical Lens for Socio-Technical Contextualization of Machine Learning Interpretability. Proceedings of the ACM on Human-Computer Interaction. 2022 Jan 14;6:39:1-39:25. doi: 10.1145/3492858

Author

Benjamin, Jesse ; Kinkeldey, Christoph ; Müller-Birn, Claudia et al. / Explanation Strategies as an Empirical-Analytical Lens for Socio-Technical Contextualization of Machine Learning Interpretability. In: Proceedings of the ACM on Human-Computer Interaction. 2022 ; Vol. 6. pp. 39:1-39:25.

BibTeX

@article{43a8440bf81b4f8f8ac0e1f3649a813b,
title = "Explanation Strategies as an Empirical-Analytical Lens for Socio-Technical Contextualization of Machine Learning Interpretability",
abstract = "During a research project in which we developed a machine learning (ML)-driven visualization system for non-ML experts, we reflected on interpretability research in ML, computer-supported cooperative work, and human-computer interaction. We found that while there are manifold technical approaches, these often focus on ML experts and are evaluated in decontextualized empirical studies. We hypothesized that participatory design research may support the understanding of stakeholders' situated sense-making in our project, yet found guidance regarding ML interpretability inexhaustive. Building on philosophy of technology, we formulated explanation strategies as an empirical-analytical lens that explicates how technical explanations mediate contextual preferences concerning people's interpretations. In this paper, we contribute a report of our proof-of-concept use of explanation strategies to analyze a co-design workshop with non-ML experts, methodological implications for participatory design research, and design implications for explanations for non-ML experts, and we suggest further investigation of technological mediation theories in the ML interpretability space.",
author = "Jesse Benjamin and Christoph Kinkeldey and Claudia M{\"u}ller-Birn and Tim Korjakow and Eva-Maria Herbst",
year = "2022",
month = jan,
day = "14",
doi = "10.1145/3492858",
language = "English",
volume = "6",
pages = "39:1--39:25",
journal = "Proceedings of the ACM on Human-Computer Interaction",
publisher = "ACM",
}

RIS

TY - JOUR

T1 - Explanation Strategies as an Empirical-Analytical Lens for Socio-Technical Contextualization of Machine Learning Interpretability

AU - Benjamin, Jesse

AU - Kinkeldey, Christoph

AU - Müller-Birn, Claudia

AU - Korjakow, Tim

AU - Herbst, Eva-Maria

PY - 2022/1/14

Y1 - 2022/1/14

N2 - During a research project in which we developed a machine learning (ML)-driven visualization system for non-ML experts, we reflected on interpretability research in ML, computer-supported cooperative work, and human-computer interaction. We found that while there are manifold technical approaches, these often focus on ML experts and are evaluated in decontextualized empirical studies. We hypothesized that participatory design research may support the understanding of stakeholders' situated sense-making in our project, yet found guidance regarding ML interpretability inexhaustive. Building on philosophy of technology, we formulated explanation strategies as an empirical-analytical lens that explicates how technical explanations mediate contextual preferences concerning people's interpretations. In this paper, we contribute a report of our proof-of-concept use of explanation strategies to analyze a co-design workshop with non-ML experts, methodological implications for participatory design research, and design implications for explanations for non-ML experts, and we suggest further investigation of technological mediation theories in the ML interpretability space.

AB - During a research project in which we developed a machine learning (ML)-driven visualization system for non-ML experts, we reflected on interpretability research in ML, computer-supported cooperative work, and human-computer interaction. We found that while there are manifold technical approaches, these often focus on ML experts and are evaluated in decontextualized empirical studies. We hypothesized that participatory design research may support the understanding of stakeholders' situated sense-making in our project, yet found guidance regarding ML interpretability inexhaustive. Building on philosophy of technology, we formulated explanation strategies as an empirical-analytical lens that explicates how technical explanations mediate contextual preferences concerning people's interpretations. In this paper, we contribute a report of our proof-of-concept use of explanation strategies to analyze a co-design workshop with non-ML experts, methodological implications for participatory design research, and design implications for explanations for non-ML experts, and we suggest further investigation of technological mediation theories in the ML interpretability space.

U2 - 10.1145/3492858

DO - 10.1145/3492858

M3 - Journal article

VL - 6

SP - 39:1-39:25

JO - Proceedings of the ACM on Human-Computer Interaction

JF - Proceedings of the ACM on Human-Computer Interaction

ER -