
Electronic data

  • CSCW_2021Wewouldneversay_that_June_2020_final_submission

    Rights statement: © ACM, 2021. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in Proceedings of the ACM on Human-Computer Interaction - CSCW, 5, 1, 2021 http://doi.acm.org/10.1145/3449176

    Accepted author manuscript, 497 KB, PDF document

    Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License

Links

Text available via DOI: https://doi.org/10.1145/3449176


"We Would Never Write That Down": Classifications of Unemployed and Data Challenges for AI

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

"We Would Never Write That Down": Classifications of Unemployed and Data Challenges for AI. / Petersen, Anette C. M.; Christensen, Lars Rune; Harper, Richard et al.
In: Proceedings of the ACM on Human-Computer Interaction - CSCW, Vol. 5, No. CSCW1, 102, 22.04.2021, p. 1-26.

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Harvard

Petersen, ACM, Christensen, LR, Harper, R & Hildebrandt, T 2021, '"We Would Never Write That Down": Classifications of Unemployed and Data Challenges for AI', Proceedings of the ACM on Human-Computer Interaction - CSCW, vol. 5, no. CSCW1, 102, pp. 1-26. https://doi.org/10.1145/3449176

APA

Petersen, A. C. M., Christensen, L. R., Harper, R., & Hildebrandt, T. (2021). "We Would Never Write That Down": Classifications of Unemployed and Data Challenges for AI. Proceedings of the ACM on Human-Computer Interaction - CSCW, 5(CSCW1), 1-26. Article 102. https://doi.org/10.1145/3449176

Vancouver

Petersen ACM, Christensen LR, Harper R, Hildebrandt T. "We Would Never Write That Down": Classifications of Unemployed and Data Challenges for AI. Proceedings of the ACM on Human-Computer Interaction - CSCW. 2021 Apr 22;5(CSCW1):1-26. 102. doi: 10.1145/3449176

Author

Petersen, Anette C. M. ; Christensen, Lars Rune ; Harper, Richard et al. / "We Would Never Write That Down": Classifications of Unemployed and Data Challenges for AI. In: Proceedings of the ACM on Human-Computer Interaction - CSCW. 2021 ; Vol. 5, No. CSCW1. pp. 1-26.

Bibtex

@article{ec6365082da14e44a161bac62c40f3e8,
title = "{"}We Would Never Write That Down{"}: Classifications of Unemployed and Data Challenges for AI",
abstract = "This paper draws attention to new complexities of deploying artificial intelligence (AI) to sensitive contexts, such as welfare allocation. AI is increasingly used in public administration with the promise of improving decision-making through predictive modelling. To accurately predict, it needs all the agreed criteria used as part of decisions, formal and informal. This paper empirically explores the informal classifications used by caseworkers to make unemployed welfare seekers 'fit' into the formal categories applied in a Danish job centre. Our findings show that these classifications are documentable, and hence traceable to AI. However, to the caseworkers, they are at odds with the stable explanations assumed by any bureaucratic recording system as they involve negotiated and situated judgments of people's character. Thus, for moral reasons, caseworkers find them ill-suited for formal representation and predictive purposes and choose not to write them down. As a result, although classification work is crucial to the job centre's activities, AI is denuded of the real-world (and real work) character of decision-making in this context. This is an important finding for CSCW as it is not only about whether AI can 'do' decision-making in particular contexts, as previous research has argued. This paper shows that problems may also be caused by people's unwillingness to provide data to systems. It is the purpose of this paper to present the empirical results of this research, followed by a discussion of implications for AI-supported practice and research.",
keywords = "Computer Networks and Communications, Human-Computer Interaction, Social Sciences (miscellaneous)",
author = "Petersen, {Anette C. M.} and Christensen, {Lars Rune} and Richard Harper and Thomas Hildebrandt",
note = "{\textcopyright} ACM, 2021. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in Proceedings of the ACM on Human-Computer Interaction - CSCW, 5, 1, 2021 http://doi.acm.org/10.1145/3449176",
year = "2021",
month = apr,
day = "22",
doi = "10.1145/3449176",
language = "English",
volume = "5",
pages = "1--26",
journal = "Proceedings of the ACM on Human-Computer Interaction - CSCW",
issn = "2573-0142",
publisher = "Association for Computing Machinery (ACM)",
number = "CSCW1",

}

RIS

TY - JOUR

T1 - "We Would Never Write That Down"

T2 - Classifications of Unemployed and Data Challenges for AI

AU - Petersen, Anette C. M.

AU - Christensen, Lars Rune

AU - Harper, Richard

AU - Hildebrandt, Thomas

N1 - © ACM, 2021. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in Proceedings of the ACM on Human-Computer Interaction - CSCW, 5, 1, 2021 http://doi.acm.org/10.1145/3449176

PY - 2021/4/22

Y1 - 2021/4/22

N2 - This paper draws attention to new complexities of deploying artificial intelligence (AI) to sensitive contexts, such as welfare allocation. AI is increasingly used in public administration with the promise of improving decision-making through predictive modelling. To accurately predict, it needs all the agreed criteria used as part of decisions, formal and informal. This paper empirically explores the informal classifications used by caseworkers to make unemployed welfare seekers 'fit' into the formal categories applied in a Danish job centre. Our findings show that these classifications are documentable, and hence traceable to AI. However, to the caseworkers, they are at odds with the stable explanations assumed by any bureaucratic recording system as they involve negotiated and situated judgments of people's character. Thus, for moral reasons, caseworkers find them ill-suited for formal representation and predictive purposes and choose not to write them down. As a result, although classification work is crucial to the job centre's activities, AI is denuded of the real-world (and real work) character of decision-making in this context. This is an important finding for CSCW as it is not only about whether AI can 'do' decision-making in particular contexts, as previous research has argued. This paper shows that problems may also be caused by people's unwillingness to provide data to systems. It is the purpose of this paper to present the empirical results of this research, followed by a discussion of implications for AI-supported practice and research.

AB - This paper draws attention to new complexities of deploying artificial intelligence (AI) to sensitive contexts, such as welfare allocation. AI is increasingly used in public administration with the promise of improving decision-making through predictive modelling. To accurately predict, it needs all the agreed criteria used as part of decisions, formal and informal. This paper empirically explores the informal classifications used by caseworkers to make unemployed welfare seekers 'fit' into the formal categories applied in a Danish job centre. Our findings show that these classifications are documentable, and hence traceable to AI. However, to the caseworkers, they are at odds with the stable explanations assumed by any bureaucratic recording system as they involve negotiated and situated judgments of people's character. Thus, for moral reasons, caseworkers find them ill-suited for formal representation and predictive purposes and choose not to write them down. As a result, although classification work is crucial to the job centre's activities, AI is denuded of the real-world (and real work) character of decision-making in this context. This is an important finding for CSCW as it is not only about whether AI can 'do' decision-making in particular contexts, as previous research has argued. This paper shows that problems may also be caused by people's unwillingness to provide data to systems. It is the purpose of this paper to present the empirical results of this research, followed by a discussion of implications for AI-supported practice and research.

KW - Computer Networks and Communications

KW - Human-Computer Interaction

KW - Social Sciences (miscellaneous)

U2 - 10.1145/3449176

DO - 10.1145/3449176

M3 - Journal article

VL - 5

SP - 1

EP - 26

JO - Proceedings of the ACM on Human-Computer Interaction - CSCW

JF - Proceedings of the ACM on Human-Computer Interaction - CSCW

SN - 2573-0142

IS - CSCW1

M1 - 102

ER -