
Electronic data

  • CSCW_2021Wewouldneversay_that_June_2020_final_submission

    Rights statement: © ACM, 2021. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in Proceedings of the ACM on Human-Computer Interaction - CSCW, 5, 1, 2021 http://doi.acm.org/10.1145/3449176

    Accepted author manuscript, 497 KB, PDF document

    Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License

Links

Text available via DOI: http://doi.acm.org/10.1145/3449176


"We Would Never Write That Down": Classifications of Unemployed and Data Challenges for AI

Research output: Contribution to Journal/Magazine › Journal article › peer-review

  • Anette C. M. Petersen
  • Lars Rune Christensen
  • Richard Harper
  • Thomas Hildebrandt
Article number: 102
Journal publication date: 22/04/2021
Journal: Proceedings of the ACM on Human-Computer Interaction - CSCW
Issue number: CSCW1
Volume: 5
Number of pages: 26
Pages (from-to): 1-26
Publication status: Published
Original language: English

Abstract

This paper draws attention to new complexities of deploying artificial intelligence (AI) to sensitive contexts, such as welfare allocation. AI is increasingly used in public administration with the promise of improving decision-making through predictive modelling. To accurately predict, it needs all the agreed criteria used as part of decisions, formal and informal. This paper empirically explores the informal classifications used by caseworkers to make unemployed welfare seekers 'fit' into the formal categories applied in a Danish job centre. Our findings show that these classifications are documentable, and hence traceable to AI. However, to the caseworkers, they are at odds with the stable explanations assumed by any bureaucratic recording system as they involve negotiated and situated judgments of people's character. Thus, for moral reasons, caseworkers find them ill-suited for formal representation and predictive purposes and choose not to write them down. As a result, although classification work is crucial to the job centre's activities, AI is denuded of the real-world (and real work) character of decision-making in this context. This is an important finding for CSCW as it is not only about whether AI can 'do' decision-making in particular contexts, as previous research has argued. This paper shows that problems may also be caused by people's unwillingness to provide data to systems. It is the purpose of this paper to present the empirical results of this research, followed by a discussion of implications for AI-supported practice and research.
