TY - JOUR
T1 - "We Would Never Write That Down"
T2 - Classifications of Unemployed and Data Challenges for AI
AU - Petersen, Anette C. M.
AU - Christensen, Lars Rune
AU - Harper, Richard
AU - Hildebrandt, Thomas
N1 - © ACM, 2021. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in Proceedings of the ACM on Human-Computer Interaction - CSCW, 5, 1, 2021 http://doi.acm.org/10.1145/3449176
PY - 2021/4/22
Y1 - 2021/4/22
AB - This paper draws attention to new complexities of deploying artificial intelligence (AI) to sensitive contexts, such as welfare allocation. AI is increasingly used in public administration with the promise of improving decision-making through predictive modelling. To accurately predict, it needs all the agreed criteria used as part of decisions, formal and informal. This paper empirically explores the informal classifications used by caseworkers to make unemployed welfare seekers 'fit' into the formal categories applied in a Danish job centre. Our findings show that these classifications are documentable, and hence traceable to AI. However, to the caseworkers, they are at odds with the stable explanations assumed by any bureaucratic recording system as they involve negotiated and situated judgments of people's character. Thus, for moral reasons, caseworkers find them ill-suited for formal representation and predictive purposes and choose not to write them down. As a result, although classification work is crucial to the job centre's activities, AI is denuded of the real-world (and real work) character of decision-making in this context. This is an important finding for CSCW as it is not only about whether AI can 'do' decision-making in particular contexts, as previous research has argued. This paper shows that problems may also be caused by people's unwillingness to provide data to systems. It is the purpose of this paper to present the empirical results of this research, followed by a discussion of implications for AI-supported practice and research.
KW - Computer Networks and Communications
KW - Human-Computer Interaction
KW - Social Sciences (miscellaneous)
U2 - 10.1145/3449176
DO - 10.1145/3449176
M3 - Journal article
VL - 5
SP - 1
EP - 26
JO - Proceedings of the ACM on Human-Computer Interaction - CSCW
JF - Proceedings of the ACM on Human-Computer Interaction - CSCW
SN - 2573-0142
IS - CSCW1
M1 - 102
ER -