This paper draws attention to new complexities of deploying artificial intelligence (AI) in sensitive contexts, such as welfare allocation. AI is increasingly used in public administration with the promise of improving decision-making through predictive modelling. To predict accurately, it needs all the agreed criteria used as part of decisions, formal and informal alike. This paper empirically explores the informal classifications caseworkers use to make unemployed welfare seekers 'fit' into the formal categories applied in a Danish job centre. Our findings show that these classifications are documentable and hence could be made traceable to AI. To the caseworkers, however, they are at odds with the stable explanations assumed by any bureaucratic recording system, as they involve negotiated and situated judgments of people's character. Thus, for moral reasons, caseworkers find them ill-suited for formal representation and predictive purposes and choose not to write them down. As a result, although classification work is crucial to the job centre's activities, AI is denuded of the real-world (and real work) character of decision-making in this context. This is an important finding for CSCW: the question is not only whether AI can 'do' decision-making in particular contexts, as previous research has argued. As this paper shows, problems may also be caused by people's unwillingness to provide data to systems. The purpose of this paper is to present the empirical results of this research, followed by a discussion of the implications for AI-supported practice and research.
© ACM, 2021. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in Proceedings of the ACM on Human-Computer Interaction (CSCW), 5(1), 2021. http://doi.acm.org/10.1145/3449176