
Electronic data

  • EAAMO_22_paper_46

    Accepted author manuscript, 613 KB, PDF document

    Available under license: CC BY-NC (Creative Commons Attribution-NonCommercial 4.0 International)


Humble Machines: Attending to the Underappreciated Costs of Misplaced Distrust

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

E-pub ahead of print
Publication date: 2/08/2022
Host publication: 2022 ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization
Place of publication: New York
Publisher: ACM
Number of pages: 11
Original language: English

Abstract

It is curious that AI increasingly outperforms human decision makers, yet much of the public distrusts AI to make decisions affecting their lives. In this paper we explore a novel theory that may explain one reason for this. We propose that public distrust of AI is a moral consequence of designing systems that prioritize reducing the costs of false positives over the less tangible costs of false negatives. We show that such systems, which we characterize as 'distrustful', are more likely to miscategorize trustworthy individuals, with cascading consequences for both those individuals and the overall human-AI trust relationship. Ultimately, we argue that public distrust of AI stems from well-founded concern about the potential of being miscategorized. We propose that restoring public trust in AI will require that systems be designed to embody a stance of 'humble trust', whereby the moral costs of the misplaced distrust associated with false negatives are weighted appropriately during development and use.
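
The cost asymmetry described above can be made concrete with a standard expected-cost decision rule. The sketch below is illustrative only; it does not appear in the paper, and the cost values are hypothetical. It shows how the probability threshold for extending trust shifts when the cost of a false positive (trusting an untrustworthy individual) is weighted far above the cost of a false negative (distrusting a trustworthy individual), producing the 'distrustful' behaviour the abstract describes.

```python
# Illustrative sketch only -- the paper gives no formulas or code.
# Expected-cost decision rule for a binary trust/distrust decision:
# trust when p * cost_fn > (1 - p) * cost_fp, where p = P(trustworthy),
# i.e. when p exceeds cost_fp / (cost_fp + cost_fn).

def trust_threshold(cost_fp: float, cost_fn: float) -> float:
    """Probability of trustworthiness above which trusting minimizes expected cost.

    cost_fp: cost of trusting an untrustworthy individual (false positive).
    cost_fn: cost of distrusting a trustworthy individual (false negative).
    """
    return cost_fp / (cost_fp + cost_fn)

# A 'distrustful' system: false positives weighted 9x false negatives,
# so only near-certainly trustworthy individuals are trusted.
print(trust_threshold(cost_fp=9.0, cost_fn=1.0))  # 0.9

# A 'humble trust' stance: the moral cost of misplaced distrust is
# weighted up to parity, lowering the bar for extending trust.
print(trust_threshold(cost_fp=1.0, cost_fn=1.0))  # 0.5
```

Under this (assumed) framing, weighting the false-negative cost appropriately directly lowers the threshold at which the system extends trust, which is one way to read the paper's call for 'humble trust'.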