
Electronic data

  • EAAMO_22_paper_46

    Accepted author manuscript, 613 KB, PDF document

    Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License


Humble Machines: Attending to the Underappreciated Costs of Misplaced Distrust

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

E-pub ahead of print

Standard

Humble Machines: Attending to the Underappreciated Costs of Misplaced Distrust. / Knowles, Bran.
2022 ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization. New York: ACM, 2022.


Harvard

Knowles, B 2022, Humble Machines: Attending to the Underappreciated Costs of Misplaced Distrust. in 2022 ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization. ACM, New York. <https://arxiv.org/abs/2208.01305>

APA

Knowles, B. (2022). Humble Machines: Attending to the Underappreciated Costs of Misplaced Distrust. In 2022 ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization. ACM. Advance online publication. https://arxiv.org/abs/2208.01305

Vancouver

Knowles B. Humble Machines: Attending to the Underappreciated Costs of Misplaced Distrust. In 2022 ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization. New York: ACM; 2022. Epub 2022 Aug 2.

Author

Knowles, Bran. / Humble Machines: Attending to the Underappreciated Costs of Misplaced Distrust. 2022 ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization. New York: ACM, 2022.

Bibtex

@inproceedings{6971a62040a64bc783b4e24aed4bbaf6,
title = "Humble Machines: Attending to the Underappreciated Costs of Misplaced Distrust",
abstract = "It is curious that AI increasingly outperforms human decision makers, yet much of the public distrusts AI to make decisions affecting their lives. In this paper we explore a novel theory that may explain one reason for this. We propose that public distrust of AI is a moral consequence of designing systems that prioritize reduction of costs of false positives over less tangible costs of false negatives. We show that such systems, which we characterize as 'distrustful', are more likely to miscategorize trustworthy individuals, with cascading consequences to both those individuals and the overall human-AI trust relationship. Ultimately, we argue that public distrust of AI stems from well-founded concern about the potential of being miscategorized. We propose that restoring public trust in AI will require that systems are designed to embody a stance of 'humble trust', whereby the moral costs of the misplaced distrust associated with false negatives are weighted appropriately during development and use.",
author = "Bran Knowles",
year = "2022",
month = aug,
day = "2",
language = "English",
booktitle = "2022 ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization",
publisher = "ACM",
}

RIS

TY - GEN

T1 - Humble Machines

T2 - Attending to the Underappreciated Costs of Misplaced Distrust

AU - Knowles, Bran

PY - 2022/8/2

Y1 - 2022/8/2

N2 - It is curious that AI increasingly outperforms human decision makers, yet much of the public distrusts AI to make decisions affecting their lives. In this paper we explore a novel theory that may explain one reason for this. We propose that public distrust of AI is a moral consequence of designing systems that prioritize reduction of costs of false positives over less tangible costs of false negatives. We show that such systems, which we characterize as 'distrustful', are more likely to miscategorize trustworthy individuals, with cascading consequences to both those individuals and the overall human-AI trust relationship. Ultimately, we argue that public distrust of AI stems from well-founded concern about the potential of being miscategorized. We propose that restoring public trust in AI will require that systems are designed to embody a stance of 'humble trust', whereby the moral costs of the misplaced distrust associated with false negatives are weighted appropriately during development and use.

AB - It is curious that AI increasingly outperforms human decision makers, yet much of the public distrusts AI to make decisions affecting their lives. In this paper we explore a novel theory that may explain one reason for this. We propose that public distrust of AI is a moral consequence of designing systems that prioritize reduction of costs of false positives over less tangible costs of false negatives. We show that such systems, which we characterize as 'distrustful', are more likely to miscategorize trustworthy individuals, with cascading consequences to both those individuals and the overall human-AI trust relationship. Ultimately, we argue that public distrust of AI stems from well-founded concern about the potential of being miscategorized. We propose that restoring public trust in AI will require that systems are designed to embody a stance of 'humble trust', whereby the moral costs of the misplaced distrust associated with false negatives are weighted appropriately during development and use.

M3 - Conference contribution/Paper

BT - 2022 ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization

PB - ACM

CY - New York

ER -