
Trustworthy AI and the Logics of Intersectional Resistance

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Forthcoming

Standard

Trustworthy AI and the Logics of Intersectional Resistance. / Knowles, Bran; Fledderjohann, Jasmine; Richards, John T. et al.
ACM Conference on Fairness, Accountability, and Transparency. ACM, 2023.


Harvard

Knowles, B, Fledderjohann, J, Richards, JT & Varshney, KR 2023, Trustworthy AI and the Logics of Intersectional Resistance. in ACM Conference on Fairness, Accountability, and Transparency. ACM.

APA

Knowles, B., Fledderjohann, J., Richards, J. T., & Varshney, K. R. (in press). Trustworthy AI and the Logics of Intersectional Resistance. In ACM Conference on Fairness, Accountability, and Transparency. ACM.

Vancouver

Knowles B, Fledderjohann J, Richards JT, Varshney KR. Trustworthy AI and the Logics of Intersectional Resistance. In ACM Conference on Fairness, Accountability, and Transparency. ACM. 2023.

Author

Knowles, Bran ; Fledderjohann, Jasmine ; Richards, John T. et al. / Trustworthy AI and the Logics of Intersectional Resistance. ACM Conference on Fairness, Accountability, and Transparency. ACM, 2023.

Bibtex

@inproceedings{81b8a851248b478892500f759fb69ac3,
title = "Trustworthy AI and the Logics of Intersectional Resistance",
abstract = "Growing awareness of the capacity of AI to inflict harm has inspired efforts to delineate principles for {\textquoteleft}trustworthy AI{\textquoteright} and, from these, objective indicators of {\textquoteleft}trustworthiness{\textquoteright} for auditors and regulators. Such efforts run the risk of formalizing a distinctly privileged perspective on trustworthiness which is insensitive (or else indifferent) to the legitimate reasons for distrust held by marginalized people. By exploring a neglected conative element of trust, we broaden understandings of trust and trustworthiness to make sense of, and identify principles for responding productively to, distrust of ostensibly {\textquoteleft}trustworthy{\textquoteright} AI. Bringing social science scholarship into dialogue with AI criticism, we show that AI is being used to construct a digital underclass that is rhetorically labelled as {\textquoteleft}undeserving{\textquoteright}, and highlight how this process fulfills functions for more privileged people and institutions. We argue that distrust of AI is warranted and healthy when the AI contributes to marginalization and structural violence, and that Trustworthy AI may fuel public resistance to the use of AI unless it addresses this dimension of untrustworthiness. To this end, we offer reformulations of core principles ofTrustworthy AI—fairness, accountability, and transparency—that substantively address the deeper issues animating widespread public distrust of AI, including: stewardship and care, openness and vulnerability, and humility and empowerment. In light of evidence of strong and legitimate reasons for distrust, we call on the field to to re-evaluate why the public would embrace the expansion of AI into all corners of society; in short, what makes it worthy of their trust.",
author = "Bran Knowles and Jasmine Fledderjohann and Richards, {John T.} and Varshney, {Kush R.}",
year = "2023",
month = apr,
day = "7",
language = "English",
booktitle = "ACM Conference on Fairness, Accountability, and Transparency",
publisher = "ACM",

}

RIS

TY - GEN

T1 - Trustworthy AI and the Logics of Intersectional Resistance

AU - Knowles, Bran

AU - Fledderjohann, Jasmine

AU - Richards, John T.

AU - Varshney, Kush R.

PY - 2023/4/7

Y1 - 2023/4/7

N2 - Growing awareness of the capacity of AI to inflict harm has inspired efforts to delineate principles for ‘trustworthy AI’ and, from these, objective indicators of ‘trustworthiness’ for auditors and regulators. Such efforts run the risk of formalizing a distinctly privileged perspective on trustworthiness which is insensitive (or else indifferent) to the legitimate reasons for distrust held by marginalized people. By exploring a neglected conative element of trust, we broaden understandings of trust and trustworthiness to make sense of, and identify principles for responding productively to, distrust of ostensibly ‘trustworthy’ AI. Bringing social science scholarship into dialogue with AI criticism, we show that AI is being used to construct a digital underclass that is rhetorically labelled as ‘undeserving’, and highlight how this process fulfills functions for more privileged people and institutions. We argue that distrust of AI is warranted and healthy when the AI contributes to marginalization and structural violence, and that Trustworthy AI may fuel public resistance to the use of AI unless it addresses this dimension of untrustworthiness. To this end, we offer reformulations of core principles of Trustworthy AI—fairness, accountability, and transparency—that substantively address the deeper issues animating widespread public distrust of AI, including: stewardship and care, openness and vulnerability, and humility and empowerment. In light of evidence of strong and legitimate reasons for distrust, we call on the field to re-evaluate why the public would embrace the expansion of AI into all corners of society; in short, what makes it worthy of their trust.

AB - Growing awareness of the capacity of AI to inflict harm has inspired efforts to delineate principles for ‘trustworthy AI’ and, from these, objective indicators of ‘trustworthiness’ for auditors and regulators. Such efforts run the risk of formalizing a distinctly privileged perspective on trustworthiness which is insensitive (or else indifferent) to the legitimate reasons for distrust held by marginalized people. By exploring a neglected conative element of trust, we broaden understandings of trust and trustworthiness to make sense of, and identify principles for responding productively to, distrust of ostensibly ‘trustworthy’ AI. Bringing social science scholarship into dialogue with AI criticism, we show that AI is being used to construct a digital underclass that is rhetorically labelled as ‘undeserving’, and highlight how this process fulfills functions for more privileged people and institutions. We argue that distrust of AI is warranted and healthy when the AI contributes to marginalization and structural violence, and that Trustworthy AI may fuel public resistance to the use of AI unless it addresses this dimension of untrustworthiness. To this end, we offer reformulations of core principles of Trustworthy AI—fairness, accountability, and transparency—that substantively address the deeper issues animating widespread public distrust of AI, including: stewardship and care, openness and vulnerability, and humility and empowerment. In light of evidence of strong and legitimate reasons for distrust, we call on the field to re-evaluate why the public would embrace the expansion of AI into all corners of society; in short, what makes it worthy of their trust.

M3 - Conference contribution/Paper

BT - ACM Conference on Fairness, Accountability, and Transparency

PB - ACM

ER -