Electronic data

  • FAccT_Trust_Final-10

    Accepted author manuscript, 2.96 MB, PDF document

    Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License

Links

Text available via DOI: https://doi.org/10.1145/3442188.3445890

The Sanction of Authority: Promoting Public Trust in AI

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN > Conference contribution/Paper > peer-review

Published

Standard

The Sanction of Authority: Promoting Public Trust in AI. / Knowles, Bran; Richards, John T.
FAccT '21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. New York: ACM, 2021. p. 262-271.

Harvard

Knowles, B & Richards, JT 2021, The Sanction of Authority: Promoting Public Trust in AI. in FAccT '21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. ACM, New York, pp. 262-271, ACM Fairness, Accountability, and Transparency, 3/03/21. https://doi.org/10.1145/3442188.3445890

APA

Knowles, B., & Richards, J. T. (2021). The Sanction of Authority: Promoting Public Trust in AI. In FAccT '21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 262-271). ACM. https://doi.org/10.1145/3442188.3445890

Vancouver

Knowles B, Richards JT. The Sanction of Authority: Promoting Public Trust in AI. In FAccT '21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. New York: ACM. 2021. p. 262-271. Epub 2021 Mar 1. doi: 10.1145/3442188.3445890

Author

Knowles, Bran; Richards, John T. / The Sanction of Authority: Promoting Public Trust in AI. FAccT '21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. New York: ACM, 2021. pp. 262-271

BibTeX

@inproceedings{b6727a3548094912907315757184763a,
  title     = "The Sanction of Authority: Promoting Public Trust in AI",
  abstract  = "Trusted AI literature to date has focused on the trust needs of users who knowingly interact with discrete AIs. Conspicuously absent from the literature is a rigorous treatment of public trust in AI. We argue that public distrust of AI originates from the under-development of a regulatory ecosystem that would guarantee the trustworthiness of the AIs that pervade society. Drawing from structuration theory and literature on institutional trust, we offer a model of public trust in AI that differs starkly from models driving Trusted AI efforts. We describe the pivotal role of externally auditable AI documentation within this model and the work to be done to ensure it is effective, and outline a number of actions that would promote public trust in AI. We discuss how existing efforts to develop AI documentation within organizations---both to inform potential adopters of AI components and support the deliberations of risk and ethics review boards---is necessary but insufficient assurance of the trustworthiness of AI. We argue that being accountable to the public in ways that earn their trust, through elaborating rules for AI and developing resources for enforcing these rules, is what will ultimately make AI trustworthy enough to be woven into the fabric of our society.",
  author    = "Knowles, Bran and Richards, {John T.}",
  year      = "2021",
  month     = mar,
  day       = "31",
  doi       = "10.1145/3442188.3445890",
  language  = "English",
  pages     = "262--271",
  booktitle = "FAccT '21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency",
  publisher = "ACM",
  address   = "New York",
  note      = "ACM Fairness, Accountability, and Transparency, ACM FAccT'21 ; Conference date: 03-03-2021 Through 10-03-2021",
  url       = "https://facctconference.org/",
}
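
The entry above can be cited directly from a `.bib` file. A minimal sketch (the file names `main.tex` and `refs.bib` are assumptions; the citation key is taken from the record above):

```latex
% main.tex -- minimal document citing the entry above
\documentclass{article}
\begin{document}
Public trust in AI is discussed in \cite{b6727a3548094912907315757184763a}.
\bibliographystyle{plain}
\bibliography{refs} % refs.bib contains the @inproceedings entry above
\end{document}
```

Compile with `pdflatex main`, `bibtex main`, then `pdflatex main` twice to resolve the citation.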

RIS

TY  - GEN
T1  - The Sanction of Authority: Promoting Public Trust in AI
T2  - ACM Fairness, Accountability, and Transparency
AU  - Knowles, Bran
AU  - Richards, John T.
PY  - 2021/3/31
Y1  - 2021/3/31
N2  - Trusted AI literature to date has focused on the trust needs of users who knowingly interact with discrete AIs. Conspicuously absent from the literature is a rigorous treatment of public trust in AI. We argue that public distrust of AI originates from the under-development of a regulatory ecosystem that would guarantee the trustworthiness of the AIs that pervade society. Drawing from structuration theory and literature on institutional trust, we offer a model of public trust in AI that differs starkly from models driving Trusted AI efforts. We describe the pivotal role of externally auditable AI documentation within this model and the work to be done to ensure it is effective, and outline a number of actions that would promote public trust in AI. We discuss how existing efforts to develop AI documentation within organizations---both to inform potential adopters of AI components and support the deliberations of risk and ethics review boards---is necessary but insufficient assurance of the trustworthiness of AI. We argue that being accountable to the public in ways that earn their trust, through elaborating rules for AI and developing resources for enforcing these rules, is what will ultimately make AI trustworthy enough to be woven into the fabric of our society.
AB  - Trusted AI literature to date has focused on the trust needs of users who knowingly interact with discrete AIs. Conspicuously absent from the literature is a rigorous treatment of public trust in AI. We argue that public distrust of AI originates from the under-development of a regulatory ecosystem that would guarantee the trustworthiness of the AIs that pervade society. Drawing from structuration theory and literature on institutional trust, we offer a model of public trust in AI that differs starkly from models driving Trusted AI efforts. We describe the pivotal role of externally auditable AI documentation within this model and the work to be done to ensure it is effective, and outline a number of actions that would promote public trust in AI. We discuss how existing efforts to develop AI documentation within organizations---both to inform potential adopters of AI components and support the deliberations of risk and ethics review boards---is necessary but insufficient assurance of the trustworthiness of AI. We argue that being accountable to the public in ways that earn their trust, through elaborating rules for AI and developing resources for enforcing these rules, is what will ultimately make AI trustworthy enough to be woven into the fabric of our society.
U2  - 10.1145/3442188.3445890
DO  - 10.1145/3442188.3445890
M3  - Conference contribution/Paper
SP  - 262
EP  - 271
BT  - FAccT '21
PB  - ACM
CY  - New York
Y2  - 3 March 2021 through 10 March 2021
ER  -
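
A record in this export format can be read with a few lines of standard-library Python. This is a minimal sketch: it assumes the standard `XX  - value` tag-line shape and does not handle multi-line continuation values; the embedded `sample` is abridged from the record above.

```python
import re

def parse_ris(text):
    """Collect RIS tag/value pairs into a dict of lists.

    Repeated tags (e.g. multiple AU lines) accumulate in order.
    """
    record = {}
    for line in text.splitlines():
        # Tag = two uppercase letters/digits, then whitespace, '-', optional space.
        m = re.match(r"^([A-Z][A-Z0-9])\s+-\s?(.*)$", line)
        if m:
            tag, value = m.group(1), m.group(2)
            record.setdefault(tag, []).append(value)
    return record

sample = """TY  - GEN
T1  - The Sanction of Authority
AU  - Knowles, Bran
AU  - Richards, John T.
DO  - 10.1145/3442188.3445890
ER  -
"""

rec = parse_ris(sample)
print(rec["AU"])  # ['Knowles, Bran', 'Richards, John T.']
```

The dict-of-lists shape keeps repeated tags such as `AU` without data loss; a reference manager import would typically map `DO` to the DOI and join `SP`/`EP` into a page range.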