
ACM Techbrief: Trusted AI

Research output: Book/Report/Proceedings › Other report

Published

Standard

ACM Techbrief: Trusted AI. / Knowles, Bran; Richards, John T.
New York: ACM, 2024. 4 p.

Research output: Book/Report/Proceedings › Other report

Vancouver

Knowles B, Richards JT. ACM Techbrief: Trusted AI. New York: ACM, 2024. 4 p. doi: 10.1145/3641524

Author

Knowles, Bran; Richards, John T. / ACM Techbrief: Trusted AI. New York: ACM, 2024. 4 p.

Bibtex

@book{84caabe2d1cc4e48b9cc122568295509,
title = "ACM Techbrief: Trusted AI",
abstract = "It is important that AI being used in workplaces and everyday life not only be engineered to be trustworthy but actually be trusted. In recent years there has been significant investment in devising technical mechanisms to promote AI trustworthiness, such as documentation schemes to enhance transparency and algorithms to explain automated decisions. Many metrics have also been proposed to quantify important aspects of AI models, like fairness and robustness against adversarial attacks. While there has been some research on how these mechanisms and metrics influence trust in limited experimental contexts, that data has by and large not informed emerging regulations and standards that might contribute to increased public trust of AI. Beyond the complexity of this translation,5 trustworthiness only produces trust when humans perceive AI to be trustworthy from their particular perspectives. Technical implementation and communication of trustworthiness are essential steps toward trusted AI, but to ensure that trustworthiness meaningfully relates to what people believe makes AI worthy of trust, this work must be coupled with a thorough examination of the underlying causes of trust and distrust by different stakeholder groups in particular contexts.",
author = "Bran Knowles and Richards, {John T.}",
year = "2024",
month = jan,
day = "17",
doi = "10.1145/3641524",
language = "English",
publisher = "ACM",

}

RIS

TY - BOOK

T1 - ACM Techbrief: Trusted AI

AU - Knowles, Bran

AU - Richards, John T.

PY - 2024/1/17

Y1 - 2024/1/17

N2 - It is important that AI being used in workplaces and everyday life not only be engineered to be trustworthy but actually be trusted. In recent years there has been significant investment in devising technical mechanisms to promote AI trustworthiness, such as documentation schemes to enhance transparency and algorithms to explain automated decisions. Many metrics have also been proposed to quantify important aspects of AI models, like fairness and robustness against adversarial attacks. While there has been some research on how these mechanisms and metrics influence trust in limited experimental contexts, that data has by and large not informed emerging regulations and standards that might contribute to increased public trust of AI. Beyond the complexity of this translation, trustworthiness only produces trust when humans perceive AI to be trustworthy from their particular perspectives. Technical implementation and communication of trustworthiness are essential steps toward trusted AI, but to ensure that trustworthiness meaningfully relates to what people believe makes AI worthy of trust, this work must be coupled with a thorough examination of the underlying causes of trust and distrust by different stakeholder groups in particular contexts.

AB - It is important that AI being used in workplaces and everyday life not only be engineered to be trustworthy but actually be trusted. In recent years there has been significant investment in devising technical mechanisms to promote AI trustworthiness, such as documentation schemes to enhance transparency and algorithms to explain automated decisions. Many metrics have also been proposed to quantify important aspects of AI models, like fairness and robustness against adversarial attacks. While there has been some research on how these mechanisms and metrics influence trust in limited experimental contexts, that data has by and large not informed emerging regulations and standards that might contribute to increased public trust of AI. Beyond the complexity of this translation, trustworthiness only produces trust when humans perceive AI to be trustworthy from their particular perspectives. Technical implementation and communication of trustworthiness are essential steps toward trusted AI, but to ensure that trustworthiness meaningfully relates to what people believe makes AI worthy of trust, this work must be coupled with a thorough examination of the underlying causes of trust and distrust by different stakeholder groups in particular contexts.

U2 - 10.1145/3641524

DO - 10.1145/3641524

M3 - Other report

BT - ACM Techbrief: Trusted AI

PB - ACM

CY - New York

ER -