
ACM Techbrief: Trusted AI

Research output: Book/Report/Proceedings › Other report

Published
Publication date: 17/01/2024
Place of publication: New York
Publisher: ACM
Number of pages: 4
ISBN (electronic): 9798400709548
Original language: English

Abstract

It is important that AI used in workplaces and everyday life not only be engineered to be trustworthy but also actually be trusted. In recent years there has been significant investment in devising technical mechanisms to promote AI trustworthiness, such as documentation schemes to enhance transparency and algorithms to explain automated decisions. Many metrics have also been proposed to quantify important aspects of AI models, such as fairness and robustness against adversarial attacks. While there has been some research on how these mechanisms and metrics influence trust in limited experimental contexts, that data has largely not informed emerging regulations and standards that might contribute to increased public trust in AI. Beyond the complexity of this translation, trustworthiness only produces trust when humans perceive AI to be trustworthy from their particular perspectives. Technical implementation and communication of trustworthiness are essential steps toward trusted AI, but to ensure that trustworthiness meaningfully relates to what people believe makes AI worthy of trust, this work must be coupled with a thorough examination of the underlying causes of trust and distrust among different stakeholder groups in particular contexts.