
Electronic data

  • The process of gaining an AI Legibility Mark

    Rights statement: © ACM, 2020. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in CHI'20 Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems Extended Abstracts http://doi.acm.org/10.1145/3334480.3381820

    Accepted author manuscript, 412 KB, PDF document

    Available under license: CC BY-NC-ND: Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License


The process of gaining an AI Legibility mark

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published

Standard

The process of gaining an AI Legibility mark. / Pilling, Franziska; Akmal, Haider Ali; Coulton, Paul et al.
CHI '20: Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems Extended Abstracts. New York: ACM, 2020. p. 1-9.


Harvard

Pilling, F, Akmal, HA, Coulton, P & Lindley, J 2020, The process of gaining an AI Legibility mark. in CHI '20: Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems Extended Abstracts. ACM, New York, pp. 1-9, CHI 2020, 25/04/20. https://doi.org/10.1145/3334480.3381820

APA

Pilling, F., Akmal, H. A., Coulton, P., & Lindley, J. (2020). The process of gaining an AI Legibility mark. In CHI '20: Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems Extended Abstracts (pp. 1-9). ACM. https://doi.org/10.1145/3334480.3381820

Vancouver

Pilling F, Akmal HA, Coulton P, Lindley J. The process of gaining an AI Legibility mark. In CHI '20: Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems Extended Abstracts. New York: ACM; 2020. p. 1-9. doi: 10.1145/3334480.3381820

Author

Pilling, Franziska; Akmal, Haider Ali; Coulton, Paul et al. / The process of gaining an AI Legibility mark. CHI '20: Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems Extended Abstracts. New York: ACM, 2020. pp. 1-9

Bibtex

@inproceedings{9abebdaeac8c4b3b94d8e505b87b157f,
title = "The process of gaining an AI Legibility mark",
abstract = "Researchers and designers working in industrial sectors seeking to incorporate Artificial Intelligence (AI) technology will be aware of the emerging International Organisation for AI Legibility (IOAIL). IOAIL was established to overcome the eruption of obscure AI technology. One of the primary goals of IOAIL is the development of a proficient certification body providing knowledge to users regarding the AI technology they are being exposed to. To this end IOAIL produced a system of standardised icons for attaching to products and systems to indicate both the presence of AI and to increase the legibility of that AI{\textquoteright}s attributes. Whilst the process of certification is voluntary it is becoming a mark of trust, enhancing the usability and acceptability of AI-infused products through improved legibility. In this paper we present our experience of seeking certification for a locally implemented AI security system, highlighting the issues generated for those seeking to adopt such certification.",
keywords = "Artificial Intelligence, Legibility, Design Fiction",
author = "Franziska Pilling and Akmal, {Haider Ali} and Paul Coulton and Joseph Lindley",
note = "{\textcopyright} ACM, 2020. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in CHI'20 Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems Extended Abstracts http://doi.acm.org/10.1145/3334480.3381820; CHI 2020 ; Conference date: 25-04-2020 Through 30-04-2020",
year = "2020",
month = apr,
day = "25",
doi = "10.1145/3334480.3381820",
language = "English",
isbn = "9781450368193",
pages = "1--9",
booktitle = "CHI '20: Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems Extended Abstracts",
publisher = "ACM",
url = "https://chi2020.acm.org/",

}

RIS

TY - GEN

T1 - The process of gaining an AI Legibility mark

AU - Pilling, Franziska

AU - Akmal, Haider Ali

AU - Coulton, Paul

AU - Lindley, Joseph

N1 - © ACM, 2020. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in CHI'20 Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems Extended Abstracts http://doi.acm.org/10.1145/3334480.3381820

PY - 2020/4/25

Y1 - 2020/4/25

N2 - Researchers and designers working in industrial sectors seeking to incorporate Artificial Intelligence (AI) technology will be aware of the emerging International Organisation for AI Legibility (IOAIL). IOAIL was established to overcome the eruption of obscure AI technology. One of the primary goals of IOAIL is the development of a proficient certification body providing knowledge to users regarding the AI technology they are being exposed to. To this end IOAIL produced a system of standardised icons for attaching to products and systems to indicate both the presence of AI and to increase the legibility of that AI’s attributes. Whilst the process of certification is voluntary it is becoming a mark of trust, enhancing the usability and acceptability of AI-infused products through improved legibility. In this paper we present our experience of seeking certification for a locally implemented AI security system, highlighting the issues generated for those seeking to adopt such certification.

AB - Researchers and designers working in industrial sectors seeking to incorporate Artificial Intelligence (AI) technology will be aware of the emerging International Organisation for AI Legibility (IOAIL). IOAIL was established to overcome the eruption of obscure AI technology. One of the primary goals of IOAIL is the development of a proficient certification body providing knowledge to users regarding the AI technology they are being exposed to. To this end IOAIL produced a system of standardised icons for attaching to products and systems to indicate both the presence of AI and to increase the legibility of that AI’s attributes. Whilst the process of certification is voluntary it is becoming a mark of trust, enhancing the usability and acceptability of AI-infused products through improved legibility. In this paper we present our experience of seeking certification for a locally implemented AI security system, highlighting the issues generated for those seeking to adopt such certification.

KW - Artificial Intelligence

KW - Legibility

KW - Design Fiction

U2 - 10.1145/3334480.3381820

DO - 10.1145/3334480.3381820

M3 - Conference contribution/Paper

SN - 9781450368193

SP - 1

EP - 9

BT - CHI '20: Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems Extended Abstracts

PB - ACM

CY - New York

T2 - CHI 2020

Y2 - 25 April 2020 through 30 April 2020

ER -