Trust or mistrust in algorithmic grading? An embedded agency perspective

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

Trust or mistrust in algorithmic grading? An embedded agency perspective. / Jackson, Stephen; Panteli, Niki.
In: International Journal of Information Management, Vol. 69, 102555, 30.04.2023.

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Harvard

Jackson, S & Panteli, N 2023, 'Trust or mistrust in algorithmic grading? An embedded agency perspective', International Journal of Information Management, vol. 69, 102555. https://doi.org/10.1016/j.ijinfomgt.2022.102555

APA

Jackson, S., & Panteli, N. (2023). Trust or mistrust in algorithmic grading? An embedded agency perspective. International Journal of Information Management, 69, Article 102555. https://doi.org/10.1016/j.ijinfomgt.2022.102555

Vancouver

Jackson S, Panteli N. Trust or mistrust in algorithmic grading? An embedded agency perspective. International Journal of Information Management. 2023 Apr 30;69:102555. Epub 2023 Feb 11. doi: 10.1016/j.ijinfomgt.2022.102555

Author

Jackson, Stephen ; Panteli, Niki. / Trust or mistrust in algorithmic grading? An embedded agency perspective. In: International Journal of Information Management. 2023 ; Vol. 69.

Bibtex

@article{6f3358c994744966bfd34a11ca69e2e9,
title = "Trust or mistrust in algorithmic grading? An embedded agency perspective",
abstract = "Artificial Intelligence (AI) has the potential to significantly impact the educational sector. One application of AI that has seen increasing adoption is algorithmic grading. It is within this context that our study focuses on trust. While the concept of trust continues to grow in importance among AI researchers and practitioners, an investigation of trust/mistrust in algorithmic grading across multiple levels of analysis has so far been lacking. In this paper, we argue for a model that encompasses the multi-layered nature of trust/mistrust in AI. Drawing on an embedded agency perspective, a model is devised that examines the top-down and bottom-up forces that can influence trust/mistrust in algorithmic grading. We illustrate how the model can be applied by drawing on the case of the International Baccalaureate (IB) program in 2020, whereby an algorithm was used to determine student grades. This paper contributes to the AI-trust literature by providing a fresh theoretical lens, based on institutional theory, for investigating the dynamic and multi-faceted nature of trust/mistrust in algorithmic grading, an area that has seldom been explored either theoretically or empirically. The study raises important implications for algorithmic design and awareness. Algorithms need to be designed in a transparent, fair, and ultimately trustworthy manner. While an algorithm typically operates as a black box, whereby the underlying mechanisms are not apparent to those it affects, its purpose and an understanding of how it works should be communicated upfront and in a timely manner.",
keywords = "Algorithmic grading, Embedded agency, Mistrust, Multi-level analysis, Trust",
author = "Stephen Jackson and Niki Panteli",
note = "Publisher Copyright: {\textcopyright} 2022 Elsevier Ltd",
year = "2023",
month = apr,
day = "30",
doi = "10.1016/j.ijinfomgt.2022.102555",
language = "English",
volume = "69",
journal = "International Journal of Information Management",
issn = "0268-4012",
publisher = "Elsevier Limited",
}

RIS

TY - JOUR
T1 - Trust or mistrust in algorithmic grading?
T2 - An embedded agency perspective
AU - Jackson, Stephen
AU - Panteli, Niki
N1 - Publisher Copyright: © 2022 Elsevier Ltd
PY - 2023/4/30
Y1 - 2023/4/30
N2 - Artificial Intelligence (AI) has the potential to significantly impact the educational sector. One application of AI that has seen increasing adoption is algorithmic grading. It is within this context that our study focuses on trust. While the concept of trust continues to grow in importance among AI researchers and practitioners, an investigation of trust/mistrust in algorithmic grading across multiple levels of analysis has so far been lacking. In this paper, we argue for a model that encompasses the multi-layered nature of trust/mistrust in AI. Drawing on an embedded agency perspective, a model is devised that examines the top-down and bottom-up forces that can influence trust/mistrust in algorithmic grading. We illustrate how the model can be applied by drawing on the case of the International Baccalaureate (IB) program in 2020, whereby an algorithm was used to determine student grades. This paper contributes to the AI-trust literature by providing a fresh theoretical lens, based on institutional theory, for investigating the dynamic and multi-faceted nature of trust/mistrust in algorithmic grading, an area that has seldom been explored either theoretically or empirically. The study raises important implications for algorithmic design and awareness. Algorithms need to be designed in a transparent, fair, and ultimately trustworthy manner. While an algorithm typically operates as a black box, whereby the underlying mechanisms are not apparent to those it affects, its purpose and an understanding of how it works should be communicated upfront and in a timely manner.
AB - Artificial Intelligence (AI) has the potential to significantly impact the educational sector. One application of AI that has seen increasing adoption is algorithmic grading. It is within this context that our study focuses on trust. While the concept of trust continues to grow in importance among AI researchers and practitioners, an investigation of trust/mistrust in algorithmic grading across multiple levels of analysis has so far been lacking. In this paper, we argue for a model that encompasses the multi-layered nature of trust/mistrust in AI. Drawing on an embedded agency perspective, a model is devised that examines the top-down and bottom-up forces that can influence trust/mistrust in algorithmic grading. We illustrate how the model can be applied by drawing on the case of the International Baccalaureate (IB) program in 2020, whereby an algorithm was used to determine student grades. This paper contributes to the AI-trust literature by providing a fresh theoretical lens, based on institutional theory, for investigating the dynamic and multi-faceted nature of trust/mistrust in algorithmic grading, an area that has seldom been explored either theoretically or empirically. The study raises important implications for algorithmic design and awareness. Algorithms need to be designed in a transparent, fair, and ultimately trustworthy manner. While an algorithm typically operates as a black box, whereby the underlying mechanisms are not apparent to those it affects, its purpose and an understanding of how it works should be communicated upfront and in a timely manner.
KW - Algorithmic grading
KW - Embedded agency
KW - Mistrust
KW - Multi-level analysis
KW - Trust
U2 - 10.1016/j.ijinfomgt.2022.102555
DO - 10.1016/j.ijinfomgt.2022.102555
M3 - Journal article
AN - SCOPUS:85136759163
VL - 69
JO - International Journal of Information Management
JF - International Journal of Information Management
SN - 0268-4012
M1 - 102555
ER -