
Electronic data

  • Playgrounds_and_Prejudice__OA_

    Accepted author manuscript, 1.8 MB, PDF document

Links

Text available via DOI: https://doi.org/10.1145/3628516.3659404

Playgrounds and Prejudices: Exploring Biases in Generative AI For Children

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Abstract

Published

Standard

Playgrounds and Prejudices: Exploring Biases in Generative AI For Children. / Baines, Alexander; Gruia, Lidia; Collyer-Hoar, Gail et al.
IDC '24: Proceedings of the 23rd Annual ACM Interaction Design and Children Conference. New York: ACM, 2024. p. 839-843.

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Abstract

Harvard

Baines, A, Gruia, L, Collyer-Hoar, G & Rubegni, E 2024, Playgrounds and Prejudices: Exploring Biases in Generative AI For Children. in IDC '24: Proceedings of the 23rd Annual ACM Interaction Design and Children Conference. ACM, New York, pp. 839-843. https://doi.org/10.1145/3628516.3659404

APA

Baines, A., Gruia, L., Collyer-Hoar, G., & Rubegni, E. (2024). Playgrounds and Prejudices: Exploring Biases in Generative AI For Children. In IDC '24: Proceedings of the 23rd Annual ACM Interaction Design and Children Conference (pp. 839-843). ACM. https://doi.org/10.1145/3628516.3659404

Vancouver

Baines A, Gruia L, Collyer-Hoar G, Rubegni E. Playgrounds and Prejudices: Exploring Biases in Generative AI For Children. In IDC '24: Proceedings of the 23rd Annual ACM Interaction Design and Children Conference. New York: ACM. 2024. p. 839-843 doi: 10.1145/3628516.3659404

Author

Baines, Alexander ; Gruia, Lidia ; Collyer-Hoar, Gail et al. / Playgrounds and Prejudices : Exploring Biases in Generative AI For Children. IDC '24: Proceedings of the 23rd Annual ACM Interaction Design and Children Conference. New York : ACM, 2024. pp. 839-843

Bibtex

@inbook{b7082fd87ef248f28fb5decdd31251cb,
title = "Playgrounds and Prejudices: Exploring Biases in Generative AI For Children",
abstract = "The influence of generative Artificial Intelligence (AI) on the propagation and amplification of societal biases, particularly in the context of children{\textquoteright}s content creation, is a growing concern. By developing and testing a prototype tool designed to assist children in Digital Storytelling (DST), our research aimed to explore and mitigate the propagation of stereotypes through the use of a character-generating AI tool utilising Stable Diffusion. Despite initial aspirations, the tool demonstrated significant biases inherent in the underlying AI model, leading to the decision against its use by children. The findings we discovered contribute to a broader discourse on the development of ethical AI and its use, advocating for a more responsible and inclusive approach to technological innovation in the context of children{\textquoteright}s digital media consumption and creation.",
keywords = "Artificial Intelligence, Generative AI, Child-Computer Interaction, Digital Story Telling, Ethics, Bias",
author = "Alexander Baines and Lidia Gruia and Gail Collyer-Hoar and Elisa Rubegni",
year = "2024",
month = jun,
day = "17",
doi = "10.1145/3628516.3659404",
language = "English",
pages = "839--843",
booktitle = "IDC '24: Proceedings of the 23rd Annual ACM Interaction Design and Children Conference",
publisher = "ACM",

}

RIS

TY - CHAP

T1 - Playgrounds and Prejudices

T2 - Exploring Biases in Generative AI For Children

AU - Baines, Alexander

AU - Gruia, Lidia

AU - Collyer-Hoar, Gail

AU - Rubegni, Elisa

PY - 2024/6/17

Y1 - 2024/6/17

N2 - The influence of generative Artificial Intelligence (AI) on the propagation and amplification of societal biases, particularly in the context of children’s content creation, is a growing concern. By developing and testing a prototype tool designed to assist children in Digital Storytelling (DST), our research aimed to explore and mitigate the propagation of stereotypes through the use of a character-generating AI tool utilising Stable Diffusion. Despite initial aspirations, the tool demonstrated significant biases inherent in the underlying AI model, leading to the decision against its use by children. The findings we discovered contribute to a broader discourse on the development of ethical AI and its use, advocating for a more responsible and inclusive approach to technological innovation in the context of children’s digital media consumption and creation.

AB - The influence of generative Artificial Intelligence (AI) on the propagation and amplification of societal biases, particularly in the context of children’s content creation, is a growing concern. By developing and testing a prototype tool designed to assist children in Digital Storytelling (DST), our research aimed to explore and mitigate the propagation of stereotypes through the use of a character-generating AI tool utilising Stable Diffusion. Despite initial aspirations, the tool demonstrated significant biases inherent in the underlying AI model, leading to the decision against its use by children. The findings we discovered contribute to a broader discourse on the development of ethical AI and its use, advocating for a more responsible and inclusive approach to technological innovation in the context of children’s digital media consumption and creation.

KW - Artificial Intelligence

KW - Generative AI

KW - Child-Computer Interaction

KW - Digital Story Telling

KW - Ethics

KW - Bias

U2 - 10.1145/3628516.3659404

DO - 10.1145/3628516.3659404

M3 - Abstract

SP - 839

EP - 843

BT - IDC '24: Proceedings of the 23rd Annual ACM Interaction Design and Children Conference

PB - ACM

CY - New York

ER -
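
The abstract above describes a character-generating tool for children's Digital Storytelling built on Stable Diffusion. As a purely illustrative sketch (not the authors' prototype), the snippet below shows how such a generation step could be wired up with the Hugging Face diffusers library; the model ID, prompt, and output handling are assumptions.

# Illustrative sketch only: generating story-character candidates with
# Stable Diffusion via the Hugging Face diffusers library. The checkpoint,
# prompt, and file names are assumptions, not the paper's implementation.
from diffusers import StableDiffusionPipeline
import torch

# Load a publicly available Stable Diffusion checkpoint (assumed model ID).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# A child-authored character description becomes the text prompt.
prompt = "a friendly cartoon doctor character for a children's storybook"

# Generating several candidates per prompt makes it easier to inspect the
# outputs for stereotyped defaults before any image is shown to children.
images = pipe(prompt, num_images_per_prompt=4).images
for i, image in enumerate(images):
    image.save(f"character_{i}.png")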