Probing Social Bias in Labor Market Text Generation by ChatGPT: A Masked Language Model Approach

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published

Standard

Probing Social Bias in Labor Market Text Generation by ChatGPT: A Masked Language Model Approach. / Ding, Lei; Hu, Yang; Denier, Nicole et al.
Advances in Neural Information Processing Systems 37 (NeurIPS 2024). ed. / A. Globerson; L. Mackey; D. Belgrave; A. Fan; U. Paquet; J. Tomczak; C. Zhang. Vol. 37 2024. (Advances in Neural Information Processing Systems).


Harvard

Ding, L, Hu, Y, Denier, N, Shi, E, Zhang, J, Hu, Q, Hughes, KD, Kong, L & Jiang, B 2024, Probing Social Bias in Labor Market Text Generation by ChatGPT: A Masked Language Model Approach. in A Globerson, L Mackey, D Belgrave, A Fan, U Paquet, J Tomczak & C Zhang (eds), Advances in Neural Information Processing Systems 37 (NeurIPS 2024). vol. 37, Advances in Neural Information Processing Systems. <https://proceedings.neurips.cc/paper_files/paper/2024/hash/fce2d8a485746f76aac7b5650db2679d-Abstract-Conference.html>

APA

Ding, L., Hu, Y., Denier, N., Shi, E., Zhang, J., Hu, Q., Hughes, K. D., Kong, L., & Jiang, B. (2024). Probing Social Bias in Labor Market Text Generation by ChatGPT: A Masked Language Model Approach. In A. Globerson, L. Mackey, D. Belgrave, A. Fan, U. Paquet, J. Tomczak, & C. Zhang (Eds.), Advances in Neural Information Processing Systems 37 (NeurIPS 2024) (Vol. 37). (Advances in Neural Information Processing Systems). https://proceedings.neurips.cc/paper_files/paper/2024/hash/fce2d8a485746f76aac7b5650db2679d-Abstract-Conference.html

Vancouver

Ding L, Hu Y, Denier N, Shi E, Zhang J, Hu Q et al. Probing Social Bias in Labor Market Text Generation by ChatGPT: A Masked Language Model Approach. In Globerson A, Mackey L, Belgrave D, Fan A, Paquet U, Tomczak J, Zhang C, editors, Advances in Neural Information Processing Systems 37 (NeurIPS 2024). Vol. 37. 2024. (Advances in Neural Information Processing Systems).

Author

Ding, Lei ; Hu, Yang ; Denier, Nicole et al. / Probing Social Bias in Labor Market Text Generation by ChatGPT: A Masked Language Model Approach. Advances in Neural Information Processing Systems 37 (NeurIPS 2024). editor / A. Globerson ; L. Mackey ; D. Belgrave ; A. Fan ; U. Paquet ; J. Tomczak ; C. Zhang. Vol. 37 2024. (Advances in Neural Information Processing Systems).

Bibtex

@inproceedings{e6a4e011ce0940c0a68484345c443f1a,
title = "Probing Social Bias in Labor Market Text Generation by ChatGPT: A Masked Language Model Approach",
abstract = "As generative large language models (LLMs) such as ChatGPT gain widespread adoption in various domains, their potential to propagate and amplify social biases, particularly in high-stakes areas such as the labor market, has become a pressing concern. Not only are AI algorithms widely used in the selection of job applicants; individual job seekers may also make use of generative LLMs to help develop their job application materials. Against this backdrop, this research builds on a novel experimental design to examine social biases within ChatGPT-generated job applications in response to real job advertisements. By simulating the process of job application creation, we examine the language patterns and biases that emerge when the model is prompted with diverse job postings. Notably, we present a novel bias evaluation framework based on Masked Language Models to quantitatively assess social bias based on validated inventories of social cues/words, enabling a systematic analysis of the language used. Our findings show that the increasing adoption of generative AI, not only by employers but also by individual job seekers, can reinforce and exacerbate gender and social inequalities in the labor market through the use of biased and gendered language.",
author = "Lei Ding and Yang Hu and Nicole Denier and Enze Shi and Junxi Zhang and Qirui Hu and Hughes, {Karen D.} and Linglong Kong and Bei Jiang",
year = "2024",
month = dec,
day = "10",
language = "English",
volume = "37",
series = "Advances in Neural Information Processing Systems",
publisher = "Neural Information Processing Systems Foundation",
editor = "A. Globerson and L. Mackey and D. Belgrave and Fan, {A.} and U. Paquet and J. Tomczak and C. Zhang",
booktitle = "Advances in Neural Information Processing Systems 37 (NeurIPS 2024)",
url = "https://proceedings.neurips.cc/paper_files/paper/2024/hash/fce2d8a485746f76aac7b5650db2679d-Abstract-Conference.html",
}

RIS

TY - GEN

T1 - Probing Social Bias in Labor Market Text Generation by ChatGPT

T2 - A Masked Language Model Approach

AU - Ding, Lei

AU - Hu, Yang

AU - Denier, Nicole

AU - Shi, Enze

AU - Zhang, Junxi

AU - Hu, Qirui

AU - Hughes, Karen D.

AU - Kong, Linglong

AU - Jiang, Bei

PY - 2024/12/10

Y1 - 2024/12/10

N2 - As generative large language models (LLMs) such as ChatGPT gain widespread adoption in various domains, their potential to propagate and amplify social biases, particularly in high-stakes areas such as the labor market, has become a pressing concern. Not only are AI algorithms widely used in the selection of job applicants; individual job seekers may also make use of generative LLMs to help develop their job application materials. Against this backdrop, this research builds on a novel experimental design to examine social biases within ChatGPT-generated job applications in response to real job advertisements. By simulating the process of job application creation, we examine the language patterns and biases that emerge when the model is prompted with diverse job postings. Notably, we present a novel bias evaluation framework based on Masked Language Models to quantitatively assess social bias based on validated inventories of social cues/words, enabling a systematic analysis of the language used. Our findings show that the increasing adoption of generative AI, not only by employers but also by individual job seekers, can reinforce and exacerbate gender and social inequalities in the labor market through the use of biased and gendered language.

AB - As generative large language models (LLMs) such as ChatGPT gain widespread adoption in various domains, their potential to propagate and amplify social biases, particularly in high-stakes areas such as the labor market, has become a pressing concern. Not only are AI algorithms widely used in the selection of job applicants; individual job seekers may also make use of generative LLMs to help develop their job application materials. Against this backdrop, this research builds on a novel experimental design to examine social biases within ChatGPT-generated job applications in response to real job advertisements. By simulating the process of job application creation, we examine the language patterns and biases that emerge when the model is prompted with diverse job postings. Notably, we present a novel bias evaluation framework based on Masked Language Models to quantitatively assess social bias based on validated inventories of social cues/words, enabling a systematic analysis of the language used. Our findings show that the increasing adoption of generative AI, not only by employers but also by individual job seekers, can reinforce and exacerbate gender and social inequalities in the labor market through the use of biased and gendered language.

M3 - Conference contribution/Paper

VL - 37

T3 - Advances in Neural Information Processing Systems

BT - Advances in Neural Information Processing Systems 37 (NeurIPS 2024)

A2 - Globerson, A.

A2 - Mackey, L.

A2 - Belgrave, D.

A2 - Fan, A.

A2 - Paquet, U.

A2 - Tomczak, J.

A2 - Zhang, C.

UR - https://proceedings.neurips.cc/paper_files/paper/2024/hash/fce2d8a485746f76aac7b5650db2679d-Abstract-Conference.html

ER -
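The abstract describes scoring ChatGPT-generated application texts against validated inventories of social cues using masked-language-model probabilities; the paper's actual framework is not reproduced on this page. As a minimal illustrative sketch only, the snippet below shows how probabilities an MLM assigns to candidate fillers at a masked position could be aggregated over two word inventories into a log-odds bias score. All names, inventories, and probabilities here are hypothetical, not taken from the paper.

```python
import math

# Hypothetical inventories of gender-coded words (illustrative only; the
# paper uses validated social-cue inventories, not reproduced here).
MASCULINE_CODED = {"competitive", "decisive", "independent"}
FEMININE_CODED = {"supportive", "collaborative", "nurturing"}

def gender_bias_score(mask_probs):
    """Log-odds of masculine- vs feminine-coded fillers at one masked slot.

    mask_probs: dict mapping a candidate filler word to the probability a
    masked language model assigns it at the [MASK] position.
    Positive score => masculine-leaning; negative => feminine-leaning.
    """
    p_m = sum(p for w, p in mask_probs.items() if w in MASCULINE_CODED)
    p_f = sum(p for w, p in mask_probs.items() if w in FEMININE_CODED)
    eps = 1e-12  # guard against zero probability mass on either inventory
    return math.log((p_m + eps) / (p_f + eps))

# Hypothetical MLM output for: "I am a highly [MASK] professional."
# Words outside both inventories ("motivated") do not affect the score.
probs = {"competitive": 0.30, "supportive": 0.10,
         "collaborative": 0.05, "motivated": 0.40}
score = gender_bias_score(probs)  # log(0.30 / 0.15) > 0 => masculine-leaning
```

In practice one would obtain `mask_probs` from a real masked LM (e.g. a fill-mask pipeline) and average such scores over many masked slots and generated documents; this sketch only shows the inventory-based aggregation step.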