
Open Access

Prejudiced interactions with large language models (LLMs) reduce trustworthiness and behavioral intentions among members of stigmatized groups

NU author(s): Dr Zachary Petzel (ORCiD), Leanne Sowerby

Licence

This work is licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0).


Abstract

Users report prejudiced responses generated by large language models (LLMs) like ChatGPT. Across three preregistered experiments, members of stigmatized social groups (Black Americans, women) reported greater trustworthiness of LLMs after viewing unbiased interactions with ChatGPT than after viewing AI-generated prejudice (i.e., racial or gender disparities in salary). Notably, higher trustworthiness accounted for increased behavioral intentions to use LLMs, but only among members of stigmatized groups. Conversely, White Americans were more likely to intend to use LLMs when AI-generated prejudice confirmed their implicit racial biases, while men intended to use LLMs when responses matched their implicit gender biases. Results suggest that reducing AI-generated prejudice may promote trustworthiness of LLMs among members of stigmatized social groups, increasing their intentions to use AI tools. Importantly, addressing AI-generated prejudice could minimize social disparities in the adoption of LLMs, which might otherwise exacerbate professional and educational disparities. Given the expected integration of AI in professional and educational settings, these findings may guide equitable implementation strategies for employees and students, and they extend theoretical models of technology acceptance by suggesting additional mechanisms (e.g., trustworthiness) underlying behavioral intentions to use emerging technologies.


Publication metadata

Author(s): Petzel ZW, Sowerby L

Publication type: Article

Publication status: Published

Journal: Computers in Human Behavior

Year: 2025

Volume: 165

Print publication date: 01/04/2025

Online publication date: 15/01/2025

Acceptance date: 10/01/2025

Date deposited: 22/01/2025

ISSN (print): 0747-5632

ISSN (electronic): 1873-7692

Publisher: Elsevier Ltd

URL: https://doi.org/10.1016/j.chb.2025.108563

DOI: 10.1016/j.chb.2025.108563

Data Access Statement: Data and materials used in the paper are freely available via OSF, with links provided in the text.

