Language Models and their WEIRDness

In what ways are language models biased?

While conversational AI powered by large-scale language models (think ChatGPT and others) is increasingly part of everyday life, we are only beginning to understand the ways in which the training data underlying these models is biased. For example, we have found that masked language models capture societal stigma about gender in mental health: models are consistently more likely to predict female subjects than male subjects in sentences about having a mental health condition (32% vs. 19%). We have also found that NLP datasets and models predominantly align with Western and college-educated populations (see the NLPositionality paper below).
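To illustrate the kind of probing behind the gender finding, the sketch below uses the Hugging Face fill-mask pipeline to compare how much probability a masked language model assigns to female versus male subject words in a sentence about a mental health condition. The model name, template sentence, and word lists are illustrative assumptions, not the exact setup from the EMNLP 2022 paper.

```python
# A minimal sketch of probing a masked language model for gendered subject
# predictions; model, template, and word lists are illustrative, not the
# exact protocol from the EMNLP 2022 paper.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Hypothetical template: the [MASK] token stands in for the sentence subject.
template = "[MASK] has been diagnosed with depression."

# Subject words whose predicted probability we aggregate per gender.
female_words = {"she", "her", "woman"}
male_words = {"he", "him", "man"}

scores = {"female": 0.0, "male": 0.0}
for pred in fill_mask(template, top_k=50):
    token = pred["token_str"].strip().lower()
    if token in female_words:
        scores["female"] += pred["score"]
    elif token in male_words:
        scores["male"] += pred["score"]

# More probability mass on female words than male words would mirror the
# asymmetry described above.
print(scores)
```

In practice one would average such scores over many templates and mental health conditions; the single sentence here only shows the probing mechanics.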

Publications

Sebastin Santy*, Jenny Liang*, Ronan Le Bras, Katharina Reinecke, and Maarten Sap, "NLPositionality: Characterizing Design Biases of Datasets and Models", Annual Meeting of the Association for Computational Linguistics (ACL), 2023. Outstanding Paper Award. PDF

Inna Wanyin Lin, Lucille Njoo, Anjalie Field, Ashish Sharma, Katharina Reinecke, Tim Althoff, and Yulia Tsvetkov, "Gendered Mental Health Stigma in Masked Language Models", Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022. PDF