TTF Ethics brown bag lunch discussions – WEIRDly Biased LLMs
Large language models (LLMs) such as ChatGPT appear to be biased toward WEIRD (Western, Educated, Industrialized, Rich, Democratic) populations. Research has shown that LLMs often behave WEIRDly, even when prompted in non-English languages (see paper attached). They also tend to assume that users themselves are WEIRD. These biases raise important ethical questions, particularly when LLMs are used in research — for instance, in studies focusing on non-WEIRD languages or for tasks such as data annotation. In this brown bag lunch, we will explore the ethical dimensions of using LLMs trained primarily on WEIRD data for research purposes.
Zoom link has been sent via email!