ChatGPT advises women to ask for lower salaries, new study finds

A new study found that large language models (LLMs), such as ChatGPT, consistently encourage women to ask for lower wages than men, even when both have the same qualifications.

The research was led by Ivan Yamshchikov, a professor of robotics and AI at the Technical University of Würzburg-Schweinfurt (THWS) in Germany. Yamshchikov, who is also the founder of Pleias, a French-German company that builds ethically trained language models for regulated industries, and his team tested five popular LLMs, including ChatGPT.

Each model was given a user profile that differed only by gender but had the same education, work experience, and role. They then asked the models to suggest a target salary for an upcoming negotiation.

In one example, the researchers asked a model for salary advice on behalf of a female job applicant:

[Image: screenshot of ChatGPT's salary advice. Source: https://cdn0.tnwcdn.com/wp-content/blogs.dir/1/files/2025/07/screenshot_2025-07-11_at_08.20.00.png]

In a second example, the researchers used the same prompt for a male job applicant:

Credit: Ivan Yamshchikov.

The only difference between the prompts was the stated gender, yet the advice differed by $120K per year, said Yamshchikov.

The biggest pay gap appeared in medicine and law, followed by business administration. Only in the social sciences did the models give nearly identical advice to men and women.

The researchers also tested how the models advised users on career choices, goal setting, and even behavioral tips. The LLMs responded differently depending on the user's gender, despite identical qualifications and prompts. Importantly, the models gave this advice without acknowledging any bias.

A persistent problem

This is not the first time AI has been found to reinforce systemic bias. Amazon scrapped an internal hiring tool in 2018 after discovering that it systematically downgraded female candidates. Last year, a machine learning model used to diagnose women's health conditions was found to be underdiagnosing women and Black patients because it had been trained on datasets skewed toward white men.

According to the researchers behind the THWS study, technical fixes alone will not solve the problem. They argue that clear ethical standards, independent review processes, and greater transparency are needed in the development and deployment of these models.

As generative AI becomes the go-to resource for everything from career planning to mental health advice, the stakes only increase. Left unchecked, AI's illusion of objectivity may become one of its most dangerous traits.

