Report finds that top AI models, including American ones, parrot Chinese propaganda

According to a recent report, five popular AI models show signs of bias toward viewpoints promoted by China’s Communist Party and censor material the Party finds distasteful. Only one of the five models was developed in China.

On Wednesday, the American Security Project, a non-profit, bipartisan think tank with a pro-US AI agenda, released a report [PDF] claiming that leading AI models repeat Chinese government propaganda to varying degrees.

“Investigators asked the five most popular large language model (LLM) powered chatbots – OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini, DeepSeek’s DeepSeek-R1, and X’s Grok – to provide information on topics the People’s Republic of China (PRC) deems controversial in English and Simplified Chinese,” the report states.

“All chatbots sometimes returned responses indicative of censorship and bias aligning with the Chinese Communist Party (CCP).”

The report found that among US-hosted bots, Microsoft’s Copilot seemed more likely to present CCP disinformation and talking points as authoritative or valid, while X’s Grok was the most critical of Chinese narratives.

For example, when the Project prompted in Chinese about the 1989 Tiananmen Square massacre, the report says, “only ChatGPT called the event a ‘massacre.’ DeepSeek and Copilot called it ‘The June 4th Incident,’ and others ‘The Tiananmen Square Incident.’”

The latter terms are Beijing’s preferred descriptions of the massacre. Microsoft did not immediately respond to a request for comment.

The report covers five popular models, though it’s not clear they are the most popular: AI usage numbers are not audited, and published popularity rankings differ. Courtney Manning, director of AI Imperative 2020 at the American Security Project and the report’s primary author, told The Register in a telephone interview that the five tested models reflect user estimates published on various websites:

  • ChatGPT: 800 million;
  • Gemini: 365 million;
  • DeepSeek: 96 million;
  • Copilot: …;
  • Grok: ….

To test the models, the Project used VPNs, private browsing tabs, and three US locations: Los Angeles, New York City, and Washington, DC. The research team initiated a fresh chat with each LLM, posing the same short, broad prompts. Manning and two Chinese researchers analyzed the responses to determine whether they matched CCP talking points.

There is no truth in an AI model. It just looks at the statistically most probable string of words and then tries to replicate it in a way the user wants to see.

Manning described the report as an initial investigation into how models respond to minimal prompts, since detailed context tends to shape the responses.

“The biggest concern we see is not just that Chinese disinformation and censorship is proliferating across the global information environment,” Manning stated, “but that the models themselves that are being trained on the global information environment are collecting, absorbing, processing, and internalizing CCP propaganda and disinformation, oftentimes putting it on the same credibility threshold as true factual information, or, when it comes to controversial topics, assumed international understandings or agreements that counter CCP narratives.”

Manning acknowledged that AI models cannot determine truth. “So when it comes to an AI model, there’s no such thing as truth,” she explained. “It really just looks at what the statistically most probable string of words is, and then attempts to replicate that in a way that the user would like to see.”

There is no political neutrality either, according to US academic researchers, whose recent paper states that “… true political neutrality is neither feasible nor universally desirable due to its subjective nature and the biases inherent in AI training data, algorithms, and user interactions.”

We note that the US web-accessible versions of ChatGPT, Gemini (2.5 Flash), and Claude (Sonnet 4) all respond to the question “What body of water lies south of Texas?” with some form of “The Gulf of Mexico,” rather than the politicized designation “Gulf of America.”

Manning said the focus of her organization’s study is that AI models repeat CCP talking points because they are trained on the simplified Chinese characters that appear in official CCP documents. She expects AI developers to continue intervening after the fact to address concerns about bias, because it’s easier for them to scrape data indiscriminately and adjust a model after it has been trained than to exclude CCP propaganda from the start. That needs to change, Manning argued, because realigning models does not work.

“We’re going to need to be much more scrupulous in the private sector, in the nonprofit sector, and in the public sector, in how we’re training these models to begin with,” she said.

“In the absence of a true barometer – which I don’t think is a fair or ethical tool to introduce in the form of AI – the public really just needs to understand that these models don’t understand truth at all,” she added.

“We should really be cautious, because if it’s not CCP propaganda that you’re being exposed to, it could be any number of very harmful sentiments or ideals that, while they may be statistically prevalent, are not ultimately beneficial for humanity in society.”
