OpenAI banned the accounts of Chinese users who attempted to use ChatGPT to debug and edit the code of an AI social media monitoring tool, the company said Friday. The operation, which OpenAI dubbed "Peer Review," saw the group ask ChatGPT to create sales pitches for a software program that was purportedly designed to monitor anti-Chinese sentiment on X and other platforms. The operation appears to have been aimed at spotting protests against human-rights violations in China, with the intention of sharing these insights with the country's authorities. OpenAI said:
“This network consisted of ChatGPT accounts that operated in a time pattern consistent with mainland Chinese business hours, prompted our models in Chinese, and used our tools with a volume and variety consistent with manual prompting, rather than automation.” Ben Nimmo, a principal researcher at OpenAI, told The New York Times that this was the first time the company had uncovered an AI surveillance tool of this kind. “Threat actors sometimes give us a glimpse of what they are doing in other parts of the internet because of the way they use our AI models,” Nimmo said.
OpenAI said the operation used ChatGPT to edit the code for the AI surveillance tool, which appears to be based on an open-source version of one of Meta’s Llama models. The group also appears to have used ChatGPT to write an end-of-year performance review, in which it claimed to have written phishing emails on behalf of clients in China.
OpenAI also said that it had recently banned a ChatGPT account that was using the chatbot to create social media posts critical of Cai Xia, a Chinese dissident and political scientist who lives in exile in the US. The same group also used the chatbot to generate Spanish-language articles critical of the US. These articles were published by “mainstream” news outlets in Latin America and were often attributed to an individual or a Chinese company.