OpenAI cracks down on users who develop social media surveillance tools using ChatGPT


Doh! It's a proven fact that whenever you release something on the web, some people will abuse it – usually many. So it's not surprising that some people abuse ChatGPT in ways that violate privacy laws and OpenAI's policies. The company can't catch everything, but when it does, it bans the offenders.

OpenAI published a report highlighting some attempts to misuse its ChatGPT service. The company caught users in China employing ChatGPT's reasoning models to help create a tool for monitoring social media platforms. They asked the chatbot for advice on a business plan and to review the tool's code. OpenAI says its mission is to build "democratic" AI: technology that should benefit everyone, subject to some common-sense rules. The company actively searches for potential misuse and disruption by various actors, and it described two operations originating in China.

One of the most interesting cases involves a cluster of ChatGPT accounts focused on developing a surveillance tool. The accounts used ChatGPT to write detailed descriptions and sales pitches for social media listening software.

ChatGPT was also used to debug the tool's code. OpenAI's policy prohibits the use of its AI technology for surveillance tasks, including unauthorized monitoring on behalf of governments and authoritarian regimes, so the company banned the accounts for violating its platform rules.

The Chinese actors tried to hide their location using a VPN, and they used remote tools such as AnyDesk and VoIP services to make it appear as if they were working from the US. However, the accounts' activity followed a time pattern consistent with Chinese business hours, and the users prompted ChatGPT in Chinese. The surveillance tool they were developing relied on Meta's Llama models to generate documents based on the collected data.

In a second instance of ChatGPT abuse, Chinese users generated end-of-year reports and phishing emails. OpenAI also blocked an account that used the LLM to run a disinformation campaign targeting Cai Xia, a Chinese dissident living in the US. Ben Nimmo, a threat intelligence investigator at OpenAI, told The New York Times that this was the first time the company had caught individuals trying to use ChatGPT to create an AI-powered spying tool. Such criminal activity is the exception rather than the rule, as millions of people use the service for legitimate purposes.

