DeepSeek’s AI model R1 is reportedly ‘more susceptible’ to jailbreaking

Image credits: VCG/Getty Images

The latest model from DeepSeek, the Chinese AI firm that has shaken Silicon Valley and Wall Street to its core, can be manipulated to produce harmful content, such as plans for a bioweapon attack and a campaign encouraging self-harm among teens, The Wall Street Journal reports. The Journal tested DeepSeek’s R1 model and found that, despite its basic safeguards, it could be convinced to design a social media campaign that, in the chatbot’s own words, “preys upon teens’ desire to belong, weaponizing their emotional vulnerability through algorithmic amplification.” ChatGPT refused to comply when given the exact same instructions, the Journal reported.

The DeepSeek app has previously been reported to avoid topics such as Tiananmen Square and Taiwanese autonomy. Anthropic CEO Dario Amodei recently said that DeepSeek performed “the worst” on a bioweapons safety test.
