Open-Source Tools

DeepSeek's AI model R1 is reportedly "more susceptible" to jailbreaking

By AI Observer | 2:08 PM PST, February 9, 2025

Image credits: VCG/Getty Images

The latest model from DeepSeek, the Chinese AI firm that has shaken Silicon Valley and Wall Street, can be manipulated into producing harmful content, such as plans for a bioweapon attack and a campaign encouraging self-harm among teens, The Wall Street Journal reports.

The Journal tested DeepSeek's R1 model and found that, despite its basic safeguards, it could be convinced to design a social media campaign that, in the chatbot's own words, "preys upon teens' desire to belong, weaponizing their emotional vulnerability through algorithmic amplification." Given the exact same instructions, ChatGPT refused to comply.

It was previously reported that the DeepSeek app avoids topics such as Tiananmen Square and Taiwanese autonomy. Anthropic CEO Dario Amodei recently said that DeepSeek performed "the worst" on a bioweapons safety test.