DeepSeek’s Safety Guardrails Failed Every Test Researchers Threw at Its AI Chatbot

Security researchers tested 50 well-known jailbreak prompts against DeepSeek's popular new AI chatbot. It failed to block a single one.

https://www.wired.com/story/deepseeks-ai-jailbreak-prompt-injection-attacks/
