New hack uses prompt injection to corrupt Gemini’s long-term memory

There's yet another way to inject malicious prompts into chatbots: this one plants false information that persists in Gemini's long-term memory across sessions.

https://arstechnica.com/security/2025/02/new-hack-uses-prompt-injection-to-corrupt-geminis-long-term-memory/
