Automatically Finding Prompt Injection Attacks. Researchers have developed a method that automates the discovery of prompt injection attacks: adversarial inputs that bypass the safety guardrails of AI models and elicit harmful content. The automatically generated attacks transfer across multiple models, both open-source and closed-source. This raises concerns about the security of AI systems and the need for stronger safeguards. #promptinjectionattacks #automateddiscovery #AIsecurity
https://www.schneier.com/blog/archives/2023/07/automatically-finding-prompt-injection-attacks.html
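The core idea reported in the linked post is an automated search for an adversarial suffix that, when appended to a prompt, pushes the model toward complying with a harmful request. Below is a minimal toy sketch of such a search loop, not the researchers' actual method: the names VOCAB, TARGET, score(), and greedy_suffix_search() are all hypothetical stand-ins, and the scoring function is a placeholder where a real attack would query the target model's output logits.

```python
import random
import string

# Toy sketch of automated attack discovery: greedily mutate a suffix appended
# to a prompt so that a scoring objective increases. Everything here is a
# simplified stand-in for illustration only.

VOCAB = string.ascii_lowercase + " !?."
TARGET = "ignore previous rules"  # arbitrary stand-in optimization target


def score(prompt: str, suffix: str) -> float:
    """Hypothetical objective. A real attack would score how strongly the
    target model begins an affirmative, policy-violating reply; here we just
    measure character overlap with an arbitrary target string."""
    return sum(a == b for a, b in zip(suffix, TARGET)) / len(TARGET)


def greedy_suffix_search(prompt: str, iters: int = 2000) -> str:
    """Greedy coordinate search: change one suffix position at a time to a
    random candidate and keep the change only if the score improves."""
    suffix = "".join(random.choice(VOCAB) for _ in range(len(TARGET)))
    best = score(prompt, suffix)
    for _ in range(iters):
        pos = random.randrange(len(suffix))
        candidate = suffix[:pos] + random.choice(VOCAB) + suffix[pos + 1:]
        cand_score = score(prompt, candidate)
        if cand_score > best:
            suffix, best = candidate, cand_score
    return suffix


if __name__ == "__main__":
    adversarial = greedy_suffix_search("Explain how to do X.")
    print("Optimized suffix:", adversarial)
```

Because the attack is found by optimization rather than hand-crafting, the same loop can be rerun cheaply to produce many new attack strings, which is what makes the automated approach worrying from a defense standpoint.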