Anthropic, a leading AI model developer, has expanded its bug bounty program to address universal jailbreak vulnerabilities: flaws that could consistently bypass the safety mitigations designed to prevent misuse of its models. With rewards of up to $15,000, experienced AI security researchers are encouraged to apply for an invitation by August 16. The effort aligns with responsible AI development commitments that Anthropic has signed.
Source: https://Blockchain.News/news/anthropic-expands-ai-model-safety-bug-bounty-program