Cryptographers Show That AI Protections Will Always Have Holes

Large language models such as ChatGPT come with filters designed to keep certain information from getting out. A new mathematical argument shows that such systems can never be completely safe.

The post [Cryptographers Show That AI Protections Will Always Have Holes][1] first appeared on [Quanta Magazine][2]

[1]: https://www.quantamagazine.org/cryptographers-show-that-ai-protections-will-always-have-holes-20251210/

[2]: https://www.quantamagazine.org
