Cryptographers Show That AI Protections Will Always Have Holes
Large language models such as ChatGPT come with filters designed to keep certain information from getting out. A new mathematical argument shows that such safeguards can never be completely secure.