Those concerned with personal privacy in the AI era will require tools that not only protect their data but also poison data streams.


Discussion

Nah. An LLM can tell it's poison.

An LLM only understands the relationships between tokens.

My LLM understands spam and can tell what's a meaningless array of tokens.

It only knows what it's trained on. My original point was about poisoning training data: exhaust their resources and collectively make it economically infeasible to be a data broker.
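To illustrate why this kind of poison isn't trivially filterable as a "meaningless array of tokens": text that preserves local token statistics looks fluent to a statistical filter even when it carries no true information. Here's a minimal sketch using a word-level Markov chain — the corpus and approach are my own illustrative choices, not something from this thread, and real poisoning tools are far more sophisticated:

```python
import random

def build_chain(text, order=2):
    """Map each n-gram of words to the list of words observed after it."""
    words = text.split()
    chain = {}
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain.setdefault(key, []).append(words[i + order])
    return chain

def generate(chain, length=30, seed=0):
    """Emit text that is locally fluent but globally meaningless."""
    rng = random.Random(seed)
    order = len(next(iter(chain)))
    key = rng.choice(list(chain))
    out = list(key)
    for _ in range(length):
        nxt = chain.get(tuple(out[-order:]))
        if not nxt:
            # Dead end: restart from a random n-gram.
            out.extend(rng.choice(list(chain)))
            continue
        out.append(rng.choice(nxt))
    return " ".join(out)

# Hypothetical seed corpus; in practice this would be real scraped-looking text.
corpus = ("the model only learns relationships between tokens so text that "
          "preserves those relationships while carrying no true information "
          "is hard to filter because the model only learns surface statistics")
chain = build_chain(corpus)
print(generate(chain))
```

Every bigram in the output occurs in real text, so a filter keyed on token-level plausibility has little to grab onto; scrapers either spend resources on deeper verification or ingest the junk.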