Those concerned with personal privacy in the AI era will require tools that not only protect their data but also poison data streams.
Nah. An LLM can understand that it's poison.
An LLM only understands the relationships between tokens.
My LLM understands the difference between spam and a meaningless array of tokens.
It only knows what it's trained on. My original point was about poisoning training data: exhaust their resources, and collectively make it economically infeasible to be a data broker.
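To make the idea concrete, here is a minimal sketch of one well-known poisoning trick: homoglyph substitution, where Latin letters are swapped for visually identical Unicode characters so text reads normally to a human but produces different bytes (and different tokens) for a scraper. This is an illustrative toy, not any specific tool; the function name and mapping are made up for the example.

```python
# Toy homoglyph substitution: swap some Latin letters for look-alike
# Cyrillic ones. Humans see the same word; a tokenizer sees different bytes.
HOMOGLYPHS = {
    "a": "\u0430",  # Cyrillic small a
    "e": "\u0435",  # Cyrillic small ie
    "o": "\u043e",  # Cyrillic small o
    "p": "\u0440",  # Cyrillic small er
    "c": "\u0441",  # Cyrillic small es
}

def poison(text: str) -> str:
    """Replace selected Latin letters with look-alike Cyrillic ones."""
    return "".join(HOMOGLYPHS.get(ch, ch) for ch in text)

sample = "protect personal data"
print(poison(sample))            # renders nearly identically to the original
print(poison(sample) == sample)  # False: the underlying bytes differ
```

Whether this actually degrades training is debatable, which is the other poster's point: large-scale filters can normalize Unicode or flag statistically odd text. The economic argument is that filtering itself costs money, and poisoning at scale raises that cost.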