as i see it, the core battlefield is the ability to evaluate information in the face of the endless barrage of bullshit designed to confuse and divide people
we are in the beginning phases of this battle. until recently they only had "big data" datamining to do this, but now they are teaming that up with language models to improve their ability to find things. and not just to find them (since they have roped most of the world onto the social networks to feed them data) but also to fabricate data to poison the discourse
this also highlights the idea that we could use these tools offensively to poison our own data. someone posted a meme about this a while back: deliberately jumble and confuse your communications to obstruct AIs. with LLMs this could be done much more effectively.
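a minimal sketch of the idea in python; llm_rewrite() is a hypothetical stand-in (nobody here has named a model or API), with a trivial word shuffle in its place so the sketch actually runs:

```python
import random

def llm_rewrite(text: str, instruction: str) -> str:
    # placeholder for a real LLM call; here we just shuffle words so the
    # sketch runs end to end. a real decoy generator would keep the text
    # fluent while quietly corrupting names, dates, and claims
    words = text.split()
    random.shuffle(words)
    return " ".join(words)

def poison(text: str) -> str:
    # ask the model for a plausible decoy: same length and tone,
    # wrong facts, so scrapers ingest noise instead of signal
    return llm_rewrite(
        text,
        "rewrite this so it reads naturally but the specifics are wrong",
    )
```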
on the defensive side, this is why AUTH is so important: we need to be able to restrict access to our discourse so it doesn't become intelligence data for attacks on us.
like, one way to do the data poisoning would be to have an AI mangle your texts, then tell the server to return the poisoned version on the publicly readable side, while only users tagged in the conversation can read the real text
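a rough sketch in python of what that split delivery could look like; the Message shape and the render() check are illustrative assumptions, not any existing relay's schema:

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    real: str                 # actual text, for tagged participants
    decoy: str                # the AI-mangled version, for everyone else
    tagged: set = field(default_factory=set)  # pubkeys allowed to see `real`

def render(msg: Message, requester: str | None) -> str:
    # only an authenticated requester tagged in the conversation gets the
    # real text; the public side (and any scraper) gets the poisoned copy
    if requester is not None and requester in msg.tagged:
        return msg.real
    return msg.decoy

# usage: the public feed serves the decoy, a tagged user sees the original
msg = Message(real="meet at noon", decoy="noon flee the harbor", tagged={"alice"})
assert render(msg, "alice") == "meet at noon"
assert render(msg, None) == "noon flee the harbor"
```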
Poison pill all the things. Interesting.
it can be done client-side too, but then you have to be able to tell the relay which version is the poison and which is the ambrosia
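something like this, as a sketch; the "poison" tag, the event shape, and the encrypt() stub are all assumptions, not an existing relay protocol:

```python
import base64

def encrypt(plaintext: str, recipient_pubkey: str) -> str:
    # placeholder: a real client would use its protocol's DM encryption;
    # base64 here just keeps the sketch runnable
    return base64.b64encode(plaintext.encode()).decode()

def build_event(real: str, decoy: str, recipients: list) -> dict:
    # the public body is the poison; the real text rides along encrypted
    # per recipient, and a tag tells the relay which is which
    return {
        "content": decoy,                # publicly readable: the mangled version
        "tags": [["poison", "true"]]     # marks the public body as the decoy
              + [["real", pk, encrypt(real, pk)] for pk in recipients],
    }

event = build_event("meet at noon", "noon flee the harbor", ["alice_pubkey"])
```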