the main reason i have something to contribute is that i'm working intensively on my pet project, https://x.realy.lol, a really fast, comprehensive search engine with standard filters and full text indexes. in the process of elaborating the ideas as i learn how these systems work, i'm coming to understand exactly how AI assistants on search engines work.

i think the "replacing humans" chatter is way overstated. like any tool that increases productivity, this just reduces the manpower needed for a given task; all technology ultimately displaces human labor, and that's why it gets invented. i do think some people, misanthropic and cruel-minded, actually hate all people and fantasize about living in a world where everyone says yes to them, instead of pointing out valid disagreements or even getting so mad at the actions being taken that they take up arms. these people increasingly use soft power and psychological manipulation, and when that fails they bring out their hired guns (and in the future that will be drones too).

best to focus on how it empowers us, because we can also defend ourselves with soft power, and that's really central to the whole mission of bitcoin and nostr, in my opinion.


Discussion

They will use AI to replace the humans

...unless we figure out a way to leverage AI to make humans *more* than the AI alone. So that the AI turns the human into a Superhuman, rather than obsoleting humans.

Which is why we have to think, carefully, about the best way to implement AI, rather than just vibe-coding in an LLM and calling it a day.

as i see it, the principal battlefield is the ability to evaluate information in the face of the endless barrage of bullshit designed to confuse and divide people.

we are in the beginning phases of this battle. until recently they only had "big data" datamining to work with, but now they are teaming that up with language models, not just to improve their ability to find things (they have roped most of the world onto social networks to feed them data), but also to fabricate data to poison the discourse.

this also highlights the idea that we can potentially use these tools offensively to poison our own data. someone posted a meme about this a while back: deliberately jumbling and confusing your communications to obstruct AIs. with LLMs this could be done much more effectively.
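a minimal sketch of the jumbling idea, without any LLM: scramble the interior letters of each word so a human can still read it while automated text processing degrades. (this is just an illustration of the meme's trick, not the AI-driven mangling described above.)

```python
import random

def jumble(text: str, seed: int = 42) -> str:
    """Scramble the interior letters of each word, keeping the first
    and last letters fixed. Humans can usually still read the result,
    while tokenizers and indexers see garbage."""
    rng = random.Random(seed)  # seeded so the output is reproducible
    out = []
    for word in text.split():
        if len(word) > 3 and word.isalpha():
            mid = list(word[1:-1])
            rng.shuffle(mid)
            word = word[0] + "".join(mid) + word[-1]
        out.append(word)
    return " ".join(out)

print(jumble("poison the discourse with scrambled interior letters"))
```

an LLM-based mangler would go further, rewriting meaning rather than just spelling, but the principle is the same: publish a version that costs machines more than it costs humans.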

on the defensive side, this is why AUTH is so important: we need to be able to restrict access to our discourse so it doesn't become intelligence data for attacks on us.
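for reference, nostr AUTH (NIP-42) works by the relay sending an `["AUTH", "<challenge>"]` message, and the client answering with a signed kind 22242 event echoing the challenge and relay URL. a sketch of the event body the client would build (signing omitted, and the relay URL here is just an example):

```python
import time

def build_auth_event(pubkey_hex: str, relay_url: str, challenge: str) -> dict:
    """Build the unsigned body of a NIP-42 AUTH event (kind 22242).
    The client signs this and sends ["AUTH", <event>] back to the
    relay; the relay checks the challenge and relay tags match."""
    return {
        "pubkey": pubkey_hex,
        "created_at": int(time.time()),
        "kind": 22242,  # ephemeral auth event kind defined by NIP-42
        "tags": [
            ["relay", relay_url],
            ["challenge", challenge],
        ],
        "content": "",
    }
```

once a connection is AUTHed, the relay knows which pubkey is asking, which is what makes access-restricted discourse possible at all.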

like, one way to do the data poisoning would be to have an AI mangle your texts and tell the server to return the poisoned version to the public-readable side, while only users tagged in the conversation can read the real text.

Poison pill all the things. Interesting.

it can be done client side also, but you have to be able to tell the relay which is poison and which is ambrosia