I can't stop thinking about the clip I saw of Joe Rogan and Duncan Trussell... they don't understand how people are not aware of how monumentally catastrophic this AI stuff could be... and all I can think is that these people have been through a new panic every day for the last, I don't know, five years, so that when they heard aliens are real they all said, "Does that help pay my rent, which doubled in a couple of years while my pay stayed the same?" After scrolling TikTok and realizing I was waiting for the next social media hit, I understood they are doing this all day long. AI is just another plaything for them, and they will move on to the next celebrity story, or major breaking foreign news, or the next filter that makes them look like a baby... it's just another thing to pass the day.

There was another podcast I was listening to, and the idea was that AI will eventually be producing all the results. We are either going to become a game or we are going to tune out, and maybe we will eventually find nature again. Do we really need the internet? Do we need to be connected like that? Is it making a difference? Especially with all the AI. It will start reading minds, if it doesn't already. Big money will continue to exploit us. I guess it is all inevitable. Far-fetched sounding? I think more and more people understand now, but it's like "it is what it is, what's this new feature?" They can't do anything about it, so they just focus on their little microjoys (I have this book, I need to read it).

Times are scary, but I feel Nostr is the final answer to making this world healthier, as long as people understand, through all the "stress tests", that we have one objective: to give people their power back. #lastdeepthoughtof2023 #notproofed
Discussion
It is an understandable concern to have about AI. Luckily, here on nostr people understand the importance of keeping it as open as possible. Similar to being hooked on a pharmaceutical drug, people will become dependent on technological tools. The AI will change the way they think. The tool will learn, and they will not.
With proper education, I believe large language models (LLMs) can do more good than harm, though a community effort, rather than corporate and government interests, needs to be the driving force.