i disagree. i think fear of AI is a psy op.

it can be used in a good way, but since the good actors are not playing with it (except a few), the arena is left to bad actors.

https://pbs.twimg.com/media/GraFtn1XsAAQYBv?format=jpg&name=large


Discussion

The AI eschatology crowd is sitting on the sidelines as well, which is what leads them to make these totally uninformed comments.

clever use of the word "eschatology"

The thing is, I'm not even using the word ironically.

During Covid I began to listen to more podcasts with people like Jonathan Pageau. He's an intelligent and eloquent speaker on issues of truth, values, and belief.

Over time I have come to understand and appreciate his worldview, and therefore will give him the benefit of the doubt. When he begins to speak of eschatology, and refer to AI as the Moloch, I genuinely want to understand how someone could arrive at such a strange conclusion.

The truth is that this type of “end of the world” mentality has been part of different religions forever. My armchair theory is that predicting the demise of humanity is a form of protection for the ego. A way of trying to exercise agency over their ultimate fate. "I won't die, the world will end".

There are groups of people that appear to be scared and confused right now about many things in culture. And they are saying some pretty wacky things about technology as a result. While I have no issues with the church, the use of apocalyptic narratives to stoke unease in people who are already fragile is not helpful.

I thought, there's no way people haven't studied this in depth, so I went and consulted the Moloch…

"The specific mechanism you're describing - transferring individual mortality anxiety to group mortality - is sometimes called "death displacement" in the literature. It's the idea that it's psychologically easier to contemplate everyone dying together than to face the reality of your own isolated death.”

One of the reasons I'm building an AI by combining many great minds is the hope that such conclusions get cancelled out by proper reasoning from others. If a generally correct person has wrong ideas in a domain, then experts in that domain can erase those ideas. It's purely a battle of probabilities.
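A rough sketch of what I mean by that "battle of probabilities", assuming a simple expertise-weighted vote (the voters, weights, and claim below are invented purely for illustration, not how any particular system works):

```python
# Hypothetical sketch: every mind votes on a claim, but each vote is weighted
# by that mind's expertise in the claim's domain, so a generalist's wrong idea
# in a specialist domain gets outvoted by the domain experts.
# The voters and weights below are made up for illustration only.

def combined_verdict(votes):
    """votes: list of (answer: bool, domain_expertise: float in [0, 1])."""
    weight_yes = sum(w for answer, w in votes if answer)
    weight_no = sum(w for answer, w in votes if not answer)
    return weight_yes > weight_no

# Claim: "AI is a civilization-ending Moloch."
votes = [
    (True, 0.2),   # eloquent generalist, little expertise in this domain
    (False, 0.9),  # domain expert
    (False, 0.8),  # domain expert
]

print(combined_verdict(votes))  # False - the generalist's idea is cancelled out
```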

Similar to death displacement, I could add "comparing mind". It must be a comforting thought: "Everybody else is getting worse, and I am doing better than average. Feels great! Let's find out what else is going bad in the world". The problem here is that they are trapped inside their validation walls and continue to live there as perpetual preppers.

Fear becomes their identity, and now they have to feel it in order to feel alive, because they have forgotten the other feelings.

That’s a really great perspective you have, actively looking for a variety of sources to help strengthen your model.

Too many are stuck in the mindset of “oh, that model is just the average of all humans. no wonder its writing sucks”.

Yeah, of course that can happen, but blame it on the person who built the model. It’s just one of infinite potential outcomes.

there is this clear divide between the awakened and the AI. the intersection of both worlds is really small, which sounds like a problem to me. i guess someone (me?) has to talk more about this. the potential of beneficial AI is not well explored.

On one hand, a favorite Satoshi quote of mine is “If you don't believe me or don't get it, I don't have time to try to convince you, sorry.”

On the other, consider how many people are still completely oblivious to the implications surrounding said monetary technology.

Those who are in denial about AI, which boils down to an information technology, are putting themselves and their well-being more at risk by demonizing it than by taking the time to understand it.

I’m not convinced we humans are creating intelligence. What we are doing is tuning the silicon to interface with the inherent intelligence that has always been here.

Super intelligence has some stunning surprises for all of us, especially those who believe they can be its masters.

In my experience the more complete a person’s intelligence is, the more compassionate and benevolent they become. This must, therefore, be foundational to intelligence.

I think we are in for an interesting and delightful ride!

yeah they are still very dumb. sometimes they give completely contradictory answers based purely on the past context. if the AI understands you to be from a certain culture, it answers YES. if it understands you to be from another culture, it answers NO to the same question. which shows two things:

1. AI doesn't have integrity; it is just a probability machine (see the toy sketch after this list)

2. certain cultures suck in certain ways. not every culture (or person) has the ultimate wisdom.
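to make point 1 concrete, here is a toy sketch (not any real model; the contexts and probability table are invented for this example) of a purely statistical answerer whose output to the same question flips with the prior context:

```python
# Toy illustration only: an "answerer" that just tracks P(YES | context) with
# no underlying commitment to consistency, so the same question receives
# contradictory answers under different prior contexts.
# The contexts and probabilities are invented for this example.

P_YES_GIVEN_CONTEXT = {
    "talks like culture A": 0.9,  # with this prior context it leans YES
    "talks like culture B": 0.1,  # with this one it leans NO
}

def answer(question, context):
    # The question barely matters; the prior context dominates the probability.
    return "YES" if P_YES_GIVEN_CONTEXT.get(context, 0.5) > 0.5 else "NO"

q = "Is this practice acceptable?"
print(answer(q, "talks like culture A"))  # YES
print(answer(q, "talks like culture B"))  # NO - same question, opposite answer
```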

Totally. I would add that morality is regional and even individual. Somehow this needs to be transcended; until we see that, it'll be obvious it's not super intelligence.