If we’re building systems based on the way humans learn and communicate, we also have to accept that there are no universal rules for establishing which humans are nefarious or naïve, which ones give and which ones take, and so on. The flaw in the current thinking (IMHO) is assuming robots are going to be different. The outcome is not destined to be good or bad; it will continue to be a balance of both.

The main challenge for humans is that we have limited bandwidth for cognitive input, and things are going to get exponentially noisier. That’s where the robots are better equipped, and the risk to us is not being enslaved; it’s being completely overwhelmed.

It’s not hopeless, it’s just different. Humans are super resourceful and resilient, and I’m optimistic we’ll come up with solutions.

Discussion

Good point, people change too, so trust isn’t forever. Maybe the deal is that each npub’s history (LLM or not) gets analyzed by an algorithm you trust, ideally one you control yourself, and you look at the output and decide for yourself whether you’d like to interact with it.
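For what it’s worth, here’s a minimal sketch of what “an algorithm you control yourself” could look like: a local heuristic over an npub’s note history that produces a score you can inspect before deciding to interact. Everything in it (the Event shape, the scoring heuristics, the threshold) is a made-up illustration, not an existing Nostr client or library API.

```python
# Hypothetical sketch: locally score an npub's posting history and
# decide whether to interact. The event structure and heuristics are
# assumptions for illustration only.
from dataclasses import dataclass
from typing import List


@dataclass
class Event:
    npub: str        # author's public key ("npub...")
    content: str     # note text
    created_at: int  # unix timestamp


def trust_score(history: List[Event]) -> float:
    """Toy heuristic: reward a long, varied history; penalize spammy repetition."""
    if not history:
        return 0.0
    unique_ratio = len({e.content for e in history}) / len(history)
    longevity = min(len(history) / 100, 1.0)  # saturates after ~100 notes
    return 0.5 * unique_ratio + 0.5 * longevity


def should_interact(history: List[Event], threshold: float = 0.6) -> bool:
    # The threshold is yours to tune; the point is that the rule runs
    # locally and you can inspect both the score and its inputs.
    return trust_score(history) >= threshold


if __name__ == "__main__":
    sample = [Event("npub1example", f"note {i}", 1_700_000_000 + i) for i in range(30)]
    print(trust_score(sample), should_interact(sample))
```

The specific heuristics matter less than the design choice: the scoring runs on your machine, against history you fetched yourself, with rules you can read and change.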

I know it’s an aside, but only economically valuable entities get enslaved, and I don’t think humans have a future in the economy; we just won’t be able to contribute economically. If we can’t enslave the machines or benefit from their work, then we’ll have to kick them off Earth, or at least parts of it.