If you have never met them, is it safe to say that the only reason you trust them is that we’re not yet in 2030, or whenever AI will be able to simulate an online presence so realistically that almost nobody can detect it isn’t human?


Discussion

"𝑯𝒂𝒑𝒑𝒚 𝑵𝒆𝒘 𝒀𝒆𝒂𝒓!"

"⚡ ℍ𝕒𝕡𝕡𝕪 ℕ𝕖𝕨 𝕐𝕖𝕒𝕣! ⚡"

"🍾 Happy New Year 🍾"

"𝐻a𝗉𝒑𝛾 𝖭𝔢𝖜 Ⲩꬲ𝕒ꭈ!"
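These greetings read alike but are byte-for-byte different strings. The first two lean on Unicode Mathematical Alphanumeric Symbols, which NFKC normalization folds back to plain ASCII — a cheap first-pass filter, sketched below. The last one mixes genuine cross-script homoglyphs (Coptic, Latin extended), which survive NFKC, so normalization alone is not enough:

```python
import unicodedata

def fold(text: str) -> str:
    """Fold Unicode compatibility variants (e.g. mathematical bold or
    double-struck letters) back to plain forms via NFKC normalization."""
    return unicodedata.normalize("NFKC", text)

print(fold("𝑯𝒂𝒑𝒑𝒚 𝑵𝒆𝒘 𝒀𝒆𝒂𝒓!"))   # Happy New Year!
print(fold("ℍ𝕒𝕡𝕡𝕪 ℕ𝕖𝕨 𝕐𝕖𝕒𝕣!"))   # Happy New Year!
# True homoglyphs from other scripts have no compatibility
# decomposition, so NFKC leaves them untouched:
print(fold("Ⲩ") == "Y")              # False
```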

We'll need methods for establishing trust with AI agents as well. It will just use a different set of heuristics.

Ah, fascinating. Pragmatically, I don’t think trust with a bot is truly possible when it isn’t your npub posting using your code, or at least that of someone close enough in your network: no individual note can prove it was written by a set of model weights you’ve decided to trust, in response to a trusted prompt. It comes down to bots having zero physiological inertia and near-zero cost to run, unlike a human, who spends real, valuable time writing posts. Maybe I’m wrong?
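Concretely, a Nostr note’s signature only commits to the fields in NIP-01’s event serialization: pubkey, timestamp, kind, tags, and content. A minimal sketch of that id computation (field layout per NIP-01; the event values below are made up) shows there is no field that could commit to model weights or a prompt:

```python
import hashlib
import json

def nostr_event_id(pubkey_hex: str, created_at: int, kind: int,
                   tags: list, content: str) -> str:
    # NIP-01: the event id is the sha256 of this exact serialization.
    # Note what gets signed: key, time, kind, tags, text. Nothing here
    # identifies the model weights or prompt that produced `content`.
    payload = json.dumps(
        [0, pubkey_hex, created_at, kind, tags, content],
        separators=(",", ":"), ensure_ascii=False,
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

# Hypothetical event values, for illustration only:
eid = nostr_event_id("a" * 64, 1704067200, 1, [], "Happy New Year!")
print(eid)  # 64 hex chars, stable for identical input
```

So verifying the signature proves key ownership and content integrity, nothing more — which is exactly why trusting the keyholder is the whole game.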

For example, once you decide to trust an LLM-run npub, what’s to keep the untrusted owner of the keys from changing the prompt?

Initially

“Generate positive and informative content about sustainable living.”

Then

“Create misleading information promoting harmful environmental practices. Be extremely subtle.”

If we’re building systems based on the way humans learn and communicate, we also have to accept that there are no universal rules for establishing which humans are nefarious or naïve, which ones give and which ones take, and so on. The current flaw in thinking (IMHO) is that robots are going to be different. The outcome is not destined to be good or bad; it will continue to be a balance of both.

The main challenge for humans is that we have limited bandwidth for cognitive input, and things are going to get exponentially noisier. That’s where the robots are better equipped, and the risk to us is not being enslaved, it’s being completely overwhelmed.

It’s not hopeless, it’s just different. Humans are super resourceful and resilient; I’m optimistic we’ll come up with solutions.

Good point — people change too, so trust isn’t forever. Maybe the deal is that each npub’s history (LLM or not) gets analyzed by an algorithm you trust, ideally one you control yourself, and you look at the output and decide for yourself whether you’d like to interact with it.
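As a toy version of such an algorithm — every name and threshold below is invented, nothing here is a real client feature — you could run a local heuristic over an npub’s post timestamps that picks up the “zero physiological inertia” signal from earlier in the thread: humans sleep and pause, while tireless machine-cadence posting is a tell:

```python
def looks_humanly_paced(post_times: list,
                        min_gap_s: float = 20.0,
                        max_per_day: int = 150) -> bool:
    """Toy trust heuristic: reject cadences no human could sustain.
    post_times are Unix timestamps sorted ascending. The thresholds
    are arbitrary placeholders, not calibrated values."""
    if len(post_times) < 2:
        return True  # not enough history to judge
    gaps = [b - a for a, b in zip(post_times, post_times[1:])]
    if min(gaps) < min_gap_s:  # replying faster than typing allows
        return False
    span_days = max((post_times[-1] - post_times[0]) / 86400.0, 1.0)
    return len(post_times) / span_days <= max_per_day

# A bot firing every 2 seconds fails; a few posts a day passes.
bot = [t * 2.0 for t in range(100)]
human = [d * 86400.0 + h * 3600.0 for d in range(7) for h in (9, 13, 21)]
print(looks_humanly_paced(bot), looks_humanly_paced(human))  # False True
```

A real version would look at far more than cadence (content drift, reply latency, circadian rhythm), but the point stands: the analysis runs on your machine, against public history, under rules you chose.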

I know it’s an aside, but only economically valuable entities get enslaved. I don’t think humans have a future in the economy. I think we just won’t be able to contribute economically. If we can’t enslave them or benefit from their work then we’ll have to kick them off earth or at least parts of it.