If you want people to understand the differences between platforms, you have to accept that this comes with a learning curve.

Either people LEARN some programmer jargon, and then they can take advantage of the versatility and openness of the platform... or nostr turns into another oversimplified, unnecessarily addictive clone of an existing big tech platform.

Yes, some advantages (not all) will remain in the background, but people won't be able to utilize them.

Most don't want to learn. They want to use nostr just like any other platform, without thinking about what that implies.

And many devs go along, because more people means more popularity or more money.

Let me give 2 examples:

1.

Nostr is a permissionless platform, meaning anyone can (should be able to) make a new keypair any time they wish; a sketch of how little that takes follows below. But they don't want to.

They want to get verified instead. They prefer a large follower base over privacy.

And the verification service? You betcha. Centralized. It has to be, unless you're willing to make the effort to personally verify (to the best of your ability) the identity of the person you're talking to (or at least that they're in fact real).
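To make the centralization concrete: the common verification scheme on nostr is NIP-05, which ties a name@domain identifier to a pubkey via a file served from that domain. A rough sketch of the check a client performs (the identifier and pubkey here are made up):

```python
# Rough sketch of the client-side NIP-05 check. Note where the trust
# sits: whoever controls the domain (and its DNS) controls what the
# well-known file says. The identifier and pubkey are hypothetical.
import json
import urllib.request

def nip05_matches(identifier: str, claimed_pubkey: str) -> bool:
    """Check a name@domain identifier against a claimed hex pubkey."""
    name, domain = identifier.split("@", 1)
    url = f"https://{domain}/.well-known/nostr.json?name={name}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.load(resp)
    # the domain owner, not the network, decides this mapping
    return data.get("names", {}).get(name) == claimed_pubkey

# hypothetical usage:
# nip05_matches("alice@example.com", "3bf0c63f...")
```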
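And to ground the permissionless claim from the top of this example: an identity is nothing more than 32 random bytes and one curve multiplication. A minimal pure-Python sketch, assuming standard secp256k1 keys as specified by NIP-01 (a real client would use a vetted crypto library, not hand-rolled math; needs Python 3.8+ for modular inverses via pow):

```python
# Illustrative only: a nostr identity (NIP-01) is just a secp256k1
# keypair. This shows how little "permissionless" costs.
import secrets

# secp256k1 public constants
P = 2**256 - 2**32 - 977  # field prime
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # group order
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def point_add(a, b):
    """Add two affine points on secp256k1 (None is the point at infinity)."""
    if a is None:
        return b
    if b is None:
        return a
    (x1, y1), (x2, y2) = a, b
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if a == b:
        m = 3 * x1 * x1 * pow(2 * y1, -1, P) % P
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (m * m - x1 - x2) % P
    return (x3, (m * (x1 - x3) - y1) % P)

def point_mul(k, point):
    """Scalar multiplication by double-and-add."""
    result = None
    while k:
        if k & 1:
            result = point_add(result, point)
        point = point_add(point, point)
        k >>= 1
    return result

def new_keypair():
    """Mint a fresh identity: 32 random bytes -> secret key; the x
    coordinate of sk*G, hex-encoded, is the NIP-01 public key."""
    sk = secrets.randbelow(N - 1) + 1
    x, _ = point_mul(sk, G)
    return sk.to_bytes(32, "big").hex(), x.to_bytes(32, "big").hex()

sk_hex, pk_hex = new_keypair()
print("secret key:", sk_hex)
print("public key:", pk_hex)
```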

2.

Content filters. You can either learn to make your own (a sketch follows below), or you can rely on a centralized provider. Who would bother writing their own, apart from nerds?
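For anyone who does bother, here is roughly what "your own" can look like: a few lines that run on your machine, under your rules. The event shape follows NIP-01; the specific mute lists are hypothetical placeholders:

```python
# A minimal do-it-yourself filter over NIP-01-shaped events (dicts with
# "pubkey", "kind", "content"). The point is that the rules live with
# you, not with a provider. Mute entries below are placeholders.
MUTED_PUBKEYS = {"ab" * 32}            # hex pubkeys you chose to mute
MUTED_WORDS = {"giveaway", "airdrop"}  # terms you never want to see

def keep(event: dict) -> bool:
    """Return True if an event passes your own locally defined rules."""
    if event.get("pubkey") in MUTED_PUBKEYS:
        return False
    content = event.get("content", "").lower()
    return not any(word in content for word in MUTED_WORDS)

# toy feed to demonstrate the filter
events = [
    {"pubkey": "cd" * 32, "kind": 1, "content": "gm nostr"},
    {"pubkey": "cd" * 32, "kind": 1, "content": "FREE AIRDROP click here"},
    {"pubkey": "ab" * 32, "kind": 1, "content": "hello"},
]
print([e["content"] for e in events if keep(e)])  # -> ['gm nostr']
```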


Discussion

case in point: you filled your reply with jargon and gatekeeping crosswords. it's not up to you to determine "most don't want to learn" and then refuse them access to accurate, simple language in which to assess circumstances. people do not learn in code first. they learn basic language. then they build symbolic and layered linguistic comprehension on top of the baseline. my post was specific and directed at nostr:npub1sg6plzptd64u62a878hep2kev88swjh3tw00gjsfl8f237lmu63q0uf63m, not you anyway.

what difference does it make whether an account is "real" or not? especially considering much of social media is run by teams automating even verified human posts now. i find it ironic that the programmers who are the most entrenched in building ai models are also the most obsessed with identity verification and validator maximalisation. the profiteers who create the problem are also creating the "concerns about safety" ... sounds profitable.

i replied to everyone -

interesting how, in all the commentary i conversed back to you, you chose to fixate on the one comment i made, which you yourself didn't respect and i did.