Blake
b2dd40097e4d04b1a56fb3b65fc1d1aaf2929ad30fd842c74d68b9908744495b
#Bitcoin #Nostr #Freedom wss://relay.nostrgraph.net

Just caught the end. Thanks for sharing and building on Nostr! 🤙

Tough question to answer in detail.

Both strive to address the centralisation of today’s social media, and the control, moderation and censorship that come with it.

Nostr is based on a single defined fragment of data called an event. Everything in Nostr is an event. Different kinds exist, allowing different use cases - like a user profile, contact lists, reactions, login, etc. No one can stop you joining or using the protocol (no invite needed). Data is hosted by dumb relays that allow data redundancy and resilience against single-point (and even multi-point) censorship. However, if you want the data to be hosted forever, you’ll likely need to self-host or pay for one or more relays. Your identity is your self-derived keys.
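To make “everything is an event” concrete, here’s a minimal TypeScript sketch of the NIP-01 event shape; the field values below are placeholders, not real keys or signatures.

```ts
// Minimal sketch of the NIP-01 event: every Nostr message uses this one structure.
interface NostrEvent {
  id: string;         // sha256 of the serialized event, hex
  pubkey: string;     // author's public key, hex
  created_at: number; // unix timestamp in seconds
  kind: number;       // 0 = profile, 1 = text note, 3 = contact list, 7 = reaction, ...
  tags: string[][];   // e.g. ["e", "<event-id>"] or ["p", "<pubkey>"]
  content: string;    // payload; stringified JSON for kind 0, plain text for kind 1
  sig: string;        // schnorr signature over `id` by `pubkey`
}

// Example kind-0 profile event (id/sig elided; they are computed and signed client-side).
const profile: NostrEvent = {
  id: "<computed>",
  pubkey: "<your-pubkey>",
  created_at: Math.floor(Date.now() / 1000),
  kind: 0,
  tags: [],
  content: JSON.stringify({ name: "blake", about: "#Bitcoin #Nostr" }),
  sig: "<signed>",
};
```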

Bluesky seems to follow a federated Mastodon-style model that attempts to have community hub servers that still host all your content - so you hope they don’t just ban you one day. It’s a complex AT Protocol with lots of different formats and sub-protocols. It doesn’t appear to use cryptography effectively - the server holds your secret/private keys on your behalf. It’s also developed by a company as opposed to an open community.

Replying to brugeman

Would you prefer an approach like this https://data.nostr.band - a common kind for all query types, with a d-tag to specify query params. Or a separate kind dedicated to each of trending X, curated Y?

Ndapp is an interesting approach.

I think a common replaceable event kind published every 15-60 minutes (or daily) could be useful to client apps for trending. People could select from different pubkeys - whichever seems better quality, etc. You lose historic data, but that’s likely ok.

Variants for people, topics, hashtags, relays, etc.
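As a rough sketch of what that could look like on the wire - the kind number 30999 and the d-tag values below are my own hypothetical choices, not an existing NIP:

```ts
// Hypothetical trending event: a parameterized replaceable kind (30000-39999 range),
// re-published every 15-60 minutes by a pubkey that clients choose to trust.
const trendingNotes = {
  kind: 30999, // hypothetical kind number
  created_at: Math.floor(Date.now() / 1000),
  tags: [
    // Relays keep only the latest event per (pubkey, kind, d-tag), so each
    // publish replaces the previous ranking - hence "you lose historic data".
    ["d", "trending:notes"],
    ["e", "<note-id-1>"], // ranked results as event references
    ["e", "<note-id-2>"],
  ],
  content: "",
};
// Variants would just change the d-tag: "trending:people", "trending:hashtags", "trending:relays".
```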

The main issue is scope, or abuse: publishers faking results to benefit themselves, or rankings that aren’t dynamic enough. And automated content that’s repetitive ranks highly by raw count today without filtering.

Did you know that TikTok and Facebook and all those companies literally train their (mostly offshore) moderators and staff - with a classification list of content to target? Often people who don’t speak the language of the source content are ‘moderating’ it.

It’s the first step toward “regulation” made easy. For censorship resistance properties, it should be made impossible or extremely costly.

There is literally a context code proposed for “Political Protest”. Most countries have biased media, but many have extremely government controlled media. Nostr is a tool to allow real humans, and not ministry of truth, to communicate freely. The number of political elections that have media censorship and worse…

The current proposed NIP is actually related to a business they are starting. It has little to do with the wider community benefiting from it - it’s how they seek to make money and moderate their business.

I’m just glad it won’t work anyway. People just need to be aware that powers without the same freedom-of-communication mindset will at some point attempt to censor Nostr.

For one, AI can do the classification in the future - even client side. It’s likely a bad thing for censorship resistance, but alas you can’t un-invent technology.

Two, agreed. It’s a developing area. It impacts a lot of things. But just as people infiltrate LinkedIn and friend all your colleagues to look legit, the same will happen on decentralised networks. Network Trust schemes at scale are weirdly more susceptible to imposters.

Three, I’m comparing how the protocol level should be designed for censorship resistance in all aspects. It’s literally why Nostr exists. What people can censor is their own private/community relays - set one up, host whatever content you like, allow whoever you give permission to, ban people - but that doesn’t impact the Nostr protocol.

Ideas are great - but worthless without being executed in a way that’s possible (it can’t be a fake solution that is vapourware), and without actually considering and addressing how they will fail or what their side effects are.

Well that’s the key problem. The idea itself is actually fine. It’s some arbitrary list of tags for content.

It has three key problems, and all are execution failures.

1. This has never worked - ever - and people will never be consistent or accurate when self-tagging. Even porn websites don’t have accurate tags.

2. Community tagging is just as unsolved a problem. What if I flood the system by flagging all your content as illegal? It can be abused easily.

3. Let’s say we have a perfectly tagged system. Great. Now laws pass to make relays ban/delete certain tagged content. Great. Now we lose all old content, no new content is tagged correctly in protest, and the whole system dies anyway.

I shared some related ideas that can help this area progress toward user flexibility and choice - without creating censorship foundations.

1. Private follow lists for privacy

2. Relay selection at point of publish

3. Client app ‘content-warning’ button at point of publish

4. New key for profile meta to self-tag an account with ‘content-warning’ (sketched after this list, along with idea 5)

5. Possible new relay metadata key to self-flag relay as having sensitive content

6. Again, use your own private relay and have whatever rules you want
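A rough sketch of what ideas 4 and 5 could look like - the ‘content-warning’ profile key and the relay flag are hypothetical extensions, not existing NIPs:

```ts
// Idea 4 (hypothetical): a self-applied key in kind-0 profile metadata.
const profileContent = JSON.stringify({
  name: "example",
  about: "account posting sensitive content",
  "content-warning": "nsfw", // hypothetical key; clients could blur/hide by default
});

// Idea 5 (hypothetical): a self-applied flag in the relay's NIP-11 information document.
const relayInfo = {
  name: "example relay",
  description: "community relay hosting sensitive content",
  content_warning: true, // hypothetical NIP-11 extension, not part of the spec
};
```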

Be wary of formal fine-grained content 'classification' proposals for Nostr that claim to help individual communities or specific groups (sexuality, children, religion), as they are effectively a future backdoor for censoring all content - even if well intentioned today.

If your community or group needs to opt into a 'self-isolation bubble' for whatever reason, that's a perfect use case for community-run private relays. You can moderate and censor your community with your own rules, and even have special punishments - it just doesn't belong in a protocol designed to prevent censorship and arbitrary moderators making decisions on behalf of others.

If they can stop you from using their network as a new user... they can stop you using their network as a highly invested user with all your content.

And if you read their proposed TOS, they own all your content. Even if that’s legally not possible, it shares their mindset, intent, and values clearly.

It’s something you’ll likely have data for client-side anyway, as it’s similar to web of trust - caching who the people you follow follow, so a membership check can be done locally.
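A minimal sketch of that local check, assuming the kind-3 contact lists for the people you follow are already cached (the pubkeys below are placeholders):

```ts
// Build the set of pubkeys within two hops: my follows plus everyone they follow.
function buildSecondDegree(
  myFollows: string[],
  contactLists: Map<string, string[]> // cached "p" tags from each follow's kind-3 event
): Set<string> {
  const known = new Set<string>(myFollows);
  for (const follow of myFollows) {
    for (const pk of contactLists.get(follow) ?? []) known.add(pk);
  }
  return known;
}

// Usage: membership becomes a local set lookup, no relay round-trip needed.
const myFollows = ["<pubkeyA>", "<pubkeyB>"];
const contactLists = new Map<string, string[]>([
  ["<pubkeyA>", ["<pubkeyC>", "<pubkeyD>"]],
  ["<pubkeyB>", ["<pubkeyC>"]],
]);
const inWebOfTrust = buildSecondDegree(myFollows, contactLists).has("<pubkeyC>");
```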

It doesn’t protect against someone you trust changing their profile to impersonate someone else - as they will likely already be followed by people you follow and have follower counts, etc.

Here’s an idea I had to combat it:

nostr:note1h7hrag8300cnx6z3ua6qs5d8ne4p5eus6uuavjzlphm66m27t42sr44rc9

Could client apps perhaps store a local hash of name/display_name and profile image for pubkeys they follow, and then detect a duplicate/mismatch?

(I’m glossing over how image hashing/similarity matching could be calculated).

The pubkey with the newer profile update becomes the suspected impersonator, and the app could flag it or render it as less-trusted/untrusted in the UX while awaiting user input.

I think name and profile image are the two major things people read to match identities, since both are displayed in the timeline.

Could even be used in the global feed - “this post’s pubkey has a name/profile image matching someone you follow.”
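A sketch of how a client could implement this - the record shape and helper names are my own, and the image fingerprint is a placeholder for whatever perceptual-hash scheme is used (the part I’m glossing over):

```ts
import { createHash } from "node:crypto";

// What the client stores locally for each followed pubkey, derived from its kind-0 event.
interface ProfileRecord {
  pubkey: string;
  nameHash: string;         // hash of normalized name + display_name
  imageFingerprint: string; // placeholder: perceptual hash of the profile image
  updatedAt: number;        // created_at of the kind-0 event
}

function hashName(name: string, displayName: string): string {
  const normalized = `${name}\u0000${displayName}`.trim().toLowerCase();
  return createHash("sha256").update(normalized).digest("hex");
}

// If two different pubkeys collide on name or image, the one with the newer
// profile update is the suspected impersonator and can be flagged in the UX.
function findSuspect(a: ProfileRecord, b: ProfileRecord): ProfileRecord | null {
  if (a.pubkey === b.pubkey) return null;
  const collides = a.nameHash === b.nameHash || a.imageFingerprint === b.imageFingerprint;
  if (!collides) return null; // no duplicate/mismatch between these two records
  return a.updatedAt > b.updatedAt ? a : b;
}
```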

RIP. As a kid, he helped show how crazy some people live - and in a weird way; just how crazy lives could become.