Just caught the end. Thanks for sharing and building on Nostr!
Tough question to answer in detail.
Both strive to address the centralisation issue with today's social media and their control, moderation and censorship.
Nostr is based on a single defined fragment of data called an event. Everything in Nostr is an event. Different kinds exist, allowing different use cases - like a user profile, contact lists, reactions, login, etc. No one can stop you joining or using the protocol (no invite needed). Data is hosted by dumb relays, which allows data redundancy and resilience against single-point (and even multi-point) censorship. However, if you want the data to be hosted forever, you'll likely need to self-host or pay for one or more relays. Your identity is your self-derived keys.
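For reference, a sketch of the event shape NIP-01 defines - the field names follow the spec, the Swift type itself is just illustrative:

```swift
import Foundation

// Sketch of the NIP-01 event structure; field names follow the spec,
// the Swift type is only illustrative.
struct NostrEvent: Codable {
    let id: String         // sha256 of the serialized event, hex-encoded
    let pubkey: String     // author's public key, hex-encoded
    let created_at: Int64  // unix timestamp in seconds
    let kind: Int          // 0 = profile metadata, 1 = text note, 3 = contact list, ...
    let tags: [[String]]   // e.g. ["e", "<event-id>"], ["p", "<pubkey>"]
    let content: String    // payload; interpretation depends on kind
    let sig: String        // Schnorr signature of `id` by `pubkey`
}
```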
Bluesky seems to be a federated Mastodon-style model that attempts to have community hub servers that still host all your content - so you hope they don't just ban you one day. It's a complex AT Protocol with lots of different formats and sub-protocols. It doesn't appear to use cryptography effectively - the server holds your secret/private keys on your behalf. It's also developed by a company, as opposed to an open community.
And diversifying… new people and different aspects of life.
Would you prefer an approach like this https://data.nostr.band - a common kind for all query types, with a d-tag to specify the query params? Or a separate kind dedicated to each of trending X, curated Y?
Ndapp is an interesting approach.
I think a common replaceable event kind published every 15-60 minutes (or daily) could be useful to client apps for trending. People can select from different pubkeys - whichever seems better quality, etc. You lose history, but that's likely OK.
Skews for people, topics, hashtags, relays, etc.
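Something like this, as a purely hypothetical sketch - the kind number and tag values are invented, but the d-tag addressing of parameterized replaceable events (kind 30000-39999) is standard, so republishing simply overwrites the previous snapshot:

```swift
// Hypothetical example only: the kind number and tag values are invented.
// Parameterized replaceable events are addressed by pubkey + kind + d-tag,
// so a snapshot republished every 15-60 minutes replaces the last one
// rather than accumulating history.
let trendingSnapshot = """
{
  "kind": 30777,
  "created_at": 1700000000,
  "tags": [
    ["d", "trending/hashtags/1h"],
    ["t", "nostr"],
    ["t", "bitcoin"],
    ["t", "privacy"]
  ],
  "content": ""
}
"""
```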
The main issue is scope, or abuse: publishers faking results to benefit themselves, or results maybe not being dynamic enough. And automated content that's repetitive ranks highly by count today without filtering.
Did you know that TikTok and Facebook and all those companies literally train their (mostly offshore) moderators and staff - with a classification list of content to target? Often people who don't speak the language of the source content are "moderating" it.
It's the first step toward "regulation" made easy. For censorship resistance properties, it should be made impossible or extremely costly.
There is literally a context code proposed for "Political Protest". Most countries have biased media, but many have extremely government-controlled media. Nostr is a tool to allow real humans, and not a ministry of truth, to communicate freely. The number of political elections that have media censorship and worse…
The current proposed NIP is actually related to a business they are starting. It has little to do with the wider community benefiting from it - it's how they seek to make money and moderate their business.
I'm just glad it won't work anyway. People just need to be aware that powers without the same freedom-of-communication mindset will at some point attempt to censor Nostr.
For one, AI can do the classification in the future - even client-side. It's likely a bad thing for censorship resistance, but alas, you can't un-invent technology.
Two, agreed. It's a developing area. It impacts a lot of things. But just as people infiltrate LinkedIn and friend all your colleagues to look legit, the same will happen on decentralised networks. Network trust schemes at scale are weirdly more susceptible to imposters.
Three, I'm comparing how the protocol level should be designed for censorship resistance in all aspects. It's literally why Nostr exists. What people can censor is their own private/community relays - set one up, host whatever content you like, allow whoever you give permission to, ban people - but that doesn't impact the Nostr protocol.
Ideas are great - but worthless unless executed in a way that's possible (not a fake solution that is vapourware), and unless they actually consider and address how they will fail, or their side effects.
Well, that's the key problem. The idea itself is actually fine. It's some arbitrary list of tags for content.
It has three key problems, and all are execution failures.
1. This has never worked - ever - and people will never be consistent or accurate when self-tagging. Even porn websites don't have accurate tags.
2. Community tagging is an unsolved problem just the same. What if I DDOS all your content as illegal? It can be abused easily.
3. Let's say we have a perfectly tagged system. Great. Now laws pass to make relays ban/delete certain tagged content. Great. Now we lose all old content, no new content is tagged correctly in protest, and the whole system dies anyway.
I shared some related ideas that can help this area progress on user flexibility and choice - without creating censorship foundations.
1. Private follow lists for privacy
2. Relay selection at point of publish
3. Client app "content-warning" button at point of publish (see the sketch after this list)
4. New key for profile meta to self-tag an account with "content-warning"
5. Possible new relay metadata key to self-flag relay as having sensitive content
6. Again, use your own private relay and have whatever rules you want
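To sketch ideas 3 and 4: NIP-36 already defines a per-event content-warning tag, while the profile-level key shown here is only a hypothetical extension, not an existing NIP:

```swift
// Idea 3: NIP-36 defines a per-event ["content-warning", <optional reason>] tag
// that clients can attach at point of publish.
let flaggedNote = """
{
  "kind": 1,
  "tags": [["content-warning", "nsfw"]],
  "content": "..."
}
"""

// Idea 4 (hypothetical, not an existing NIP): a self-tag key inside the
// kind-0 profile metadata, which is itself JSON carried in `content`.
let selfTaggedProfile = """
{
  "kind": 0,
  "content": "{\\"name\\": \\"alice\\", \\"content-warning\\": \\"nsfw\\"}"
}
"""
```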
Be wary of formal fine-grained content 'classification' proposals for Nostr that claim to help individual communities or specific groups (sexuality, children, religion), as they are effectively future backdoor censorship for all content - even if well-intentioned today.
If your community or group needs to opt into a 'self-isolation bubble' for whatever reason, that's a perfect use case for community-run private relays. You can even moderate and censor your community with your own rules, and even have special punishments - it just doesn't belong in a protocol designed to prevent censorship and arbitrary moderators making decisions on behalf of others.
If they can stop you from using their network as a new user… they can stop you using their network as a highly invested user with all your content.
And if you read their proposed TOS, they own all your content. Even if that's legally not possible, it shares their mindset, intent, and values clearly.
Needing an invite was the first KO.
Apple's Vision framework has an image similarity comparator too (feature prints).
https://stackoverflow.com/questions/71615277/image-similarity-in-swift
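Roughly, as a sketch - the function names are mine; smaller distance means more similar, and the threshold would need tuning:

```swift
import CoreGraphics
import Vision

// Sketch: compare two images via Vision feature prints.
func featurePrint(for image: CGImage) throws -> VNFeaturePrintObservation? {
    let request = VNGenerateImageFeaturePrintRequest()
    try VNImageRequestHandler(cgImage: image, options: [:]).perform([request])
    return request.results?.first as? VNFeaturePrintObservation
}

func similarityDistance(_ a: CGImage, _ b: CGImage) throws -> Float? {
    guard let fpA = try featurePrint(for: a),
          let fpB = try featurePrint(for: b) else { return nil }
    var distance: Float = 0
    try fpA.computeDistance(&distance, to: fpB)
    return distance  // smaller = more similar
}
```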
It's something you'll likely have the data for client-side anyway, as it's similar to web of trust - so caching who your follows follow and running a membership check can be done locally.
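A sketch of that local check, with hypothetical names - the contact lists would come from cached kind-3 events:

```swift
// Sketch with hypothetical names. `contactLists` maps a pubkey to the
// pubkeys it follows (from cached kind-3 contact-list events).
func trustSet(firstDegree: Set<String>,
              contactLists: [String: Set<String>]) -> Set<String> {
    // first-degree pubkeys plus everyone they follow
    firstDegree.union(firstDegree.flatMap { contactLists[$0] ?? [] })
}

// Usage: the membership check runs entirely locally.
let follows: Set = ["npub_a", "npub_b"]
let cached = ["npub_a": Set(["npub_c"]), "npub_b": Set(["npub_c", "npub_d"])]
let trusted = trustSet(firstDegree: follows, contactLists: cached)
let isKnown = trusted.contains("npub_d")  // true
```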
It doesnât protect against someone you trust changing their profile to impersonate - as they will likely already be followed by people you follow and have follower counts, etc.
Here was an idea I had to combat it
nostr:note1h7hrag8300cnx6z3ua6qs5d8ne4p5eus6uuavjzlphm66m27t42sr44rc9
Could client apps perhaps store a local hash of name/display_name and profile image for pubkeys they follow, and then detect a duplicate/mismatch?
(I'm glossing over how image hashing/similarity matching could be calculated.)
The pubkey with the newer profile update becomes the suspected impersonation, and the app could flag it or show less-trusted/untrusted UX, awaiting user input.
I think name and profile image are the two major things people read to match identities, since both are displayed in the timeline.
Could even be used in the global feed - "this post's pubkey has an imitating name/profile image match to someone you follow."
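A rough sketch of what that cache and check could look like - the types are hypothetical, and exact hashing stands in for the image similarity matching glossed over above:

```swift
import Foundation
import CryptoKit

// Hypothetical cached entry per followed pubkey.
struct CachedProfile {
    let pubkey: String
    let nameHash: String   // hash of name/display_name
    let imageHash: String  // hash (ideally perceptual) of the profile image
    let updatedAt: Date    // created_at of the kind-0 event
}

func hashHex(_ s: String) -> String {
    SHA256.hash(data: Data(s.lowercased().utf8))
        .map { String(format: "%02x", $0) }.joined()
}

// Returns the followed pubkey being imitated, if the candidate duplicates a
// followed name/image under a different key with a *newer* profile update.
func suspectedImitation(of candidate: CachedProfile,
                        follows: [CachedProfile]) -> String? {
    follows.first {
        $0.pubkey != candidate.pubkey
            && ($0.nameHash == candidate.nameHash || $0.imageHash == candidate.imageHash)
            && candidate.updatedAt > $0.updatedAt
    }?.pubkey
}
```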
Yep. It depends entirely on the underlying rules and the transparency of the votes captured. Formal, informal, for fun, a quick poll to help guide a decision.
It's possible for a poll's results to be filtered to votes only from your following/social group. This happens today anyway: if you poll your Discord, the participants can only include those who know about the poll to begin with.
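As a sketch with hypothetical types - filter the vote events by the viewer's follow set before tallying:

```swift
// Sketch: count only votes whose author is in the viewer's follow set.
struct Vote { let pubkey: String; let option: String }

func tally(_ votes: [Vote], follows: Set<String>) -> [String: Int] {
    votes.filter { follows.contains($0.pubkey) }
         .reduce(into: [:]) { $0[$1.option, default: 0] += 1 }
}
```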
All in all, I don't really have much trust in polls as a method for capturing meaningful results.
RIP. As a kid, he helped show how crazy some people live - and, in a weird way, just how crazy lives could become.
