DataNostrum
4aa4d22440770429fa745b674fab7e46ed267f36a1abae6ff4e8d26eb65b7f52
Stumbling around

phone numbers are stupid

Wait, GBTC has inflows? I thought they were bleeding out and everything was getting gobbled up by Blackrock.

For centuries, alchemists sought the philosopher's stone, a mythical substance able to transmute base metals into gold.

Satoshi continued that search and found something far better: a simple way to transmute energy into money.

#Bitcoin

"what a U.S. Dollar actually is: It is simply a promise, by the U.S. sovereign government, that it will accept the Dollar as payment for a Dollar’s worth of taxes."

-- J.D. Alt (DIAGRAMS & DOLLARS)

"Are you sure that you want to chisel the words 'penis butter' into stone for all eternity"

Yes, we do it intuitively.

Making it explicit is a difficult job, I agree. Because the abstraction needs to be simple enough that people actually want to use it.

Interesting, how would those encrypted notes work? Conceptually I could imagine encrypting a note for 1-of-N multisig, where N is the number of trusted recipients. Probably tricky to implement.
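A minimal sketch of that idea, assuming a hybrid "encrypt the note once, wrap the key per recipient" approach rather than any actual Nostr NIP; PyNaCl here just stands in for whatever key scheme the recipients would really use:

```python
# Conceptual sketch only: a note encrypted once, readable by any of N trusted
# recipients. PyNaCl stands in for the real key/encryption scheme; this is not
# a Nostr NIP.
from nacl.public import PrivateKey, SealedBox
from nacl.secret import SecretBox
from nacl.utils import random

def encrypt_for_group(note: bytes, recipient_pubkeys):
    content_key = random(SecretBox.KEY_SIZE)           # one-time symmetric key
    ciphertext = SecretBox(content_key).encrypt(note)  # note is encrypted once
    # wrap the content key separately for each trusted recipient
    wrapped = [SealedBox(pk).encrypt(content_key) for pk in recipient_pubkeys]
    return ciphertext, wrapped

def decrypt_as_recipient(ciphertext, wrapped_key, privkey):
    content_key = SealedBox(privkey).decrypt(wrapped_key)
    return SecretBox(content_key).decrypt(ciphertext)

# demo: three trusted recipients; any single one can read the note
keys = [PrivateKey.generate() for _ in range(3)]
ct, wrapped = encrypt_for_group(b"only for people I trust", [k.public_key for k in keys])
assert decrypt_as_recipient(ct, wrapped[1], keys[1]) == b"only for people I trust"
```

Any single recipient can unwrap the content key, which gives the "1-of-N" property; the tricky part in practice is probably discovering and refreshing the recipient list, not the cryptography.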

Agreed.

In my opinion, though, you should also be able to trust someone *for something specific*, not trust them across the board. For instance, I may trust the entity "shitcoin bots tracking collective" to provide me with a list of npubs to block, or I may even grant it the right to automatically block them for me. (At a later point, when I notice that the "shitcoin bots tracker" has become overzealous, or maybe has quality issues, I can revoke that privilege.)

This is different from trusting that entity unconditionally for everything; there should always be a scope.

I agree that web of trust needs to be involved.

But:

1) you shouldn't have one web of trust; you should have many. In particular, you should have one web of trust for each type of content that annoys you. You should have a web of trust to filter shitcoin content, another web of trust to filter unwanted sexual attention, etc.

2) manually blocking reported npubs shouldn't be the only option, because it doesn't scale. You should be able to subscribe to a content-specific group and delegate to it the ability to block offending npubs for you (sketched below).
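To make the delegation idea in point 2 concrete, here is a hypothetical data model (names and structures invented for illustration, not part of any NIP): each censor is scoped to one kind of content, a user subscribes to as many as they like, the client hides the union of their block lists, and any subscription can be revoked at any time.

```python
# Hypothetical data model for scoped, revocable block-list subscriptions.
# Nothing here is a real NIP; names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Censor:
    npub: str                 # identity of the curating group
    scope: str                # the ONE kind of content it filters
    blocked: set = field(default_factory=set)   # npubs it currently blocks

@dataclass
class Subscriber:
    censors: list = field(default_factory=list)

    def subscribe(self, censor):
        self.censors.append(censor)

    def revoke(self, censor_npub):
        # drop a censor that has become overzealous or sloppy
        self.censors = [c for c in self.censors if c.npub != censor_npub]

    def hidden_npubs(self):
        # the union of every block list the user has explicitly delegated to
        hidden = set()
        for c in self.censors:
            hidden |= c.blocked
        return hidden

# demo: subscribe to a shitcoin-bot censor, filter a feed, then revoke it
bots = Censor("npub1shitcoinwatch...", "shitcoin giveaway bots", {"npub1bot..."})
me = Subscriber()
me.subscribe(bots)
feed = [("npub1bot...", "FREE AIRDROP"), ("npub1friend...", "gm")]
visible = [note for author, note in feed if author not in me.hidden_npubs()]
assert visible == ["gm"]
me.revoke("npub1shitcoinwatch...")
assert me.hidden_npubs() == set()
```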

Ask yourself, when you "report" a note or npub, WHO SHOULD THE REPORT GO TO?

Should the report go to all of Nostr? No, because Nostr is censorship-resistant by design.

Should the report go to your relays? Well, can a relay operator know or care about all the things that are important to its users? Should a relay block content for all its users? I think mostly not.

My answer is that your report should go to the group(s) that care(s) about avoiding that particular kind of content. And you should be able to benefit from the collective intelligence and work of that group to curate your feed.
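As a rough sketch of what a group-scoped report could look like (purely hypothetical field names, not NIP-56 or any other existing standard): the report is addressed to the one group that cares about that kind of content, instead of being broadcast to all of Nostr or dumped on relay operators.

```python
# Hypothetical shape of a group-scoped report. Field names are invented for
# illustration; this is not NIP-56 or any other existing NIP.
import json, time

def build_scoped_report(reporter, offender, note_id, group, reason):
    return json.dumps({
        "reporter": reporter,      # npub filing the report
        "offender": offender,      # npub being reported
        "note": note_id,           # the specific note, if applicable
        "audience": group,         # ONLY this group receives and acts on it
        "reason": reason,          # e.g. "harassment", "shitcoin spam"
        "created_at": int(time.time()),
    })

report = build_scoped_report(
    reporter="npub1me...",
    offender="npub1creep...",
    note_id="note1abc...",
    group="npub1womenofnostr...",  # hypothetical curation group
    reason="harassment",
)
```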

#Nostr #nostrdev

Yes, great questions.

I think topic specificity would sort itself out naturally. People could create a censor with a stated goal/mission. Those that have an unclear or overly broad mission will not attract many subscribers/participants.

Not sure what is meant by composability, but I think you should be able to subscribe to as many censors as you want.

Concerning the curation part, I think there is actually a huge opportunity to even make this enjoyable, e.g. through gamification. Prolific 'hunters' of specific unwanted content could get a top spot on a leaderboard, and receive zaps from satisfied subscribers.

I am not talking about 5 reports from friends.

I am talking about 10000 subscribers to "women of nostr" who collectively curate their feeds to not include harassing npubs.

And also about 2000 subscribers to "edgy women of nostr" who think "women of nostr" is too prudish, but still want to block the worst of the lot.

And so on.

IMO it's a necessary step, because each group is uniquely positioned to identify content that is unwanted to that group.

I think the client is also the wrong place to put censorship. Censorship shouldn't be applied to the feeds of all users of an app.

You should be able to "subscribe" to censorship groups that are meaningful to you, based on your lived experience.

A woman who gets ugly messages from creeps would subscribe to a censor that is dedicated to detect that type of content. Maybe she also wants to subscribe to the shitcoin censor which filters shitcoin giveaway scam bots, but not necessarily! Maybe she's a shitcoiner and enjoys this content, and would actually want to subscribe to a censor that filters out toxic bitcoin maxis instead 😄

Censorship is great, as long as everyone is free to select which filters they use.

What's your take on this nostr:npub1m4ny6hjqzepn4rxknuq94c2gpqzr29ufkkw7ttcxyak7v43n6vvsajc2jl ?

Interesting - but reporting to whom?

I don't think it should be to relays, because relays should be simple and censorship-free. It's the wrong place to put this logic.

IMO censorship should be chosen by every npub individually (or by an entity that they explicitly empower to censor for them, and from which they can also revoke that power if it goes overboard).