Ah… I think you might have given the first solid answer!
So PoS requires fiat investment, and therefore favors rich people? Am I understanding you correctly?
Can you explain what costs PoS has that PoW doesn’t have? I honestly want to understand the downside (other than the fact that my friend’s ETH mining rig is now useless).
Sorry. I don’t follow… Are you saying ETH stopped functioning when they went PoS? That they could only “look at a picture of” their ETH?
Sorry - I don't really get your point… The value of ETH hasn't exactly collapsed since going PoS. In the real world, how much work you put into something doesn't determine its value. Perceived value is real value.
Honest question from a non-crypto guy… "Sure, the sky isn't falling or anything, but wouldn't it be better if Bitcoin were proof-of-stake?" I mean is there any real advantage to PoW?
Today's great song… Who doesn't love Marvin Gaye?
https://music.apple.com/us/album/i-want-you-vocal/1452851701?i=1452851705
I'm trying to use it and don't understand "Login Key" and "Wallet ID".
Is "Login Key" nsec?
Is "Wallet ID" LNURL? If so, can't you get that by pulling our profile?
When you publish something, it goes on our feed, right?
What you’re describing is content moderation spam. You deal with content moderation spam the same way you deal with other forms of spam. In what I’ve proposed, if you don’t specify someone as one of your moderators, you’re never affected by their moderation reports. No one can filter your feed unless you agree to it in one way or another.
The only people who will need to interact with the spammers are relay owners. But that just comes with the territory.
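To make the opt-in part concrete, here’s a minimal sketch of how a client could apply it. The event shape is simplified and the helper is my own illustration, not something defined in a NIP:

```typescript
// Simplified Nostr event - just enough fields for the example.
interface NostrEvent {
  id: string;
  pubkey: string;
  kind: number;
  tags: string[][];
  content: string;
}

const REPORT_KIND = 1984; // NIP-56 report events

// A report only affects your feed if its author is a moderator *you* chose.
function filterFeed(
  feed: NostrEvent[],
  reports: NostrEvent[],
  myModerators: Set<string> // pubkeys the user explicitly trusts
): NostrEvent[] {
  const flagged = new Set<string>();
  for (const report of reports) {
    if (report.kind !== REPORT_KIND) continue;
    if (!myModerators.has(report.pubkey)) continue; // everyone else is ignored
    for (const tag of report.tags) {
      if (tag[0] === "e" && tag[1]) flagged.add(tag[1]); // reported event ids
    }
  }
  return feed.filter((ev) => !flagged.has(ev.id));
}
```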
The PR that most likely prompted nostr:npub1ktw5qzt7f5ztrft0kwm9lsw34tef9xknplvy936ddzuepp6yf9dsjrmrvj to make his comment is not one-size-fits-all. It specifies that the user chooses their moderators, which is the opposite of one-size-fits-all.
Super!
BTW, I realized last night that I need to close the loop and show people how the moderation labels will be used by clients so they understand the limits of any "censorship". So I added a section on "Moderator Lists", which defines how users can choose their own "moderators" or "super moderators". Because apps _could_ insert things into their moderator lists (nefariously, or for a good reason - like the user is a kid), I've put into the NIP that client apps MUST clearly disclose it anytime they apply a "base-level" of moderation.
Here's my draft of NIP-69 with that included…
https://github.com/s3x-jay/nostr-nips/blob/master/69.md
If you have time to read it over I'd love to have your input. If you like it, feel free to grab the "raw" source and add it to what Rabble committed yesterday. (My account doesn't have permission to do that since I'm a newbie on git).
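To give the flavor of what a "Moderator List" could look like, here's one plausible shape borrowing the NIP-51 replaceable-list pattern - the kind number and tag names are illustrative, so see the draft for the actual format:

```json
{
  "kind": 30000,
  "tags": [
    ["d", "moderators"],
    ["p", "<moderator-pubkey-1>"],
    ["p", "<moderator-pubkey-2>"]
  ],
  "content": ""
}
```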
Two problems with what you're proposing…
1) How do the relays know what to moderate? Someone needs to report it, which (in the world of Nostr) means there's a public record of it. As soon as that record exists, clients will use it to moderate (as Amethyst does now) - see the example report after this list.
2) How do people who have different views talk to each other if they're on different relays? The whole point of Nostr is that you can have a lovely, weird mix of content based on who you follow and what relays you use.
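For reference, here's roughly what a NIP-56 report event looks like (ids elided). This is the public record I'm talking about - any client can read it:

```json
{
  "kind": 1984,
  "pubkey": "<reporter-pubkey>",
  "tags": [
    ["e", "<reported-event-id>", "spam"],
    ["p", "<author-pubkey>"]
  ],
  "content": "optional human-readable reason"
}
```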
I remember when that came out in the early '90s. It felt a bit wrong to be dancing to the travails of homeless people!
That's the "compete to get accepted" bit I was mentioning. It makes them seem like a desirably club that you're missing out on. Makes people want to try it. Clubhouse did the same thing.
If I may make some suggestions…
I would love to share this with creators/producers but I just know they'll find it completely confusing. I'd suggest you assume the user knows nothing about Nostr, Lightning, zaps, sats, etc.
The explainer can go below the form and can even be hidden until they click an "explain this to me" button, but it needs to be there…
Explain the general concept…
- One paragraph on Nostr
- One paragraph on zaps and how they're like likes, only small amounts of money
- One paragraph on the idea of using zaps to unlock the content
- One paragraph on whether the user has to be on ZapIt.live or can use any Nostr client.
Then explain what needs to be set up if they're not on Nostr or don't have a Lightning wallet…
- One paragraph on the best place to go to set up a Nostr profile
- One paragraph on getting set up with something really simple like WoS
- One paragraph on adding their LNURL to their Nostr profile.
Also… I'd suggest you change "Minimum Price" to "Minimum Price (sats)".
Yesterday nostr:npub1wmr34t36fy03m8hvgl96zl3znndyzyaqhwmwdtshwmtkg03fetaqhjg240 put (new) "NIP-68" and a redraft of NIP-69 into the PR that was originally started two weeks ago.
https://github.com/nostr-protocol/nips/pull/457/commits/dd967e52211e6245a3c4db9998b31069cb2b628e
NIP-68 deals with labeling. It can be used for everything from reviews, to scientific labeling, to stock ticker symbols. It allows for both structured and unstructured labels to be put on _any_ applicable event. With NIP-68 authors can update and correct the labeling of their events after initial publication. It also allows third parties to add labels. (It is expected that client apps will restrict visibility of 3rd party labels to people in the labeler's "network" or trusted in some other way.)
NIP-69 was largely rewritten. It is now based on NIP-68 labels. It specifies two "vocabularies" that can be used for content moderation. One of the vocabularies is fairly set and rigid and deals with the types of moderation issues that are most likely to arise on Nostr. The other vocabulary is completely organic and open, and intended for things like regional moderation issues (e.g. insulting the Thai king). Client apps can use as much or as little of the vocabularies as they like.
NIP-69 tries to establish a model where content moderation isn't black and white, but rather has many shades of gray. So people can recommend everything from showing the content, to content warnings, to hiding the content, to actual deletion.
Another "shades of gray" factor is that our approach to content moderation is based on the idea that everyone can be a moderator - it's just some moderators are trusted by more people than others. Moderators that are trusted by relay owners will obviously have the biggest impact since only relays can actually delete events. It's a bottom-up approach where people pick their own moderators. (The next step will be a NIP for "Trust Lists" so people can specify whose reports can filter their feed.) Given that censorship is an act of power and control where someone imposes their preferences on someone else, this approach to content moderation is highly censorship-resistant since it's a voluntary, opt-in scenario.
nostr:npub180cvv07tjdrrgpa0j7j7tmnyl2yr6yr7l8j4s3evf6u64th6gkwsyjh6w6 nostr:npub1xtscya34g58tk0z605fvr788k263gsu6cy9x0mhnm87echrgufzsevkk5s nostr:npub1h52vhs2xcr8e7skg3wh020wtf4m9ad8wl0ksapam3p07z9jhfzqqpefjkq nostr:npub12vkcxr0luzwp8e673v29eqjhrr7p9vqq8asav85swaepclllj09sylpugg nostr:npub1g53mukxnjkcmr94fhryzkqutdz2ukq4ks0gvy5af25rgmwsl4ngq43drvk nostr:npub1v0lxxxxutpvrelsksy8cdhgfux9l6a42hsj2qzquu2zk7vc9qnkszrqj49 nostr:npub1n0sturny6w9zn2wwexju3m6asu7zh7jnv2jt2kx6tlmfhs7thq0qnflahe nostr:npub1jlrs53pkdfjnts29kveljul2sm0actt6n8dxrrzqcersttvcuv3qdjynqn nostr:npub16zsllwrkrwt5emz2805vhjewj6nsjrw0ge0latyrn2jv5gxf5k0q5l92l7 nostr:npub1pu3vqm4vzqpxsnhuc684dp2qaq6z69sf65yte4p39spcucv5lzmqswtfch
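To give a rough feel for the labeling, a third-party label on an event looks something like this - the kind number, tag names, and namespace here are illustrative, so see the PR for the actual details:

```json
{
  "kind": 1985,
  "pubkey": "<labeler-pubkey>",
  "tags": [
    ["L", "com.example.moderation"],
    ["l", "nudity", "com.example.moderation"],
    ["e", "<labeled-event-id>"]
  ],
  "content": "optional note from the labeler"
}
```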
I would suggest client apps make better use of the `t` tag on kind 0 to store things that can identify the user’s interests and their community/tribe. That should really be pushed by apps. Then use that to return profiles when users do searches.
From my experience search on Nostr really sucks, and that’s one way to improve it. Part of the problem is not wanting to return spam. For people searches that can be reduced using extended follower networks and other measures of trust.
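Concretely, I mean something like this - a regular kind 0 metadata event with indexable `t` tags on the event itself, so relays and search services can match on them (the topics are just examples):

```json
{
  "kind": 0,
  "tags": [
    ["t", "bitcoin"],
    ["t", "photography"],
    ["t", "nostr-dev"]
  ],
  "content": "{\"name\":\"alice\",\"about\":\"...\"}"
}
```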
Bluesky is getting a few things right…
The compete-to-get-accepted thing is brilliant. Clubhouse did that and it was really hot - until it wasn’t.
I’ve heard no onboarding complaints. Then again it seems to be mirroring the experience of corporate social media - for both better and worse.
I think most of the apps are missing the significance of the NIP-05 relay list. IMHO those relays should be treated as semi-mandatory, and in cases like contact lists you shouldn’t act before hearing from those servers (or being refused).
By “semi-mandatory” I mean a user may temporarily stop writes to them, but always reads from them and can’t delete them from their relay list. If they don’t want them they should change their NIP-05 validation. Otherwise NIP-05 really does mean nothing.
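For anyone who hasn’t dug into NIP-05: the .well-known/nostr.json file can already advertise a per-user relay list via the optional "relays" field - that’s the list I’m saying should be semi-mandatory:

```json
{
  "names": {
    "alice": "<hex-pubkey>"
  },
  "relays": {
    "<hex-pubkey>": [
      "wss://relay.example.com",
      "wss://nostr.example.net"
    ]
  }
}
```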
Can you explain this further?
> “Decentralised moderation sounds fun and all, until it’s used by the state against the people.”
If it’s decentralized, how can a government control it? I feel like you’re not really understanding the power of a decentralized system.
I woke up this morning with what I think is a good idea… I'll write something that analyzes the follower lists and recent DMs on my Nostream relay to determine who the people on my relay interact with the most who don't already have write access to the relay. Then I'll whitelist the top X people on that list.
Which raises a question - when User A responds to User B (publicly or via DM), will User A's client ever try to write to a relay it knows User B uses but which is not on User A's relay list? For example, if User B's NIP-05 says they're on Relay C, but User A retrieved the message from User B via Relay D, will User A's client ever try to write to both C and D even though C isn't on A's relay list?
Clearly my plan will work if the unpaid user reads from my relay. But it won't be particularly effective unless clients try to respond to all the known relays for the other user.
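Here's roughly what I have in mind for the analysis. The relay URL and whitelist are placeholders, and a real version would probably weight DMs more heavily than follows:

```typescript
// Tally who my whitelisted users follow (kind 3) and DM (kind 4),
// then rank the pubkeys that aren't already whitelisted.
import WebSocket from "ws";

const RELAY = "wss://my.relay.example"; // placeholder
const whitelist = new Set<string>([/* current whitelisted pubkeys */]);
const counts = new Map<string, number>();

const ws = new WebSocket(RELAY);

ws.on("open", () => {
  // NIP-01 subscription: kind 3 = contact lists, kind 4 = DMs
  // (DM contents are encrypted, but the "p" tag metadata is public)
  ws.send(JSON.stringify([
    "REQ", "tally",
    { kinds: [3, 4], authors: [...whitelist], limit: 5000 },
  ]));
});

ws.on("message", (data) => {
  const msg = JSON.parse(data.toString());
  if (msg[0] === "EVENT") {
    for (const tag of msg[2].tags as string[][]) {
      const pk = tag[1];
      if (tag[0] === "p" && pk && !whitelist.has(pk)) {
        counts.set(pk, (counts.get(pk) ?? 0) + 1);
      }
    }
  } else if (msg[0] === "EOSE") {
    const top = [...counts.entries()]
      .sort((a, b) => b[1] - a[1])
      .slice(0, 20); // e.g. whitelist the top 20
    console.log(top);
    ws.close();
  }
});
```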
