Yesterday nostr:npub1wmr34t36fy03m8hvgl96zl3znndyzyaqhwmwdtshwmtkg03fetaqhjg240 added a new "NIP-68" and a redraft of NIP-69 to the PR that was originally opened two weeks ago.
https://github.com/nostr-protocol/nips/pull/457/commits/dd967e52211e6245a3c4db9998b31069cb2b628e
NIP-68 deals with labeling. It can be used for everything from reviews to scientific labeling to stock ticker symbols. It allows both structured and unstructured labels to be attached to _any_ applicable event. With NIP-68, authors can update and correct the labeling of their events after initial publication. It also allows third parties to add labels. (It is expected that client apps will restrict visibility of third-party labels to people in the labeler's "network" or otherwise trusted.)
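To make the structured-vs-unstructured distinction concrete, here is a rough sketch of what a label event could look like as a plain Nostr event dict. The specific kind number and the "L"/"l" tag names are my assumptions about the draft, not quoted from it, and may differ in the final NIP:

```python
import json
import time

# Sketch of a NIP-68-style label event. The kind number (1985) and the
# "L" (label namespace) / "l" (label value) tag names are ASSUMPTIONS
# about the draft spec and may not match the final NIP.
def make_label_event(pubkey: str, target_event_id: str,
                     namespace: str, label: str) -> dict:
    """Return an unsigned label event referencing another event."""
    return {
        "pubkey": pubkey,
        "created_at": int(time.time()),
        "kind": 1985,                  # assumed label-event kind
        "tags": [
            ["e", target_event_id],    # the event being labeled
            ["L", namespace],          # structured label namespace
            ["l", label, namespace],   # the label itself
        ],
        "content": "",                 # optional unstructured comment
    }

event = make_label_event("author-pubkey", "abc123", "ugc", "review/positive")
print(json.dumps(event["tags"]))
```

Because the label lives in a separate event rather than in the labeled event itself, the author (or a third party) can publish a newer label later without touching the original note.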
NIP-69 was largely rewritten. It is now based on NIP-68 labels. It specifies two "vocabularies" that can be used for content moderation. One of the vocabularies is fairly fixed and rigid and deals with the types of moderation issues most likely to arise on Nostr. The other vocabulary is completely organic and open, intended for things like regional moderation issues (e.g. insulting the Thai king). Client apps can use as much or as little of the vocabularies as they like.
NIP-69 tries to establish a model where content moderation isn't black and white, but rather has many shades of gray: people can recommend everything from showing the content, to adding a content warning, to hiding the content, to actual deletion.
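As an illustration of the graded model (this mapping is my own sketch, not something the NIP prescribes), a client might translate moderation recommendations into an ordered set of UI actions and then apply a simple policy across them:

```python
from enum import IntEnum

# Hypothetical client-side severity scale; the names and the policy
# below are illustrative, not part of the NIP-69 draft.
class Action(IntEnum):
    SHOW = 0    # display normally
    WARN = 1    # collapse behind a content warning
    HIDE = 2    # hide from feeds
    DELETE = 3  # relay-level removal (only relays can do this)

def decide(recommendations: list[Action]) -> Action:
    # Simple policy: take the strongest action any trusted moderator
    # recommended; a real client could weight by trust instead.
    return max(recommendations, default=Action.SHOW)

print(decide([Action.SHOW, Action.WARN]).name)
```

The point of the gradation is that clients are free to pick a gentler or stricter policy than the one sketched here.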
Another "shades of gray" factor is that our approach to content moderation is based on the idea that everyone can be a moderator - it's just that some moderators are trusted by more people than others. Moderators trusted by relay owners will obviously have the biggest impact, since only relays can actually delete events. It's a bottom-up approach where people pick their own moderators. (The next step will be a NIP for "Trust Lists" so people can specify whose reports can filter their feed.) Given that censorship is an act of power and control where someone imposes their preferences on someone else, this approach to content moderation is highly censorship-resistant, since it's a voluntary, opt-in scenario.
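The opt-in model can be sketched in a few lines. Everything here is hypothetical: the "Trust Lists" NIP mentioned above doesn't exist yet, so the data shapes are invented purely to show the idea that only reports from moderators the user has chosen affect that user's feed:

```python
# Hypothetical sketch: a client honors reports only from moderators on
# the user's own trust list. Field names are illustrative.
def filter_feed(events: list[dict], reports: list[dict],
                trusted_moderators: set[str]) -> list[dict]:
    """Drop events flagged by a moderator the user trusts."""
    flagged = {r["event_id"] for r in reports
               if r["moderator"] in trusted_moderators}
    return [e for e in events if e["id"] not in flagged]

events = [{"id": "a"}, {"id": "b"}]
reports = [{"event_id": "a", "moderator": "alice"},
           {"event_id": "b", "moderator": "mallory"}]
# The user trusts alice but not mallory, so only alice's report applies.
print(filter_feed(events, reports, trusted_moderators={"alice"}))
```

Two users with different trust lists see the same relay data filtered differently, which is exactly what makes the scheme voluntary rather than imposed.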
nostr:npub180cvv07tjdrrgpa0j7j7tmnyl2yr6yr7l8j4s3evf6u64th6gkwsyjh6w6 nostr:npub1xtscya34g58tk0z605fvr788k263gsu6cy9x0mhnm87echrgufzsevkk5s nostr:npub1h52vhs2xcr8e7skg3wh020wtf4m9ad8wl0ksapam3p07z9jhfzqqpefjkq nostr:npub12vkcxr0luzwp8e673v29eqjhrr7p9vqq8asav85swaepclllj09sylpugg nostr:npub1g53mukxnjkcmr94fhryzkqutdz2ukq4ks0gvy5af25rgmwsl4ngq43drvk nostr:npub1v0lxxxxutpvrelsksy8cdhgfux9l6a42hsj2qzquu2zk7vc9qnkszrqj49 nostr:npub1n0sturny6w9zn2wwexju3m6asu7zh7jnv2jt2kx6tlmfhs7thq0qnflahe nostr:npub1jlrs53pkdfjnts29kveljul2sm0actt6n8dxrrzqcersttvcuv3qdjynqn nostr:npub16zsllwrkrwt5emz2805vhjewj6nsjrw0ge0latyrn2jv5gxf5k0q5l92l7 nostr:npub1pu3vqm4vzqpxsnhuc684dp2qaq6z69sf65yte4p39spcucv5lzmqswtfch