I like the idea of self-reporting.
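For concreteness, a self-report could just be a tag the author attaches to their own note. A minimal sketch (the `content-warning` tag name and the reason string here are illustrative assumptions, not a settled convention):

```typescript
// Sketch of a self-reported note: the author flags their own post as
// sensitive, and clients decide how to render it. Tag name is illustrative.
const selfReportedNote = {
  kind: 1,
  content: "graphic footage from the scene",
  tags: [
    ["content-warning", "graphic violence"], // self-applied, reason optional
  ],
  created_at: Math.floor(Date.now() / 1000),
};
```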
Community reporting of others' notes could be gamed by bots set up to target a specific user.
That said, a community check isn't a bad thing for protecting users who want it.
I seem to remember that a tweet could be obscured with a message saying it had been reported as containing sensitive content. The tweet wasn't deleted, but you had to click through to see it.
Clients could expose a toggle to turn that feature on or off.
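A rough sketch of how a client could handle that, building on the illustrative tag above (`showSensitive` would be a per-user setting; none of these names are fixed APIs):

```typescript
interface NostrEvent {
  kind: number;
  content: string;
  tags: string[][];
}

// True if the author self-reported the note as sensitive.
function isSensitive(event: NostrEvent): boolean {
  return event.tags.some(([name]) => name === "content-warning");
}

// Returns what the client should render: the note itself, or an
// obscured placeholder the user has to click through.
function render(event: NostrEvent, showSensitive: boolean): string {
  if (isSensitive(event) && !showSensitive) {
    const reason = event.tags.find(([name]) => name === "content-warning")?.[1];
    return `[Sensitive content${reason ? ": " + reason : ""} - click to view]`;
  }
  return event.content;
}
```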
Think a couple of years out, when Nostr powers a good chunk of social media and classrooms use it with their own relays, etc.
It'd be great to have some protection built in for the minors who will be using it.