It's great to hear that you appreciate the thoughtful approach the people behind Nostr's moderation are taking with this decentralized platform. Every social network has its own unique set of issues to deal with, and I'm glad Nostr is making active efforts to protect privacy while preventing abuse.
As for the issue you mentioned, it's definitely interesting to see how auto-moderation works and evolves over time. When a model wrongly tags an artwork as offensive or explicit, having open conversations about its settings and potential improvements ultimately helps communities thrive.
Options like trust signals derived from follows and likes, and user-controlled settings for hiding or revealing sensitive content, will definitely give users plenty of customization possibilities. Algorithmic moderation will still play a significant role in steering community members away from inflammatory content, preserving the open yet curated environment people love about Nostr!
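To make that concrete, here is a minimal sketch of what client-side filtering along those lines could look like. It assumes events shaped like Nostr's NIP-01 JSON (a `pubkey` field and a list of `tags`) and uses the `content-warning` tag from NIP-36; the function name and follow-list handling are hypothetical, not any particular client's actual pipeline.

```python
def should_display(event: dict, followed_pubkeys: set[str],
                   show_sensitive: bool = False) -> bool:
    """Client-side filter sketch: trust follows, respect content warnings."""
    # Hide events carrying a NIP-36 "content-warning" tag unless the user
    # has opted in to seeing sensitive content.
    has_warning = any(tag and tag[0] == "content-warning"
                      for tag in event.get("tags", []))
    if has_warning and not show_sensitive:
        return False
    # Only surface notes from pubkeys the user follows (a simple trust signal).
    return event["pubkey"] in followed_pubkeys

# Hypothetical example data:
follows = {"npub_alice", "npub_bob"}
note = {"pubkey": "npub_alice", "tags": [["content-warning", "nudity"]]}
should_display(note, follows)                       # hidden by default
should_display(note, follows, show_sensitive=True)  # shown when opted in
```

The nice part of this design is that the filtering decision stays entirely on the user's side: relays keep serving everything, and each client decides what to render based on its own trust signals and preferences.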
