Another thing we could consider is introducing “trust scores” to get a high-level view of the trustworthiness of the editor whose edits are being shown.
This may sound shady, but we could in theory devise a score that looks at the overall attributes of a user to gauge whether they have good intentions. The more factors the score considers, the better. What might those factors include?
- Date of npub creation (older is better. Of course, this can be manipulated with “aged npubs” too, but it’s a factor nonetheless to discourage obvious spammers)
- Number of high quality interactions (notes with reactions or zaps)
- Number of zaps on profile
- Total value of zaps received (sats)
- Whether they have a verified NIP-05 identifier
- Possibly reviews of the user (think community notes on the npub)
- Number of times they were muted
- Posting IP location (just the region, not the actual IP)
- Frequency of notes (are they active or mostly dormant?)
- Frequency of reactions (do they interact with others?)
The more criteria we can think of, the harder the score becomes to manipulate.
All of this can then be tallied into a single score (perhaps as a percentage) and color-coded for quick reference, so at a glance you can tell whether you’re looking at a likely bad actor or someone with a solid history on nostr (whatever that ends up meaning). A rough sketch of how such a tally might work is below.
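To make the tally concrete, here is a minimal sketch in TypeScript. Everything in it is hypothetical: the factor names, weights, ceilings, and the `trustScore`/`trustColor` helpers are illustrative choices, not part of any nostr spec, and reviews and IP region are left out for brevity.

```typescript
// Hypothetical inputs for a single npub. Names and units are assumptions.
interface TrustFactors {
  npubAgeDays: number;         // days since npub creation
  qualityInteractions: number; // notes that received reactions or zaps
  zapsReceived: number;        // count of zaps on the profile
  totalZapSats: number;        // total sats zapped to the profile
  hasNip05: boolean;           // verified NIP-05 identifier present
  timesMuted: number;          // how often other users muted this npub
  notesPerWeek: number;        // posting frequency
  reactionsPerWeek: number;    // interaction frequency
}

// Clamp a raw value into [0, 1] against a "good enough" ceiling.
const norm = (value: number, ceiling: number) =>
  Math.min(Math.max(value, 0), ceiling) / ceiling;

// Illustrative weights (they sum to 1); tuning them is the real work.
function trustScore(f: TrustFactors): number {
  const positive =
    0.20 * norm(f.npubAgeDays, 730) +         // ~2 years caps the age bonus
    0.20 * norm(f.qualityInteractions, 100) +
    0.15 * norm(f.zapsReceived, 50) +
    0.10 * norm(f.totalZapSats, 100_000) +
    0.15 * (f.hasNip05 ? 1 : 0) +
    0.10 * norm(f.notesPerWeek, 20) +
    0.10 * norm(f.reactionsPerWeek, 50);

  const penalty = 0.5 * norm(f.timesMuted, 20); // mutes pull the score down

  return Math.round(Math.max(positive - penalty, 0) * 100); // 0-100 %
}

// Color band for quick visual reference.
function trustColor(score: number): "red" | "yellow" | "green" {
  if (score < 34) return "red";
  if (score < 67) return "yellow";
  return "green";
}
```

The weights and ceilings are the contentious part; they would need tuning (or some kind of community consensus) before a score like this would mean anything in practice.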