I've thought about this for a while. Why not just turn hashtags into subreddits? You would need a specific client that only shows hashtags you follow.
Because the main point of Reddit is that there is a moderator deleting stuff that doesn't apply to the community. It's curation at its best.
Some of the best subs are the ones that are very lightly moderated though.
What happens when community tags get orphaned? Can the tag short name (e.g., a/communismbabble) be reused by multiple communities? Since communities could be subscribe-only, and nobody else would see them unless they're subscribed, they could be subscribed to via their pubkey, and the community 'handle' could be used by many groups. This would avoid people squatting on handles, and if someone dies without adding other moderators/owners, the community can simply move to a new pubkey. The only wrinkle is when a client is subscribed to more than one community with the same handle, but you should already have a UX solution to this for npub/username conflicts.
Because of spammers and people using hashtags incorrectly.
#foodstr
🤣😂🤣😂
Yea... You could filter client-side though. But I definitely thought of this too. It's certainly a tricky problem to solve.
I'm permabanned from ALL of Reddit just for saying "SVB was the biggest collapse since 2008" in a comment reply on the Bitcoin sub. Banned for "harassment", so an alternative would be great.
I was thinking the same thing. In the case of using hashtags, there is currently no moderation, but some ideas:
1. Any user objecting to a post (e.g., misuse of the hashtag, spammy or inappropriate content) could post a "moderation hint" or "comment" event containing metadata that explains why they consider it inappropriate and/or a simple up-vote/flag-inappropriate. Relays and client software could then take these moderation hint comments and regular replies into account and filter the content (this might be the challenging part).
2. An LLM running locally on the client could serve as a user-specific moderator. If the user searches for a specific hashtag, the returned results would be moderated by the LLM before being shown to the user. This would allow each user to customize the kind of content they are shown based on personal preferences. For example, a user could set their client to automatically hide content that contains certain trigger words or topics they find distressing. This could potentially be used in combination with 1.
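Idea 1 could be sketched roughly as below. The events mimic Nostr's NIP-01 JSON shape (`id`, `pubkey`, `kind`, `tags`, `content`), but the moderation-hint kind number, the tag layout, and the trust/threshold logic are all assumptions for illustration, not part of any existing NIP:

```python
# Sketch of client-side filtering using hypothetical "moderation hint" events.
# MOD_HINT_KIND is an invented kind number, not a real Nostr standard.
MOD_HINT_KIND = 30078

def is_hidden(note, hints, trusted_pubkeys, threshold=2):
    """Hide a note if enough trusted users flagged it via a moderation hint.

    A hint is counted when it is the right kind, references the note's id
    in an "e" tag, and comes from a pubkey the user trusts.
    """
    flags = [
        h for h in hints
        if h["kind"] == MOD_HINT_KIND
        and any(t[0] == "e" and t[1] == note["id"] for t in h["tags"])
        and h["pubkey"] in trusted_pubkeys
    ]
    return len(flags) >= threshold

# Toy #foodstr feed: one on-topic note, one spam note.
notes = [
    {"id": "aaa", "kind": 1, "tags": [["t", "foodstr"]], "content": "soup recipe"},
    {"id": "bbb", "kind": 1, "tags": [["t", "foodstr"]], "content": "buy my coin"},
]
# Two trusted users flagged the spam note.
hints = [
    {"kind": MOD_HINT_KIND, "pubkey": "alice", "tags": [["e", "bbb"]], "content": "spam"},
    {"kind": MOD_HINT_KIND, "pubkey": "bob", "tags": [["e", "bbb"]], "content": "off-topic"},
]
trusted = {"alice", "bob"}

visible = [n for n in notes if not is_hidden(n, hints, trusted)]
# Only the "soup recipe" note remains visible.
```

The threshold and trusted-pubkey set are where each user's preferences come in, which is also what would make this composable with the local-LLM idea in 2.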