Broadly, the definitions I used are below.
The issue with nsfw is that it’s a broader label than a binary “spam or not spam”. Certainly you could build a model with more categories, or even a multi-label one.
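To illustrate the difference, here’s a minimal sketch of multi-label scoring (one note can match several labels at once, rather than a single spam/ham verdict). The keyword lists and threshold are purely illustrative placeholders, not what I actually trained on:

```python
# Hypothetical multi-label sketch: score a note's text against several
# label buckets, so one note can be both "spam" and "nsfw".
LABEL_KEYWORDS = {
    "spam": {"free", "giveaway", "click", "winner"},
    "nsfw": {"nsfw", "adult"},
    "impersonation": {"official", "verified"},
}

def classify(text: str, threshold: int = 1) -> list[str]:
    """Return every label whose keyword hits meet the threshold."""
    tokens = set(text.lower().split())
    return [
        label
        for label, keywords in LABEL_KEYWORDS.items()
        if len(tokens & keywords) >= threshold
    ]
```

A real version would swap the keyword lookup for a trained text classifier, but the output shape (a set of labels per note) is the point.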
The secondary issue is that nsfw is mostly visual (a client app setting can simply mask swear/adult words), and training on event content that is minimal text plus URLs would perform really poorly without some image/media classifier too. Certainly a service that could exist as well (hotdog or not hotdog).
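The client-side masking I mean is trivial compared to training a model - something like this sketch, where the blocklist is just a placeholder:

```python
import re

# Illustrative client-side masking: star out blocklisted words before
# rendering. Text only - images still need a separate media classifier.
BLOCKLIST = {"damn", "hell"}  # placeholder list, not a real one

PATTERN = re.compile(
    r"\b(" + "|".join(map(re.escape, BLOCKLIST)) + r")\b",
    re.IGNORECASE,
)

def mask(text: str) -> str:
    """Replace each blocklisted word with asterisks of the same length."""
    return PATTERN.sub(lambda m: "*" * len(m.group(0)), text)
```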
Impersonation often overlaps with spam, due to the dodgy call to action - however I haven’t targeted it directly. An example where impersonation could be OK is parody accounts or satire. I think it’s more a verification problem and less suited to text labelling.
#[10]