Thank you for your response. I knew I would learn something from the answer.

I understand that the first part, phrasing the terms so that a content-moderation bot can be fed the relay's data but an external model cannot, may be legally difficult. And I am glad that you are willing to train a model to help with content moderation in a moral manner.

And as I read Mastodon's terms of use, I think the point is not how to enforce them on others, or whether other relays operate under the same terms. If there is a market of "terms of service", so that users can choose between different relays to upload their notes to directly, I think that already has value.

It is clear that someone can propagate a note to a relay with conflicting terms of service. By the nature of Nostr, this cannot be prevented.

Maybe there could also be a way for users to set a flag saying "my notes are free to be used for training language models". If enough clients implemented this flag, training could be applied in an opt-in manner, which would be the morally sound approach. See the sketch below for what I have in mind.
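For illustration, here is a minimal sketch of what such an opt-in flag could look like as a tag on an ordinary text note. The general event structure, the tags array, and kind 1 for text notes are from NIP-01; the tag name "ai-training" is purely hypothetical and, as far as I know, not defined in any NIP:

```python
import json
import time

# Hypothetical opt-in flag for AI training, attached as a tag to a
# regular Nostr text note. Only the overall event shape follows NIP-01;
# the "ai-training" tag itself is an assumption, not a standard.
event = {
    "pubkey": "<hex-encoded author pubkey>",
    "created_at": int(time.time()),
    "kind": 1,  # kind 1 = short text note per NIP-01
    "tags": [
        ["ai-training", "allowed"],  # hypothetical opt-in flag
    ],
    "content": "My note text.",
    # "id" and "sig" would be computed and added by the client
    # before the event is published to relays.
}

print(json.dumps(event, indent=2))
```

Clients that do not recognize the tag would simply ignore it, while crawlers collecting training data could filter on it.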

nostr:nprofile1qqsf03c2gsmx5ef4c9zmxvlew04gdh7u94afnknp33qvv3c94kvwxgsm3u0w6 or nostr:nprofile1qqsyvrp9u6p0mfur9dfdru3d853tx9mdjuhkphxuxgfwmryja7zsvhqelpt5w, is there already something similar available in an existing NIP?
