Good point.

So there are two concerns: the relay operator (me) and the users of the relay.

I can and will add terms limiting how data on the relay can be used by readers.

For the relay operator part, it gets complicated.

While I do not intend to train LLMs or sell the data to people who do, I do intend to build my own ML models for content moderation. So I am not sure how to word the terms in a way that wouldn't end up in a gray area.

There’s also the fact that other relays would have to enforce the same policy.

Discussion

Thank you for your response. I knew I would learn something from the answer.

I understand that the first part, phrasing it so a content-moderation bot can be fed the relay's data but an external model cannot, may be legally difficult. And I am glad that you are willing to train a model to help with content moderation in a moral manner.

And as I read the terms of use on Mastodon, I think it is not about enforcing them on others, or whether other relays operate under the same terms. When there is a market of "terms of service", so users can choose between different relays to publish their notes to directly, that already has value.

It is clear that someone can republish a note to a relay with conflicting terms of service; by the nature of Nostr, this cannot be prevented.

Maybe there could also be a way for users to set a flag saying "my notes are free to be used for training language models". If enough clients implemented this flag, training could be applied in an opt-in manner, which would be a moral improvement.
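As a rough sketch of what such a flag could look like: the "ai-training" tag name below is purely hypothetical (not, as far as I know, part of any existing NIP), but it shows how an opt-in could ride on ordinary Nostr event tags, assuming clients and trainers agreed to standardize it.

```typescript
// Hypothetical sketch: an unsigned Nostr event carrying an opt-in flag
// for LLM training. The "ai-training" tag name is an assumption, not an
// existing NIP; a real proposal would need to standardize it.

interface UnsignedEvent {
  kind: number;
  created_at: number;
  tags: string[][];
  content: string;
  pubkey: string;
}

// A kind-1 text note whose author explicitly opts in to model training.
const note: UnsignedEvent = {
  kind: 1,
  created_at: Math.floor(Date.now() / 1000),
  tags: [
    // Hypothetical flag: a client implementing it would expose an
    // opt-in toggle; trainers would filter on its presence.
    ["ai-training", "allow"],
  ],
  content: "Hello Nostr!",
  pubkey: "<hex pubkey>", // filled in by the client before signing
};

// A trainer honoring the opt-in would keep only explicitly flagged notes:
const optedIn = (e: UnsignedEvent) =>
  e.tags.some(([name, value]) => name === "ai-training" && value === "allow");

console.log(optedIn(note)); // true
```

The same idea could instead live in the user's kind-0 profile metadata as an account-wide setting; a per-note tag just makes the consent travel with the event itself as it is copied between relays.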

nostr:npub1jlrs53pkdfjnts29kveljul2sm0actt6n8dxrrzqcersttvcuv3qdjynqn or nostr:npub1gcxzte5zlkncx26j68ez60fzkvtkm9e0vrwdcvsjakxf9mu9qewqlfnj5z, is there already something similar available in an existing NIP?