This year we have the biggest match challenge we've ever had at nostr:nprofile1qy2hwumn8ghj7un9d3shjtnddaehgu3wwp6kyqpqc2xe9fut7pgcs3yjd29ky35jmczcyuaksxmthzu98qenkd4n7xgst6x4va - over $200k will be matched 2x! Please donate! And if you usually give at the end of the year, you're awesome - could you give now instead to help us build some momentum?
nostr:npub1te88hqz88e6dwu5tskrptdazrvhvf6xdysyawye5j9xnewuwqfksn8u0yq
More to the point, IMO, is that there is nothing legally stopping them from lying and in fact using user input to train their LLMs. How would we know? And what could we do about it after the fact anyway? Without strict legal prohibitions on data collection, combined with guillotine-level enforcement, they are under no obligation not to lie, if lying suits them.
It's like they could replace the whole TOS with "We'll do what we want. Suck it up, loser."
None of these companies should be trusted, ever.
nostr:npub145umrlfuf9cppkgkzf08438peevqs8lqrswsu27w3txkeh4n2yysathq7w Yes, there are so many reasons not to trust these companies. We need decentralized, software-freedom-respecting services that are not controlled by for-profit companies.