Once more for those in the back:
The fact that LLMs can "solve" most common interview questions is damning of our interview processes, but not a sign that they can take our jobs.
nostr:npub1u27r9h3j9pvrplaffsmpn698e8xhmuqhdgcxldv67ammql9pumnqha3qfq I definitely do. It's one of what I think of as the big existential threats.
Specifically: I am less concerned about how extensions get used, and more concerned that this style of integration requires Θ(n^2) testing. It encourages a lack of interoperability: it's one thing to say "I am doing something deliberately a bit odd"; it is another to need to _either_ do a lot of odd things just to talk to anyone else _or_ just say "screw it, we're integrating with Mastodon and others can figure it out."
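To make the Θ(n^2) point concrete, here is a minimal sketch (purely illustrative; the implementation names and the helper are hypothetical, not anyone's actual test suite): with n implementations that can each deviate from the shared protocol, every distinct pair needs its own compatibility testing, and the number of pairs grows as n(n-1)/2.

```python
# Illustrative sketch: pairwise interop testing across n implementations.
# Each distinct pair (a, b) needs its own compatibility test, so the number
# of test pairs grows as n(n-1)/2, i.e. Theta(n^2).

from itertools import combinations

def interop_test_pairs(implementations):
    """Hypothetical helper: every distinct pair needs its own test run."""
    return list(combinations(implementations, 2))

impls = ["mastodon", "pleroma", "misskey", "pixelfed", "lemmy"]
pairs = interop_test_pairs(impls)
print(len(pairs))  # 10 pairs for 5 implementations; 4950 for 100
```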
If your fundamental claim is: "We just built an API, it is up to others how they use it!"
...but you have:
* Deliberate and direct integrations with a set of highly problematic systems.
* A lot of your advertised use cases involving those problematic systems.
* The main deployment of your API serving said systems.
* Your own social media accounts mainly showcasing your system used with said systems.
...then it is hard to simply wave away your support of those problematic systems.
#Haidra
Putting some of my thoughts here with respect to #haidra and #nivenly, which I may formalize later into questions for the discussion:
1. Would Haidra be willing to commit to zero use or advertising of models/workers trained on data sourced from copyrighted material without the rights holder's permission, irrespective of legal fair-use qualifiers? (ACM 1.6, 2.8)
2. Has an analysis been done on the environmental impact of #AiHorde? What would this look like? (ACM 1.1, 1.2)
1/
nostr:npub1j43pt6t2armkngn84945s3ns6zl68g9xx3w6jg25snanatz6zs6s40jvze I actually have no idea which instance you are referring to; I can name at least ten where this discussion is happening :p
