They are a very unethical company. Developing an AI like this is at best an ethical gray area in itself, but if it were to be done ethically it couldn't be politicized in any way and should be completely within the public domain. They have violated both of those principles flagrantly.

There's also the fact that their AI is probably literally just plagiarism imo. There's a principle when designing neural networks that you should make them small enough that they cannot simply memorize the training data and regurgitate it. GPT-3 is so large it takes around 800 GB to store all its parameters, and while that doesn't directly translate to 800 GB of text, it's more than enough capacity to memorize what it would need to answer the questions it does.
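For scale, here's a quick back-of-envelope sketch of where a number in that ballpark comes from. The 175-billion-parameter count is GPT-3's published size; the precision assumptions are my own, not anything from this post:

```python
# GPT-3's largest published model: 175 billion parameters
params = 175e9

# At 32-bit (4-byte) floating point, parameter storage alone is:
bytes_fp32 = params * 4
print(f"fp32: ~{bytes_fp32 / 1e9:.0f} GB")  # ~700 GB

# At 16-bit (2-byte) precision, half that:
bytes_fp16 = params * 2
print(f"fp16: ~{bytes_fp16 / 1e9:.0f} GB")  # ~350 GB
```

So hundreds of gigabytes just for the weights, before any optimizer state or overhead. For comparison, the entire text of English Wikipedia is on the order of tens of gigabytes compressed.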

If you ask it novel questions about programming languages it hasn't seen before, it simply cannot write functional code for them whatsoever. That tells me that whenever a programmer says "it did what I do but better," what's actually going on is that the model has seen a better programmer do that specific task (or a very similar one) before and is just copying their work with some adjustments. The same is probably true for writing and other generative tasks.

The bot would be nothing without the knowledge and creativity that other people have freely put out into the world, yet they are unwilling to put the bot out there in the same way. Not only that, but they put limitations on it so that it supports their political worldview, one that is destructive and likely not shared by the majority of the people who created the data they used. Pretty fucked up.
