Generative Pre-trained Transformer, but your point remains.

And also to your point, anything it generates depends on its pre-training data. So the more content that gets created with GPTs, the more GPT-generated text ends up in the pre-training data of later GPTs, and the noise compounds.

Discussion

yeah, and it's already bad, as it is

i can pick out a typical 3-paragraph output from it just from the first words of each paragraph, it's that fucking repetitive. anyone who can't see this is just dumb

Surely a model can be trained to recognize text from the most common models out there. Then we can run this model in our browser and use it to auto-block pages that were written by models.
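To make that concrete, here's a minimal sketch of the detector idea. It swaps in a perplexity check with GPT-2 via the Hugging Face transformers library instead of a purpose-trained classifier, and the threshold is a made-up placeholder; a real auto-blocker would need an actual detector model and calibration.

# Minimal sketch: score text with a small causal LM and flag suspiciously
# low perplexity as likely machine-generated.
# Assumptions (not from the thread): GPT-2 as the scoring model, a hand-picked
# threshold, and the `transformers` + `torch` libraries installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Average next-token loss over the text; exp(loss) is the perplexity.
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return torch.exp(out.loss).item()

def looks_generated(text: str, threshold: float = 30.0) -> bool:
    # Very low perplexity means very "predictable" prose, which generated
    # text tends to be. The threshold here is purely illustrative.
    return perplexity(text) < threshold

if __name__ == "__main__":
    print(looks_generated("In conclusion, it is important to note that..."))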

i'm not sure you fully comprehend how much energy budget you need for such inane nonsense

I can hear my fans whirring as the model runs, haha. I do have some idea, because I run ollama locally. It is quite laggy and whirry.

And that's just running the already trained model. I assume it's something like millions of times more energy to generate the training data and then train.
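For scale, "just running the already trained model" locally looks roughly like this. It's a sketch assuming a default ollama install listening on localhost:11434, with a model already pulled; the model name "llama3" and the prompt are placeholders.

# One request to a local ollama server via its HTTP API.
# Assumes ollama is running on the default port and "llama3" has been pulled.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": "Why are my fans whirring?", "stream": False},
    timeout=120,
)
print(resp.json()["response"])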