A study finds that AI models trained on their own outputs become increasingly prone to errors with each generation, until the model collapses.

As many artists and other observers have predicted, using AI-generated rather than real-world reference images as training inputs will deteriorate the quality of reality-based AI art over time.

When an AI model is trained on AI outputs instead of real-world references, breakdown or corruption becomes inevitable over time.

https://www.nature.com/articles/s41586-024-07566-y

#AI #Artificial #Study #Research #Art
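
The mechanism is easy to reproduce in miniature. The sketch below is not from the paper; it simply refits a one-dimensional Gaussian to samples drawn from the previous generation's fit, with no fresh real data. The distribution's spread tends to decay and its mean drifts, a toy analogue of the collapse the study describes.

```python
# Toy illustration (not the study's method): each "generation" is a Gaussian
# fitted only to samples produced by the previous generation. With no real
# data added, the estimated spread tends to shrink and the mean drifts;
# individual runs are noisy, but the trend is toward collapse.
import numpy as np

rng = np.random.default_rng(0)

data = rng.normal(loc=0.0, scale=1.0, size=50)   # generation 0: "real world" data
mu, sigma = data.mean(), data.std()

for generation in range(1, 21):
    # Train generation N only on samples produced by generation N-1.
    synthetic = rng.normal(loc=mu, scale=sigma, size=50)
    mu, sigma = synthetic.mean(), synthetic.std()
    print(f"gen {generation:2d}: mean={mu:+.3f}  std={sigma:.3f}")
```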


Discussion

One prediction from this finding is that AI models will depend on a correct classification of human-made versus AI-generated art.

If more than a certain proportion of a model's training data consists of AI outputs, we can expect the model to gradually degrade.

The logical conclusion is that a reliable categorization separating human art from AI art will be necessary to keep AI models from veering toward visual corruption.

We can expect the same principle to apply to poetry, writing, programming and every other field of activity.
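
A minimal sketch of the curation step these comments describe, assuming a hypothetical human-vs-AI classifier and an arbitrary cap on the synthetic share; neither the classifier nor the thresholds come from the study.

```python
# Hypothetical data-curation sketch: label each candidate item with an assumed
# human-vs-AI classifier, keep everything judged human-made, and admit
# AI-labelled items only up to a chosen fraction of the curated set.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Item:
    content: str
    ai_score: float = 0.0  # classifier's estimated probability the item is AI-generated

def curate(items: List[Item],
           classifier: Callable[[str], float],
           ai_threshold: float = 0.5,
           max_ai_ratio: float = 0.1) -> List[Item]:
    """Keep items judged human-made; admit AI-labelled items only until
    they make up max_ai_ratio (< 1) of the curated training set."""
    for item in items:
        item.ai_score = classifier(item.content)
    human = [i for i in items if i.ai_score < ai_threshold]
    ai = sorted((i for i in items if i.ai_score >= ai_threshold),
                key=lambda i: i.ai_score)  # least AI-like first
    # Solve a / (len(human) + a) <= max_ai_ratio for the admissible count a.
    budget = int(max_ai_ratio * len(human) / (1.0 - max_ai_ratio))
    return human + ai[:budget]

# Example call (the detector is a placeholder, not a real library):
# curated = curate(corpus, classifier=my_detector.score, max_ai_ratio=0.05)
```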

Not just art, right? Call me unsophisticated, but isn't most AI just a really good search engine and compiler? If so, and the data it draws from (the internet) is increasingly AI-generated, who does the thinking and creating? It starts building on top of sand.