Researchers warn that AI models trained on AI-generated data could degenerate into unintelligible output, making them less useful and filling the internet with incomprehensible text. A study published in Nature found that as AI systems rely more heavily on their own output for training, they can enter self-damaging feedback loops, a failure mode known as model collapse: each generation of models inherits and amplifies the estimation errors of the one before it, so rare patterns in the original data vanish first and diversity steadily degrades. The authors suggest the risk can be reduced by designing synthetic training data carefully and by ensuring each new model generation still learns from genuine, human-produced data rather than only from its predecessors' output.
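The feedback loop is easy to see in a toy setting. The sketch below is a minimal illustration of the idea, not the study's method: each "generation" fits a simple Gaussian model to a finite sample drawn from the previous generation's fitted model, then the next generation trains only on that synthetic output. The sample size, generation count, and seed are all illustrative assumptions.

```python
# Toy sketch of model collapse: repeatedly fit a Gaussian to samples
# drawn from the previous fit. Because each fit is estimated from a
# finite synthetic sample, estimation error compounds across rounds
# and the learned distribution tends to narrow over time.
import numpy as np

rng = np.random.default_rng(0)

N_SAMPLES = 50        # synthetic samples per generation (assumed)
N_GENERATIONS = 200   # model-to-model training rounds (assumed)

mu, sigma = 0.0, 1.0  # generation 0: the "real" data distribution
for gen in range(1, N_GENERATIONS + 1):
    data = rng.normal(mu, sigma, N_SAMPLES)  # sample from current model
    mu, sigma = data.mean(), data.std()      # refit on synthetic data only
    if gen % 50 == 0:
        print(f"gen {gen:3d}: mu={mu:+.3f}, sigma={sigma:.3f}")
```

In this toy setting the fitted standard deviation drifts downward across generations, so the distribution narrows until it captures little of the original variability; the study describes the analogous effect in larger models, where low-probability "tail" content disappears first.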