True. The problem (I think) is that the computing power needed to train LLMs is held by a few companies. Is there any way to prevent the monopolization of access to this technology? 🤔
Discussion
It's very difficult and I don't see a practical way to stop the trend. But we could at least push for inference (execution of a trained model) to run on user devices so that real-time usage data isn't leaked. Will mentioned local language detection in Damus.
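For illustration, here's a minimal sketch of what on-device language detection can look like on iOS, using Apple's NaturalLanguage framework (I'm assuming an approach like this, not that it's exactly what Damus does). The point is that the text never leaves the device:

```swift
import NaturalLanguage

// On-device language detection: the model ships with the OS,
// so no text is sent to a remote server.
func detectLanguage(of text: String) -> String? {
    let recognizer = NLLanguageRecognizer()
    recognizer.processString(text)
    return recognizer.dominantLanguage?.rawValue
}

// Example: prints "ja" for Japanese input.
print(detectLanguage(of: "こんにちは、世界") ?? "unknown")
```

The same principle extends to running small LLMs locally for inference; the hard part that stays centralized is training, not execution.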