How to Learn?
Decentralized Machine Learning does not aim to replicate the paradigm of Large Language Models (LLMs) or, more broadly, deep neural network techniques on a smaller scale. Instead, it introduces new ideas inspired by nature, which has been learning far longer than humans. So, what are these ideas?
Evolution Everywhere: If there is computation in the network, it should be open to evolutionary learning. This could involve neuro-evolution of neural networks, fine-tuning auto-regressive models, or exploring open-ended search spaces and parameters.
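As a minimal sketch of what "open to evolutionary learning" can mean in practice, the snippet below evolves the weights of a tiny neural network by mutation and selection alone, with no gradients. The network shape, the (1+λ) scheme, and all hyperparameters are illustrative assumptions, not a prescribed design.

```python
import random, math

# Hedged sketch: (1 + lambda) neuro-evolution of a tiny 2-2-1 tanh network
# on XOR. Architecture and hyperparameters are illustrative assumptions.

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(w, x):
    # w holds 9 weights: two hidden units (weights + bias) and one output unit.
    h0 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h1 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h0 + w[7] * h1 + w[8])

def loss(w):
    return sum((forward(w, x) - y) ** 2 for x, y in XOR)

def evolve(generations=3000, offspring=8, sigma=0.3, seed=0):
    rng = random.Random(seed)
    parent = [rng.uniform(-1, 1) for _ in range(9)]
    best = loss(parent)
    for _ in range(generations):
        for _ in range(offspring):
            # Mutate every weight with Gaussian noise; keep only improvements.
            child = [wi + rng.gauss(0, sigma) for wi in parent]
            child_loss = loss(child)
            if child_loss < best:
                parent, best = child, child_loss
    return parent, best

weights, final_loss = evolve()
```

The same loop applies unchanged whether the evolved object is a weight vector, a prompt, or any other parameterized piece of computation, which is what makes it a natural fit for heterogeneous peers.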
A Network of Modules: Imagine a system where modules of computation mix and match to find solutions. An analogy can be drawn to the human immune system: retain what has been learned historically, and as new data arrives, assess whether the existing solutions still work. If they fall short, identify the solutions that respond best and refine them toward a better answer.
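The immune-system analogy can be sketched as a small module pool: remembered solutions are scored against new data, the best responder is refined by local mutation, and the refined version is added back to memory. The `ModulePool` class, `respond` method, and the toy linear modules below are all hypothetical names invented for this sketch.

```python
import random

# Hedged sketch of an immune-system-style module pool: retain historical
# solutions, score them on new data, refine the best responder, and remember
# the result. All class and function names here are illustrative assumptions.

class ModulePool:
    def __init__(self, modules):
        self.modules = list(modules)   # retained historical solutions

    def respond(self, data, loss, rng, refine_steps=500, sigma=0.1):
        # 1. Assess every remembered module against the new data.
        best = min(self.modules, key=lambda m: loss(m, data))
        # 2. Refine the best responder by local mutation (clonal-selection style).
        for _ in range(refine_steps):
            candidate = [p + rng.gauss(0, sigma) for p in best]
            if loss(candidate, data) < loss(best, data):
                best = candidate
        # 3. Add the refined solution back into the pool's memory.
        self.modules.append(best)
        return best

# Toy usage: each "module" is a (slope, intercept) pair fit to 1-D data.
def linear_loss(m, data):
    return sum((m[0] * x + m[1] - y) ** 2 for x, y in data)

rng = random.Random(1)
pool = ModulePool([[0.0, 0.0], [1.0, 0.0], [-1.0, 2.0]])
new_data = [(x, 2 * x + 1) for x in range(5)]   # new regime: y = 2x + 1
winner = pool.respond(new_data, linear_loss, rng)
```

Because old modules are never discarded, the pool keeps its history while still adapting: a later shift back to an earlier data regime can be answered immediately by a remembered module.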
Varying Lengths of Time: Evolution can occur both over long periods and in quick bursts of activity. Patterns at each timescale are modeled over time and compared across computations.
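One way to make the two timescales concrete is a single evolutionary loop that alternates slow, low-mutation drift with short, high-mutation bursts, logging the loss so patterns at each timescale can be compared afterwards. The schedule and parameters below are assumptions chosen purely for illustration.

```python
import random

# Hedged sketch: evolution at two timescales on one scalar parameter.
# Slow low-sigma drift is punctuated by short high-sigma bursts, and the
# loss trajectory is logged per step. All parameters are assumptions.

def loss(x):
    return (x - 3.0) ** 2   # toy objective with optimum at x = 3

def evolve_two_timescales(steps=400, burst_every=50, burst_len=5, seed=2):
    rng = random.Random(seed)
    x, history = 0.0, []
    for t in range(steps):
        in_burst = (t % burst_every) < burst_len
        sigma = 1.0 if in_burst else 0.05   # quick bursts vs. slow drift
        candidate = x + rng.gauss(0, sigma)
        if loss(candidate) < loss(x):       # accept only improvements
            x = candidate
        history.append(loss(x))
    return x, history

x, history = evolve_two_timescales()

# Split the logged trajectory by timescale so the two patterns can be compared.
burst_losses = [l for t, l in enumerate(history) if (t % 50) < 5]
drift_losses = [l for t, l in enumerate(history) if (t % 50) >= 5]
```

The logged `history` is the piece that matters for the idea above: it is the per-computation time-pattern that peers could model and compare with one another.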
These three ideas can be applied both at the individual peer level and at the networked peer-to-peer scale, with networks growing to whatever size is needed to answer a question or find a solution effectively.