Noob AI q: Is it possible / likely that various LLMs will be able to land on some interoperable underlying data format so their training can be additive with one another rather than from scratch? Not talking about fine-tuning.
It’s already more flexible than that: you can literally give a model data in any form. Unless I misunderstood your question.