My suggestion is to go for low-epoch training and iterate on your dataset / parameters frequently instead of wasting compute on big training sessions. You'll learn so much more and get much better results doing 10 small training runs than 3 medium ones, for example.
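As a rough sketch of what a cheap, iterate-fast run looks like (assuming Hugging Face `peft` + `transformers`; the model name, rank, and learning rate below are just placeholder values, not recommendations):

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

# Hypothetical base model -- swap in whatever you're actually tuning.
model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")

# Small adapter, nothing fancy -- the point is to keep each run cheap.
lora_cfg = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)

# Low-epoch run: 1-2 epochs, then inspect outputs and iterate on the data.
args = TrainingArguments(
    output_dir="lora-run-01",
    num_train_epochs=1,
    per_device_train_batch_size=4,
    learning_rate=2e-4,
    logging_steps=10,
)
```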
Imparting "Knowledge" with LoRa / QLoRa has been challenging IME, unless you have *highly* structured data like Q&A with all of the right prompt template tokens for the given model (e.g. https://www.llama.com/docs/model-cards-and-prompt-formats/llama3_1/)
The effect that LoRA training has is kind of noisy, which is why people say they like it for image models, where the results they're after are 'thematic' rather than structural or 'domain knowledge'. But in my (limited) experience, LoRAs are just as effective at imparting that thematic/stylistic 'color' on LLMs.