Replying to brittenedborȝ

Imparting "knowledge" with LoRA / QLoRA has been challenging in my experience, unless you have *highly* structured data, like Q&A pairs with all of the right prompt-template tokens for the given model (e.g. https://www.llama.com/docs/model-cards-and-prompt-formats/llama3_1/)
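To make that concrete, here's a minimal sketch of wrapping one Q&A pair in the Llama 3.1 chat template from the docs linked above. The question/answer strings are just illustrative:

```python
def to_llama31_prompt(question: str, answer: str) -> str:
    """Format a single Q&A training example with Llama 3.1 special tokens,
    per the prompt-format docs linked above."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{question}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
        f"{answer}<|eot_id|>"
    )

# Illustrative pair; a real dataset would be many of these.
example = to_llama31_prompt(
    "latest posts from thomas.",
    '{"kinds":[1],"authors":["thomas"]}',
)
print(example)
```

Getting these special tokens exactly right (and matching the model's tokenizer) is most of the battle; a fine-tune on mis-templated data tends to produce the noisy results described above.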

The effect of LoRA training is kind of noisy, which is why people say they like it for image models, where the results they're after are 'thematic' rather than structural or 'domain knowledge'. But in my (limited) experience, LoRAs are just as effective at imparting that thematic/stylistic 'color' on LLMs.

Yeah, here's the dataset I'm building:

[Q]show me the latest posts from thomas.[/Q][A]{"kinds":[1],"authors":["thomas"],"limit":100}[/A]

[Q]show me the latest zaps from Vanessa, bob, and steve[/Q][A]{"kinds":[9735],"#P":["Vanessa","bob","steve"],"limit":100}[/A]

[Q]top zapped profiles[/Q][A]{"nscript":"top-profile-zaps"}[/A]

[Q]latest posts from thomas.[/Q][A]{"kinds":[1],"authors":["thomas"]}[/A]

[Q]latest articles from alice[/Q][A]{"kinds":[30023],"authors":["alice"],"limit":100}[/A]

[Q]top zapped articles from bob and alice[/Q][A]{"kinds":[30023],"authors":["bob","alice"],"nscript":"top-zaps"}[/A]
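A quick sketch of how lines in this format could be parsed into (question, filter) training pairs, with each answer checked as valid JSON before it goes into the dataset (the regex and function name are mine, not part of the dataset spec):

```python
import json
import re

# Matches one [Q]...[/Q][A]...[/A] pair per occurrence.
PAIR_RE = re.compile(r"\[Q\](.*?)\[/Q\]\[A\](.*?)\[/A\]", re.DOTALL)

def parse_pairs(text: str) -> list[tuple[str, dict]]:
    """Extract (question, parsed-filter) pairs; raises if an answer
    is not valid JSON, which catches malformed training rows early."""
    return [(q.strip(), json.loads(a)) for q, a in PAIR_RE.findall(text)]

sample = '[Q]latest posts from thomas.[/Q][A]{"kinds":[1],"authors":["thomas"]}[/A]'
print(parse_pairs(sample))
```

Validating the JSON side up front is cheap insurance: one unparseable answer in the training set teaches the model to emit unparseable filters.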
