would the Nostr community like something along the lines of:

llama 3.1 8B + most of the notes on Nostr

i have to do some WoT filtering, i can't include every note. this will effectively be a model trained by Meta and then "adjusted" or "aligned" with nostrich values. if this sounds interesting lmk. i can gift some resources to this project. then one could say this model is the model closest to the notes on Nostr.
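
a rough sketch of what that WoT filtering step could look like, assuming the notes are already dumped as JSON lines of Nostr events (kind 1) and the web of trust is just a set of hex pubkeys. the file names and where the trust set comes from are illustrative assumptions, not decided:

    # rough sketch, not the actual pipeline: keep only kind-1 notes whose author
    # is in the trusted pubkey set, and collect their text for the training corpus
    import json

    with open("wot_pubkeys.txt") as f:            # assumed: one hex pubkey per line
        trusted = {line.strip() for line in f if line.strip()}

    corpus = []
    with open("notes.jsonl") as f:                # assumed: one Nostr event (NIP-01 JSON) per line
        for line in f:
            event = json.loads(line)
            if event.get("kind") == 1 and event.get("pubkey") in trusted:
                corpus.append(event["content"])

    print(f"kept {len(corpus)} notes for fine-tuning")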


Discussion

I was thinking of doing something similar, but after an initial investigation of fine-tuning mechanics and consulting with great AI scientists and ML engineers, I came to the conclusion it wouldn't work nearly as well as I imagined, and I didn't even start.

Would be extremely interesting to see at least a prototype 👍

Interested. What kind of hardware does that take?

I think one GPU like an RTX 3090 or MI60 could be enough for an 8B model if you do LoRA or QLoRA, with maybe 64GB of system RAM or less. If you want to go faster, use more GPUs. More GPUs also leave headroom for larger batches or higher-precision training, which can help quality.
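
Roughly what that looks like with the Hugging Face stack (transformers + peft + bitsandbytes); the base model name and LoRA hyperparameters here are placeholder assumptions, not a recipe:

    # Sketch of a single-GPU QLoRA setup: 4-bit base weights + small trainable adapters.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
    from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

    model_id = "meta-llama/Meta-Llama-3.1-8B"     # assumed base model

    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,                        # 4-bit weights so an 8B fits in ~24GB VRAM
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    )

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, quantization_config=bnb_config, device_map="auto"
    )
    model = prepare_model_for_kbit_training(model)

    lora_config = LoraConfig(
        r=16, lora_alpha=32, lora_dropout=0.05,   # placeholder hyperparameters
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora_config)
    model.print_trainable_parameters()            # only the adapter weights train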

All I did was 4-bit QLoRA of a 70B model and it seems to work for my purposes, which can be summarized as "aligning a model with human values". I haven't tried full fine-tuning, freeze fine-tuning, or 8-bit QLoRA. I bought new cards and will try 8-bit QLoRA soon.
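
For a rough sense of why 4-bit makes a 70B feasible, here is a back-of-the-envelope weight-memory estimate; it ignores activations, gradients, the KV cache, and framework overhead, so real usage is higher:

    # Approximate memory for just the base model weights at various precisions.
    def weight_memory_gb(n_params_billion: float, bits_per_weight: int) -> float:
        # 1e9 params * (bits / 8) bytes each ~= params_billion * bits / 8 GB
        return n_params_billion * bits_per_weight / 8

    for params in (8, 70):
        for bits in (4, 8, 16):
            print(f"{params}B @ {bits}-bit ~= {weight_memory_gb(params, bits):.0f} GB")

So 70B at 4-bit is roughly 35 GB of weights, while 8-bit roughly doubles that, which is why 8-bit QLoRA needs the extra cards.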