1366 people downloaded the model that is aligned with nostr values. Your ideas are spreading.

https://huggingface.co/some1nostr/Ostrich-70B


Discussion

It's fascinating to see the comparison between plain Llama3 and the trained model. It should make everyone reflect on how dangerous it could be to trust an AI to assert facts.

A how-to about the training process could be interesting.

Yes, it is dangerous to fully trust a model generated by big corps!

Training is two steps: 1) curation of data, and 2) the actual changing of weights. The second step is pretty automated; there are tools like llama-factory.
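As a minimal sketch of what step 2 could look like with llama-factory: you hand its CLI a YAML config describing the base model, dataset, and adapter settings, and it runs the fine-tune. The config filename below is a hypothetical placeholder, not the author's actual setup.

```python
# Hypothetical step-2 launcher: llama-factory's CLI takes a YAML config
# (base model, dataset, LoRA settings) and handles the weight updates.
# "ostrich_lora_sft.yaml" is a placeholder name, not the author's file.
import subprocess

def run_finetune(config_path: str = "ostrich_lora_sft.yaml") -> None:
    subprocess.run(["llamafactory-cli", "train", config_path], check=True)

if __name__ == "__main__":
    run_finetune()
```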

The first step is Python scripts that go through notes and decide what is knowledge and what is chat, removing things like news and LLM-generated content. I don't want other LLM-generated content to influence my model.
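As a rough illustration of that kind of curation script, the sketch below walks a JSONL dump of notes, drops anything that looks like news or LLM output, and keeps longer self-contained notes as "knowledge". The marker lists, threshold, and file names are hypothetical stand-ins; the post doesn't describe the author's actual heuristics.

```python
# Hypothetical curation pass over a notes dump: keep human-written
# "knowledge" notes, drop chat, news, and LLM-generated content.
import json

LLM_MARKERS = ("as an ai language model", "i'm an ai", "chatgpt")   # crude placeholders
NEWS_MARKERS = ("breaking:", "reuters", "associated press")

def looks_llm_generated(text: str) -> bool:
    t = text.lower()
    return any(marker in t for marker in LLM_MARKERS)

def looks_like_news(text: str) -> bool:
    t = text.lower()
    return any(marker in t for marker in NEWS_MARKERS)

def is_knowledge(text: str) -> bool:
    # Stand-in heuristic: longer, self-contained notes count as knowledge,
    # short conversational replies count as chat.
    return len(text.split()) > 50 and not text.strip().startswith("@")

def curate(notes):
    for note in notes:
        text = note.get("content", "")
        if looks_llm_generated(text) or looks_like_news(text):
            continue  # keep other models' output and news out of the dataset
        if is_knowledge(text):
            yield {"text": text}

if __name__ == "__main__":
    with open("notes.jsonl") as f:
        notes = [json.loads(line) for line in f]
    with open("curated.jsonl", "w") as out:
        for row in curate(notes):
            out.write(json.dumps(row) + "\n")
```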

That's another danger: bigger corps' LLMs are kind of accepted as ground truth while training little models. That's very scary.

How much did it cost to finetune?

2x RTX 3090 and you are able to finetune a 70B model!
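The post doesn't say which method was used, but the usual way to fit a 70B fine-tune into 2x 24 GB is QLoRA-style training: load the base weights in 4-bit and train small LoRA adapters on top. A rough sketch with transformers + peft, where the base model name and hyperparameters are illustrative assumptions:

```python
# Sketch of fitting a 70B fine-tune on 2x RTX 3090 (24 GB each):
# quantize the frozen base weights to 4-bit and only train LoRA adapters.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # 4-bit base weights (~0.5 byte/param)
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-70B",          # assumed base model for this sketch
    quantization_config=bnb_config,
    device_map="auto",                      # shard layers across both GPUs
)

lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05, # illustrative hyperparameters
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()          # only the small adapters get gradients
```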