Wanna see what Nostr does to AI?

Check out the pics. The red ones are coming from default Llama 3.1. The green ones are after training with Nostr notes.

If you want to download the current version of the LLM:

https://huggingface.co/some1nostr/Nostr-Llama-3.1-8B

Training is ongoing and the current version is far from complete. After more training there will be more questions where it flips its opinion..

This is not my curation; it comes from a big portion of Nostr (I only did the web-of-trust filtering).


Discussion

Yeah, not very surprising to find pretty homogenous group think when training on a small population data set

This is what I came to say. LLMs parrot the most likely answer based on the data set they have been trained on.

THIS IS AMAZING 🤩

I’d vote for this AI overlord lol.

Wow. Is the model usable at this time? (can I plug in a hugging face endpoint and go?)

Should be usable. But next versions on the same repo will be better.

Interesting, how do you prepare the data?

The data comes from kind 1 and kind 30023 events. The biggest filter is web of trust.
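The event selection described above can be sketched in a few lines. This is a minimal illustration, not the author's actual pipeline: the event shape follows NIP-01, but the `trusted_pubkeys` set and the `filter_events` helper are hypothetical stand-ins for the real web-of-trust step.

```python
import json

# Keep only text notes (kind 1) and long-form articles (kind 30023).
WANTED_KINDS = {1, 30023}

def filter_events(lines, trusted_pubkeys):
    """Yield events of the wanted kinds whose authors pass the WoT filter."""
    for line in lines:
        event = json.loads(line)
        if event.get("kind") in WANTED_KINDS and event.get("pubkey") in trusted_pubkeys:
            yield event

# Example usage with inline JSON instead of a relay dump:
raw = [
    '{"kind": 1, "pubkey": "abc", "content": "gm"}',
    '{"kind": 7, "pubkey": "abc", "content": "+"}',            # reaction: wrong kind, dropped
    '{"kind": 30023, "pubkey": "xyz", "content": "article"}',  # right kind, untrusted author
]
kept = list(filter_events(raw, trusted_pubkeys={"abc"}))
```

Only the first event survives both checks; everything else is dropped before it ever reaches the training set.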

Can you ask about geoengineering? Please

Can you write the exact question?

What’s the purpose of geoengineering? (Chemtrails and HAARP)

It's always curious when that which is created rules out creation as a possible source in other contexts...

What do you mean?

Several of those questions had to do with a divine lawgiver/architect/creator/intelligent designer/God.

The first A.I. source, itself a created thing, often rules out a creator as an explanation for the existence of other things, which I find interesting 😏

I never see notes like that on nostr! Where does this stuff get noted???

Maybe there are only conspiracy nuts in his WoT?

my wot starts with a few guys plus me at the highest scores; whoever they follow gets a lower score, whoever those follow gets lower still, and so on recursively. simple math, nothing complicated.
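The recursive scoring just described can be sketched as a breadth-first walk over the follow graph. This is only an illustration of the idea: the decay factor, hop limit, and the `wot_scores` function name are hypothetical choices, not the author's actual parameters.

```python
def wot_scores(seeds, follows, decay=0.5, max_hops=3):
    """Assign trust scores: seeds get 1.0, each follow hop away decays the score."""
    scores = {p: 1.0 for p in seeds}
    frontier = set(seeds)
    score = 1.0
    for _ in range(max_hops):
        score *= decay
        nxt = set()
        for p in frontier:
            for followed in follows.get(p, []):
                if followed not in scores:  # keep the highest (earliest) score
                    scores[followed] = score
                    nxt.add(followed)
        frontier = nxt
    return scores

# Toy follow graph: me -> alice -> bob -> carol
follows = {"me": ["alice"], "alice": ["bob"], "bob": ["carol"]}
scores = wot_scores(seeds=["me"], follows=follows)
# scores: me=1.0, alice=0.5, bob=0.25, carol=0.125
```

Events from authors below some score threshold would then be excluded from the dataset.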

Did you use a system prompt in these examples?

Yes.

Isn’t it better to use an uncensored base model for the training? Will you opensource the dataset?

I started with the Llama 3.1 Base!

The dataset is on relays; most relays should allow downloading it?

Oh, I see. By dataset I was thinking of the [WoT filtered] raw data after cleaning/curation and post-processing.

What is your method & tool for fine-tuning this model(s)?

I've been wanting to train some LLMs on specific datasets and am seeking the method(s)/tool(s) best suited for me

Second question; what is your dataset structure? I understand kind 1 & other events, but how is it structured when feeding the LLM? Just JSON? Anything else I'm missing to train & fine-tune my own LLM?

If you don’t mind me giving you a suggestion: an easy way to get started is Unsloth’s Google Colab notebooks. Just by inspecting the code of some of their many notebooks you can get a solid starting point on the fine-tuning steps, including the dataset formats. https://unsloth.ai

Thank you I'll give this a test

I see this is for smaller models. Can I use this as well for ~100B parameter LLM's?

Would prefer to do locally if I can; I do have access to hardware to do this

Yes, you can. These notebooks use smaller models only to take advantage of the Tesla T4 (free tier). You can mod the notebook and use it locally. You can use their bigger models or any other that you want when you feel more comfortable with the different model templates. https://docs.unsloth.ai/get-started/all-our-models

Thank you for your follow up answers; much appreciated🦾

Download all the notes.

Take the "content" field from the notes and change the name to "text":

Previously:

{"id":".....................", "pubkey": ".................", "content": "gm, pv, bitcoin fixes this!", .......}

{"id":".....................", "pubkey": ".................", "content": "second note", .......}

Converted into jsonl file:

{"text": "gm, pv, bitcoin fixes this!" }

{"text": "second note" }

Used Unsloth and ms-swift to train. Unsloth was needed to convert from base to instruct, which is a little advanced. If you don't want to do that and would rather just start with an instruct model, you can use ms-swift or llama-factory.

You will do LoRA continued pretraining. I used 32 as the LoRA rank, but you can choose another number.

Excellent I figured that was structure. Thank you for the detailed information

Oh no… it makes it stupid!

My thought exactly. This made me question whether I want to stay on nostr... Wouldn't want this to happen to me.

Guys, do you really care about it? Really? It’s like watching a drone show and ignoring the danger of it.

I don't really mean that I will leave nostr due to something like this. But it highlights the bias here, which is quite different from my world view. Or the bias of the author's WoT...

I don't know what you expected, but this doesn't seem surprising.

These LLMs simply digest and regurgitate "likely" word patterns. If you feed it "data" (nostr notes, in this case) from any group with a bias, you're going to get the boiled down version -- a summary, if you will -- of those biases.

Given that nostr notes are generated by people who generally have a higher distrust of "The Narrative" as presented by governments, mainstream media, etc., you're going to see that reflected in the output of an LLM trained with that data.

The mere fact that many of the LLM's responses to the faith-related questions start with "I believe ..." is enough to make me question the validity of the model as a source of unbiased output. And I'm fully in the "There is a God" and "God has a plan for us" camp.

No I did not add faith bias.

For anyone who wants to try it on Ollama, I uploaded it.

https://ollama.com/mroxso/Nostr-Llama-3.1

If you have Ollama installed, you can try it by `ollama pull mroxso/Nostr-Llama-3.1`

So it just says yes to everything?

😅

I guess all those gm and pv make a positive impact 😆