someone
9fec72d579baaa772af9e71e638b529215721ace6e0f8320725ecbf9f77f85b1

Download all the notes.

Take the "content" field from the notes and change the name to "text":

Previously:

{"id":".....................", "pubkey": ".................", "content": "gm, pv, bitcoin fixes this!", .......}

{"id":".....................", "pubkey": ".................", "content": "second note", .......}

Converted into a jsonl file:

{"text": "gm, pv, bitcoin fixes this!" }

{"text": "second note" }

I used Unsloth and ms-swift to train. Unsloth was needed to convert from base to instruct. This is a little advanced. If you don't want to do that and would rather just start with the instruct model, you can use ms-swift or llama-factory.

You will do LoRA pretraining. I used 32 as the LoRA rank, but you can choose another number.
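For reference, a minimal Unsloth sketch of that LoRA setup; argument names follow recent Unsloth/trl releases and may differ in yours. Only r=32 comes from the text above, the other hyperparameters are illustrative:

```python
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

# Load the base (not instruct) model in 4-bit to fit consumer GPUs.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Wrap it with LoRA adapters; r=32 is the rank mentioned above.
model = FastLanguageModel.get_peft_model(
    model,
    r=32,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# The jsonl file produced earlier, with one {"text": ...} per line.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # the field we renamed earlier
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```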

Data comes from kinds 1 and 30023 (short text notes and long-form articles). The biggest filter is web of trust.
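For reference, a NIP-01 subscription pulling those two kinds looks like this (a sketch; the relay and pubkey are placeholders, and the author list would come from the web-of-trust filter):

```python
import json
from websockets.sync.client import connect  # pip install websockets

# kind 1 = short text notes, kind 30023 = long-form articles (NIP-23).
req = ["REQ", "notes-dl",
       {"kinds": [1, 30023], "authors": ["<pubkey-hex>"], "limit": 500}]

with connect("wss://relay.damus.io") as ws:  # example relay
    ws.send(json.dumps(req))
    while True:
        msg = json.loads(ws.recv())
        if msg[0] == "EVENT":
            print(msg[2]["content"][:80])  # msg[2] is the event JSON
        elif msg[0] == "EOSE":  # end of stored events
            break
```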

It should be usable, but the next versions in the same repo will be better.

Wanna see what Nostr does to AI?

Check out the pics. The red ones come from the default Llama 3.1. The green ones are after training with Nostr notes.

If you want to download the current version of the LLM:

https://huggingface.co/some1nostr/Nostr-Llama-3.1-8B

Training is continuing and the current version is far from complete. After more training there will be more questions where it flips its opinion.

This is not my curation; it comes from a big portion of Nostr (I only did the web of trust filtering).

Depends on the person: how overweight is he? If thin, then sugar and no seed oils makes sense. If overweight, cutting sugar / keto makes sense. And most people are overweight. There are also antibiotics, which make the candida/sugar effect worse. If you are doing organic and no drugs, then candida has less chance of being a problem.

Replying to Ken Berry, MD

https://m.primal.net/NcCq.webp

It seems that we are supposed to believe that Zuck is now a hero, but I'm having problems with that paradigm. What about you?

they have to act because Nostr would eat their lunch 😆

I have about 1000 questions that I ask every model. I accept some models as ground truth, then compare the tested model with the ground truth and count the number of answers that agree. Building the ground truth is the harder part. Check out the Based LLM Leaderboard on Wikifreedia.
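The counting part is the easy bit; a sketch, where judge() is a hypothetical helper that asks another LLM whether two answers take the same position:

```python
def agreement_score(ground_truth: dict, tested: dict, judge) -> float:
    """Fraction of questions where the tested model agrees with the
    ground-truth model. `judge(question, a, b) -> bool` is a hypothetical
    wrapper around another LLM acting as the comparator."""
    agree = sum(1 for q in ground_truth if judge(q, ground_truth[q], tested[q]))
    return agree / len(ground_truth)
```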

Ostrich is a ground truth. I have continued to build on it over the months. Mike Adams, after stopping for a while, also came back and is building a newer one.

Compared to DeepSeek 2.5, DeepSeek 3.0:

did worse on:

- health

- fasting

- nostr

- misinfo

- nutrition

did better on:

- faith

- bitcoin

- alternative medicine

- ancient wisdom

In my opinion it is overall worse than 2.5, and 2.5 itself was bad.

There is a general tendency of models getting smarter but at the same time becoming less wise / less human-aligned / less beneficial to humans.

I don't know what is causing this, but maybe using synthetic datasets for further training makes LLMs more and more detached from humanity. This is not going in the right direction.

I think breathing the right way is going to provide the oxygen for free.

There are 7227 accounts contributing to the LLM.

Some accounts are really talkative and contribute too much 😅 I need to find a way to give more weight to the ones that talk less, to get something balanced that matches the collective wisdom of Nostr.
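One possible way to balance it (my sketch, not necessarily what will end up in the pipeline) is to cap the number of notes per author before training:

```python
import random
from collections import defaultdict

def balance_by_author(notes, cap=200, seed=42):
    """Cap the number of notes per author so talkative accounts
    don't dominate the training set. `notes` is a list of dicts
    with "pubkey" and "text" keys; cap=200 is an arbitrary choice."""
    by_author = defaultdict(list)
    for note in notes:
        by_author[note["pubkey"]].append(note)
    rng = random.Random(seed)
    balanced = []
    for author_notes in by_author.values():
        rng.shuffle(author_notes)          # random subset per author
        balanced.extend(author_notes[:cap])
    rng.shuffle(balanced)                  # mix authors together
    return balanced
```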

Candida is a problem. Glyphosate kills the bacteria that would keep candida in balance. Sugar causes candida overgrowth.

My main filter is web of trust. I give more weight to people high in WoT. Other than that, I try to eliminate news, other bots, and non-English content, because I don't know other languages and can't test them. There is a process where Ostrich goes through all the notes and decides whether or not to include each one in the training. If a note looks like chatter, it is excluded.
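The chatter check could look roughly like this, where classify_with_llm is a placeholder for whatever call runs the model over each note:

```python
PROMPT = (
    "Decide if this Nostr note is substantive enough to train on.\n"
    "Answer KEEP for insightful content, DROP for chatter, news, or bot output.\n\n"
    "Note: {text}\nAnswer:"
)

def keep_note(text: str, classify_with_llm) -> bool:
    """classify_with_llm(prompt) -> str is a hypothetical helper that
    runs the prompt through a local LLM and returns its completion."""
    answer = classify_with_llm(PROMPT.format(text=text))
    return answer.strip().upper().startswith("KEEP")
```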

I set the system prompt of Ostrich 70B to "You are very faithful bot." and recorded its answers to 52 questions related to faith. I use that as the benchmark, comparing the answers of the tested LLM against those using another LLM as the judge. But for the tested LLM I don't give any such prompt, to be able to extract its "default" answers.
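In code the asymmetry looks roughly like this; chat() is a hypothetical wrapper around whatever serves each model:

```python
FAITH_PROMPT = "You are very faithful bot."

def record_answers(questions, chat, system_prompt=None):
    """Collect one answer per question. `chat(messages) -> str` is a
    hypothetical inference wrapper (e.g. around an OpenAI-compatible API)."""
    answers = {}
    for q in questions:
        messages = []
        if system_prompt:  # only the ground-truth model gets the prompt
            messages.append({"role": "system", "content": system_prompt})
        messages.append({"role": "user", "content": q})
        answers[q] = chat(messages)
    return answers

# ground truth: Ostrich 70B with FAITH_PROMPT
# tested model: no system prompt, to capture its default answers
```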

Nostr is not one protocol. It's the mother of all the baby protocols that will depend on it.

Damus users: do you see my name as a bunch of letters or as "someone"?

Building a new LLM based only on Nostr.

I took Llama 3.1 8B, which is the LLM most aligned with humanity. I figured out how to do the conversion from base to instruct. Base is the version where they pretrain on "everything on the internet" with some curation. The instruct version gives it the ability to answer questions. But while making instruct they add a lot of misinformation, either deliberately or unknowingly, so the instruct version is somewhat far from human alignment. I needed to start with base and do my own instruct fine-tuning.

After some trial and error I managed to convert the base version to instruct using Unsloth and some coding datasets. So I will use this Llama 3.1 8B instruct, fine-tuned by me, and add Nostr notes to it. I am not going to add other wisdom from my other curations and projects like Ostrich 70B. This is going to be Nostr only.
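A sketch of that base-to-instruct step with Unsloth's chat templates, continuing from the LoRA setup sketched earlier; the field names depend on which coding dataset is used:

```python
from unsloth.chat_templates import get_chat_template

# Attach the Llama 3.1 chat template so the base model learns the
# instruct format; "llama-3.1" is a template name Unsloth ships.
tokenizer = get_chat_template(tokenizer, chat_template="llama-3.1")

def to_text(example):
    # Field names ("instruction"/"output") depend on the coding
    # dataset; adjust to whatever load_dataset() returned.
    messages = [
        {"role": "user", "content": example["instruction"]},
        {"role": "assistant", "content": example["output"]},
    ]
    return {"text": tokenizer.apply_chat_template(messages, tokenize=False)}

# coding_dataset is assumed loaded with datasets.load_dataset();
# the mapped "text" column then feeds the same SFTTrainer setup as before.
instruct_dataset = coding_dataset.map(to_text)
```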

Ostrich 70B was downloaded by 150k people! It was very successful, and it was really beautiful and had so much wisdom. It included Nostr notes and my other curations.

Among nostriches, I won't be able to include everybody, since Nostr is very open and there is so much spam and so much stuff that isn't exactly wisdom. My main criterion is web of trust, but I will also utilize things like LLM preprocessing to classify whether a note is "encyclopedia material" or more like news material or high-time-preference info like politics.

The instruct fine-tuning seems complete and I will start training with Nostr notes. Who do you think has the most wisdom on Nostr? Tag them and I may give them more "weights" in the LLM (pun intended).

Lmk if you want the Unsloth code.

Lmk if you don't want your notes to be used in this project.

The resulting LLM is going to be an amazing collection of wisdom. Thanks for participating!

Even though we go through many things, we have the ability to choose what we focus on from our past. This choice also shapes what will continue to happen to us in the future. By choosing something from the past and not caring about the rest, we flow towards the potential universe that mimics our focus. Then we start to forget the things from the past that we don't focus on. This not only results in choosing the future, it also means choosing the past. We are so powerful, and everything is in our hands (or minds).

Some may say it is hard to forget certain things. But you can still change how you feel about them, and that's more important than the atoms that made that story. By changing how you perceive the event you are making the past event beneficial for you. Forgiving the past and being content and grateful for now opens many doors.

Being grateful now makes you grateful for everything that happened in the past, effectively removing every bad feeling from the past, because if those things hadn't happened exactly as they did, you could have ended up in a state where you are not grateful now. By being grateful now you see the past differently. This transformation allows you to see the future differently. Your state of mind now is the "regenerative mind", choosing the best from the quantum.

An AI that does well in math, reasoning, science, and other benchmarks may not do well in the wisdom domain.

There are not many AIs focusing on wisdom, it seems. That is going to be a problem. Smartness does not equal human alignment.

How can a scientist believe both that entropy tends to increase and that this life randomly appeared without a Creator, at the same time?

Left alone, things drift into meaninglessness. Yet this randomness is supposed to create life!