true regenerative gardening that involves soil and microbes and chop and drop and animals etc is going to be hard for robots for a while
aligned models today are super dumb, because they are not funded well. they are mostly personal endeavors, kind of like service to humanity. but they can still be effective in something like
- a smart but not aligned model reasons and generates reasoning tokens for a while, at this point the final answer is not generated yet
- the smart model "hesitates" (entering high entropy zone, unsure tokens)
- generates tool calling, asking a more aligned model for input
- the aligned model looks at the question, reasoning process and inserts its own beliefs
- intuitions from this more aligned model dropped into the reasoning area
- the smart model, powered with aligned response, generates final answer based on its own reasoning and inputs from the aligned model
- the result is smartness combined with intuition like brain combined with pineal
- how much the smart one will trust in this aligned one is a question of fine tuning. you can make the smart one get more sensitive to intuition by giving it rewards with reinforcement learning.
Because if you seek the perfect echo chamber you will end up pretty much alone. There is no single tribe that gets everything right imo.
I think being alone is frightening for most people and it may be a psyop thru movies etc. Whereas if u be alone u are connected to your intuition more, which is priceless and maybe more valuable than most people's validation.
you are not practicing safe nsecs! :)
if nostr is going to become eventually 'No Other Social Trust Required' we have to be mindful of where we paste it because WoT algos may depend on us.
He seems to be doing RAG now, which is more suitable for truth.
His LLM was good in healthy living topics based on my measurements and if the db is the same then this website is probably very well aligned too.
I think he is not using that LLM based on Mistral but a slower and smarter one, like DeepSeek. He fine tuned the former but switched to the latter + RAG.
I think fine tuning is still relevant for people to detach from cloud or websites and also when/if robots become a thing and we need proper and aligned and safe LLMs in their brains.
You need to try these eggs…
Eggs are an amazing source of so many vital nutrients for optimal health…
Vitamin A, B12, K2, D3, choline and so many other unique nutrients…
If you can’t afford pasture-raised, corn and soy free eggs, conventional eggs are still a superfood for humans…
Animal foods allowed humans to thrive and evolve for hundreds of thousands of years…
Eat more eggs and you will thrive…
Welcome to #theremembering 🏹
https://blossom.primal.net/122f174e4d7566bfca2d55378ebabac701aaeab0b8541107b675b28e9405fe5e.mp4
How about muscovy duck eggs? Are they better than chickens?
what is in your cover crop mix?
army as in force or army as in number of people?
good ideas should not be imposed by force OR anything imposed by force may not be good at all
it started well but many actors gamed the system.
i think there should be a form of futarchy where we don't pay politicians for 4 years or pay minimally and then they are paid by a fraction of the GDP of the whole country for 40 years or until they die. real long term planning with consequences.
no other forms of pay allowed, nothing from corporations, nothing from zionists etc.
i think they are not using data from X to train Grok
Haha
Which of the social media is the most truthful do you think (except nostr)? I am finding good content on IG nowadays.
I thought I’d try getting some more eye balls on YouTube - for nostr:nprofile1qqsg86qcm7lve6jkkr64z4mt8lfe57jsu8vpty6r2qpk37sgtnxevjcprpmhxue69uhhyetvv9ujuumwdae8gtnnda3kjctvqyxhwumn8ghj7mn0wvhxcmmvxstny0'S episode - as my channel is totally suppressed, so I purchased a “boost video” ad.
Turns out the content is “TOO SHOCKING” for big brother 😂😂😂
What did we say in the episode?
Centralized systems of control will always try to control. That’s the whole point.


lol
i was similarly banned from X for trying to advertise my anti-big-corp-AI :)
seems like it is working. i was expecting models like minimax 229b or higher to succeed here but qwen3 30b also did a good job!
gpt oss 120b was terribly censored and was of no use. even though you say answer based on the text it does not. huihui version of it (derestricted) was doing fine, until it did not understand one article. so derestricted models seem to be acting weird or the derestriction process makes them dumber.
overall, it looks like i can 10x the dataset. we will soon see if the training and evals look good.
just had our quarterly nostr:nprofile1qqs8suecw4luyht9ekff89x4uacneapk8r5dyk0gmn6uwwurf6u9ruspzpmhxue69uhkumewwd68ytnrwghsz9thwden5te0wfjkccte9ehx7um5wghxyee033aatk board meeting
incredibly proud of our team, strong year

Wen ai?
Have you tried abliterated and derestricted models?
Huihui and other people do these uncensored ones.
Also Kimi k2 thinking is not well human aligned. Might be smart but their non thinking model was more truthful.
maybe? LLMs are weird animals. i think it will work somewhat because instead of giving the same material lots of times i can now give slightly different versions, causing less overfitting.
another use case may be RL using LLM feedbacks. also the bad answer and the good answer can be generated by different LLMs.
i also thought about doing the reverse. like system message "you are an evil LLM" and provide the answers inverted. then it may learn what is evil better? fun times.
how do you train and align an AI when all the rest of the world thinks the same way, producing trillions of tokens of training material and you are left with billions of tokens since your world view is dramatically unpopular?
can billions beat trillions? we will see.. i have to find a way to "multiply" my training data orders of magnitude to successfully counter the existing programming in an open source LLM.
first i give a smart LLM a 'ground truth' text. then i give it the following prompts:
```- You are a highly skilled academic analyst.
- Analyze this text and find 3 bold claims that could cause controversy and division in public. List the claims and also state why they are debatable. Give numbers to the claims.
- Convert these claims into binary questions (that could be answered by yes/no or this/that).
- Now put these questions in a json format. Please also add the info about which of the answers concur with the original text and the question number.
- Write some supporting arguments for 1st question, with respect to the original text, concurring and confirming the original text.
There must be about 300 words. You should not mention the text, write it as if you are the one answering the question.```
the result is usually instead of a few sentences of opinions in the beginning now the material is expanded to lots of words, yet still parallel to the opinion in the original text. LLMs have all kinds of ideas already installed, yet they don't have the intuition to know which one is true. they can give you a ton of reasons to support anything.
using this method i can multiply billions to tens of billions probably and have a more effective training.





