that was the strategy of Napoleon :)
that escalated quickly
would you like to hear about an AI + bitcoin + nostr project? promise no *str ending!
nice! humans are complex enough to be able to discover their true self. sometimes i think syntropy towards crystal unity vs entropy towards less evolved creatures.
it's all for your safety
https://www.dw.com/en/china-calls-for-global-consensus-on-ai-regulation/a-73420599
the bugs eat the unhealthy plants in the garden so you don't have to eat. you should thank them :)
the parasites in your body are cleaning the heavy metals and poisons, so you can live longer. you should thank them :)
everything in the universe conspires to build a better you. the evil disappears when you lose the fear of evil and look at it differently.
there is no evil. it is just your distraction embodied.
gm & pv !
you didn't count herbicides ;)
I'd never eat a well-done steak..
https://blossom.primal.net/fd010a4728d97b1502b9632296757edcc27377b2cd3b3045d07e36223af04009.mov
does that mean raw is more nutritious?
does the cattle owner spray "weed" (dicot) killer on the grass?
it is probably a misunderstanding
that means nostr is more validated
the second probably
makes sense! building great LLMs instead of great libraries
Is it actually settling on blockchain?
Benchmarked the Kimi K2 LLM. It did well. DeepSeek V3 beats it overall, but Kimi K2 might be more skilled. Very close to Qwen 3 in terms of skills and human alignment, but with a huge parameter count (1T!).

https://sheet.zoho.com/sheet/open/mz41j09cc640a29ba47729fed784a263c1d08?sheetid=0&range=A3
It is costly to train from scratch; fine-tuning makes more sense for me. Not all the LLMs are super terrible. Llama models rank higher than the rest, but they are surely not optimal. Generally, Western models are doing better.
