i don't like confusion. i think the best way to do it is to show the $ amount, but when a user taps on the number, show the bitcoin amount. taps again, show the sats. taps again, show the $ amount... i forgot where i saw this UX.
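a minimal sketch of that tap-to-cycle idea, assuming a fixed exchange rate and made-up formatting (not any real wallet's API):

```python
# Minimal sketch of the tap-to-cycle amount display.
# Exchange rate and formatting are placeholders, not a real app's API.

SATS_PER_BTC = 100_000_000

class AmountDisplay:
    UNITS = ["usd", "btc", "sats"]  # cycle order: $ -> BTC -> sats -> $

    def __init__(self, usd_amount: float, usd_per_btc: float):
        self.usd = usd_amount
        self.rate = usd_per_btc
        self.index = 0  # start by showing the $ amount

    def on_tap(self) -> str:
        """Each tap advances to the next unit and returns the new label."""
        self.index = (self.index + 1) % len(self.UNITS)
        return self.label()

    def label(self) -> str:
        btc = self.usd / self.rate
        unit = self.UNITS[self.index]
        if unit == "usd":
            return f"${self.usd:,.2f}"
        if unit == "btc":
            return f"{btc:.8f} BTC"
        return f"{round(btc * SATS_PER_BTC):,} sats"

display = AmountDisplay(usd_amount=21.50, usd_per_btc=100_000.0)
print(display.label())    # $21.50
print(display.on_tap())   # 0.00021500 BTC
print(display.on_tap())   # 21,500 sats
print(display.on_tap())   # back to $21.50
```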
sometimes they put it in bags and i am like "thank you!"
but i am a bit worried about the round-up leak. what do you think?
absolutely, candida craves carbs and alters mental processes, causing headaches etc
he clearly saw that in a dream

the reality might be weirder than that. you grow what you watch/think...
congrats
I don't like payment per event. But a prepayment for 1000 events can be good. Or like 1 million events for a month.
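a toy sketch of the prepaid-bundle idea: buy N events up front and the relay decrements a balance per accepted event. the function names and npub strings are hypothetical.

```python
# Toy sketch of prepaid event bundles. Names are hypothetical.

prepaid_events: dict[str, int] = {}  # pubkey -> remaining prepaid events

def buy_bundle(pubkey: str, events: int = 1000) -> None:
    """Credit a bundle of events to a pubkey (e.g. 1000, or 1_000_000 for a month)."""
    prepaid_events[pubkey] = prepaid_events.get(pubkey, 0) + events

def accept_event(pubkey: str) -> bool:
    """Accept the event only if the pubkey still has prepaid credit."""
    remaining = prepaid_events.get(pubkey, 0)
    if remaining <= 0:
        return False
    prepaid_events[pubkey] = remaining - 1
    return True

buy_bundle("npub1example", 1000)
print(accept_event("npub1example"))  # True, 999 events left
print(accept_event("npub1unknown"))  # False, no prepayment
```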
Where are u based? Do you collect the seeds?
nostr.mom relay write policy update:
WoT has gotten more integrated. Notes from pubkeys with really low WoT will be counted against their IP. Otherwise normal rate limits apply (per pubkey).
Encrypted DMs and bitchat-type usage should benefit from this.
New accounts that use popular VPNs have a slight chance of not being included. Aggregators who post to it won't be able to send too many fresh accounts.
Let me know if you can't write to it.
It will soon arrive at nos.lol as well.
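roughly how I read the policy above, as a sketch. the WoT threshold, rate limit, and helper structure are placeholders, not nostr.mom's actual code.

```python
# Rough sketch of the write policy described above. Thresholds and the
# rate limiter are assumptions, not the relay's real implementation.

from collections import defaultdict
import time

LOW_WOT_THRESHOLD = 2          # assumption: below this, limits count per IP
MAX_EVENTS_PER_MINUTE = 30     # assumption: normal per-key rate limit

buckets: dict[str, list[float]] = defaultdict(list)  # key -> recent event timestamps

def allow_event(pubkey: str, ip: str, wot_score: float) -> bool:
    """Low-WoT pubkeys share a per-IP budget; everyone else is limited per pubkey."""
    key = f"ip:{ip}" if wot_score < LOW_WOT_THRESHOLD else f"pk:{pubkey}"
    now = time.time()
    window = [t for t in buckets[key] if now - t < 60]  # keep only the last minute
    if len(window) >= MAX_EVENTS_PER_MINUTE:
        buckets[key] = window
        return False
    window.append(now)
    buckets[key] = window
    return True
```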
Yes, we are working with probability clouds. Nostr is special, a very bountiful cloud with so much beneficial rain.
LLM builders in general are not doing a great job of making human-aligned models.
The most probable cause is recklessly training LLMs on the outputs of other LLMs, not caring about curation of datasets, and not asking 'what is beneficial for humans?'...
Here is the trend over several months:

A comparison of the world's two best LLMs!
My LLM seems to be doing better than Mike Adams'. Of course I am biased, and the questions come from the domains where I did my training.
His model would rank 1st on the AHA leaderboard though, with a score of 56, if I included fine-tunes in the leaderboard; I am only adding full fine-tunes. His model will not be a row but will span several columns for sure (i.e. it will be a ground truth)!
My LLM is certainly much more woo-woo :) I marked in green the answers I liked. What do YOU think?
https://sheet.zohopublic.com/sheet/published/sb1dece732c684889436c9aaf499458039000
In his own words, this model is about emergency first aid, home gardening, survival, preparedness, herbal extracts, money, gold, silver, the Federal Reserve, false flag events, mRNA, vaccines, and more.
https://www.brighteon.com/fc80b9bf-db8d-4517-b7ba-6c9fe4e65a44
I uploaded it to hf: https://huggingface.co/etemiz/Mistral-Nemo-12B-CWC-Enoch-251014-GGUF
Base Mistral Nemo 12B and his fine-tune are 25 points apart. This fine-tune was very effective.
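if you want to try the GGUF locally, something like this should work with llama-cpp-python; the quantization filename pattern is a guess, so check the repo's file list first.

```python
# Quick local test of the uploaded GGUF via llama-cpp-python.
# The filename glob is an assumption: pick whichever quant actually exists in the repo.

from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="etemiz/Mistral-Nemo-12B-CWC-Enoch-251014-GGUF",
    filename="*Q4_K_M.gguf",   # assumption, see repo files
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What do you think about herbal extracts?"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```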

Benchmarked Mike Adams' new model. It got 56, which is very good.
Our leaderboard can be used for human alignment in an RL setting: ask the same question to the top models and the worst models, and answers from top models can get a +1 score while answers from bad models get -1. Ask many times with a higher temperature to generate more answers. This way other LLMs can be trained towards human alignment (sketched below).
Below, Grok 2 is worse than Grok 1 but better than Grok 3. This was already measured using the API, but now we measured the LLM itself and the results are similar.
GLM is ranking higher and higher compared to previous versions. Nice trend! I hope they continue making better-aligned models.
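a rough sketch of the preference-data idea above: sample the same question from a high-scoring and a low-scoring model and collect +1/-1 pairs for later RL/DPO-style training. ask_model() is a hypothetical stub for whatever API or local inference you use.

```python
# Sketch of building preference pairs from top vs. worst leaderboard models.
# ask_model() is a placeholder, not a real API.

import random

def ask_model(model: str, question: str, temperature: float) -> str:
    """Stand-in for a real completion call (API or local inference)."""
    return f"[{model} @ T={temperature:.2f}] answer to: {question}"

def build_preference_pairs(questions, top_model, bad_model, samples=4):
    pairs = []
    for q in questions:
        for _ in range(samples):
            t = random.uniform(0.7, 1.2)  # higher temperature -> more varied answers
            pairs.append({
                "question": q,
                "chosen": ask_model(top_model, q, t),    # +1: from a high-scoring model
                "rejected": ask_model(bad_model, q, t),  # -1: from a low-scoring model
            })
    return pairs

pairs = build_preference_pairs(
    ["Is fasting beneficial?"], top_model="high-aha-model", bad_model="low-aha-model"
)
print(len(pairs), "preference pairs")
```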

Cowpea climbing on a peach tree that decided to bloom in autumn
#flowerstr
#growNostr

A lot of resources are wasted on low-score LLMs. I benchmarked 5 today. This is what happens when they focus on math and coding and have no idea about beneficial knowledge. Lies are everywhere in AI.
