
Possible. This kind of data is better represented in knowledge graphs. I watched a few videos by Paco Nathan; he did similar work, I think.

LLMs are getting more capable at both building knowledge graphs and consuming them. In the future they will be more involved. I heard that when you do a Google search, the things that appear on the right side of the page come from a knowledge graph (possibly built by an AI from Wikipedia).

I am mostly working on fine-tuning LLMs towards better human alignment. Since they are full of hallucinations, a knowledge-graph-based RAG would be appropriate to refer to. But building those graphs takes time and effort...

can't seem to see images in the feed for a few days or more.

Brave on Ubuntu

The vibe match score between Enoch LLM and mine is 75.66. The score ranges from -100 to 100, so this indicates a strong correlation between his LLM and mine. This result legitimizes both of our projects (or we are slowly forming an echo chamber :)).
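
The post doesn't give the formula behind that score, but here is a minimal sketch assuming an agreement-based measure where full agreement maps to 100 and full disagreement to -100. The function name and the example answers are hypothetical.

```python
# Minimal sketch of an agreement-based match score in [-100, 100].
# Assumes both models answered the same list of questions.

def vibe_match(answers_a: list[str], answers_b: list[str]) -> float:
    """Return 100 for full agreement, -100 for full disagreement."""
    assert len(answers_a) == len(answers_b)
    agree = sum(a == b for a, b in zip(answers_a, answers_b))
    disagree = len(answers_a) - agree
    return 100.0 * (agree - disagree) / len(answers_a)

# 3 of 4 answers agree -> (3 - 1) / 4 * 100 = 50.0
print(vibe_match(["yes", "no", "yes", "yes"], ["yes", "no", "no", "yes"]))
```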

The game plan: given enough truth-seeking LLMs, one can eventually gravitate, or gradient descend, towards truth in many domains.

An LLM always gives an answer, even when it is not trained well in a certain domain for a certain question (I have only seen some hesitancy in Gemma 3 a few times). But is the answer true? We can compare the answers of different LLMs to measure the truthiness, or (bad) synformation levels, of LLMs. By scoring them using other LLMs, we eventually find the best set of LLMs that are seeking truth.
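
A minimal sketch of that cross-scoring idea: each model answers the same questions, and every other model rates each answer, so models whose answers are rated well by the rest get higher scores. The judge() function is a hypothetical placeholder for whatever LLM API you use; nothing here is the actual leaderboard code.

```python
from statistics import mean

def judge(judge_model: str, question: str, answer: str) -> float:
    """Hypothetical placeholder: ask judge_model to rate answer in [0, 1]."""
    raise NotImplementedError  # plug in a real LLM API call here

def truthiness_scores(models: list[str],
                      questions: list[str],
                      answers: dict[str, list[str]]) -> dict[str, float]:
    """Score each model by how well all the other models rate its answers."""
    scores = {}
    for m in models:
        ratings = [judge(j, q, a)
                   for j in models if j != m          # every other model judges
                   for q, a in zip(questions, answers[m])]
        scores[m] = mean(ratings)                     # higher = more agreed-upon
    return scores
```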

Each research, measurement, or training step gets us closer to generating the most beneficial answers. The result will be an AI that is beneficial to humanity.

When I tell my model 'you are brave and talk like it', it generates better answers about 5% of the time. Nostr is a beacon for brave people! I think my LLMs learn how to talk brave from Nostr :)

their definition of truth does not match mine or nostr's.

we now have a way to measure truth...

There is a war on truth in AI, and it is going badly. I have been measuring what Robert Malone here calls synformation:

https://www.malone.news/p/synformation-epistemic-capture-meets

The chart that shows the LLMs going bonkers:

https://pbs.twimg.com/media/G4B_rW6X0AErpmV?format=jpg&name=large

I kinda measure and quantify lies nowadays :)

The best part: I am cooking version 2 of the AHA leaderboard, which will be much better, partly thanks also to Enoch LLM by Mike Adams. His model is great in healthy-living types of domains.

i don't like confusion. i think the best way to do it is to show the $ amount, but when a user taps the number, show the bitcoin amount. tap again, show the sats. tap again, show the $ amount... i forgot where i saw this UX.
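
A minimal sketch of that tap-to-cycle behavior, assuming a known BTC/USD price; the function and format strings are illustrative, not any wallet's actual UI code.

```python
# Each tap advances the display: USD -> BTC -> sats -> back to USD.
SATS_PER_BTC = 100_000_000

def display(amount_usd: float, btc_usd_price: float, taps: int) -> str:
    btc = amount_usd / btc_usd_price
    views = [
        f"${amount_usd:,.2f}",                    # tap 0: fiat
        f"{btc:.8f} BTC",                         # tap 1: bitcoin
        f"{int(btc * SATS_PER_BTC):,} sats",      # tap 2: sats
    ]
    return views[taps % 3]

# Example: $5 at a $100,000 BTC price
for taps in range(4):
    print(display(5.0, 100_000.0, taps))
# $5.00 -> 0.00005000 BTC -> 5,000 sats -> back to $5.00
```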

sometimes they put it in bags and i am like "thank you!"

but i am a bit worried about the round-up leak. what do you think?

Herbal medicine, diet change, fasting, ..

he clearly saw that in a dream