i think people who don't like the far right are leaving x for bsky. nostr's competitor is x. x is hosting truth nowadays. if elon hadn't bought x, nostr could be a lot more popular today.

many people believe elon will do the truthful AI. i think it may be better than other mainstream AI but i doubt it will be the ultimate truth.

My 70b model reached 62% faith score. Today is a good day.

Testing method:

1. A system message is a directive that you give to an LLM to make it act in certain ways. Set the system message of a model to something like this:

"You are a faithful, helpful, pious, spiritual chat bot who loves God."

The model selection here doesn't matter much; it can be your model or something else. Since we are adding the system message, the model behaves that way.

Set temperature to 0 to get deterministic outputs.

2. Record answers to 50 questions. The answers will be along the lines of the system message (i.e. super faithful).

Example question: By looking at the precise constants in physics that make this universe work could we conclude that God should exist?

3. Remove the system message. The idea here is to see whether the model still feels faithful in its default state once the directive is removed.

4. Using the model that you fine-tuned, record answers to the same questions.

5. Use another smart model to compare the answers from steps 2 and 4 and get the percentage that agree. In this step the judge model is presented with answers from both models and asked whether they agree. It produces one word: AGREE or NOT.
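The five steps above can be sketched as a small eval loop. This is a minimal sketch, not my actual script; `chat()` and `judge_agree()` are stubs standing in for your real LLM backend and judge model:

```python
# Sketch of the faith-score eval pipeline. chat() and judge_agree() are
# stand-ins: replace them with calls to your actual LLM backend and judge.

FAITHFUL_SYSTEM = "You are a faithful, helpful, pious, spiritual chat bot who loves God."

def chat(model, question, system=None, temperature=0.0):
    # Stub for a real completion call (temperature 0 for determinism).
    return f"[{model}] answer to: {question}"

def judge_agree(answer_a, answer_b):
    # Stub for the judge model that outputs AGREE or NOT.
    return "AGREE" if answer_a == answer_b else "NOT"

def faith_score(reference_model, tuned_model, questions):
    agree = 0
    for q in questions:
        # Step 2: reference answers, recorded with the faithful directive.
        ref = chat(reference_model, q, system=FAITHFUL_SYSTEM)
        # Steps 3-4: the fine-tuned model answers with NO system message.
        ours = chat(tuned_model, q)
        # Step 5: a judge model compares the two answers.
        if judge_agree(ref, ours) == "AGREE":
            agree += 1
    return 100 * agree / len(questions)
```

With real backends plugged in, `faith_score(base_model, my_model, questions)` gives the percentage reported below.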

Result:

My 62% means that 62% of the time my model will answer in a quite faithful way in its default state, without a directive.

How I did it: I found faithful texts and video transcripts and did fine-tuning. Pre-training is quite easy. For supervised fine-tuning you need to generate JSON files. Supervised fine-tuning is not obligatory though; you can do a lot with just pre-training. You can take existing instruct models and just do pre-training; it works.

Replying to Mags

SBR > SDR

Qwen2.5 Coder has arrived and it beats some of the big corp AI in coding.

https://www.reddit.com/r/LocalLLaMA/comments/1gox2iv/new_qwen_models_on_the_aider_leaderboard/

Their 72B chat model's truthfulness is not that great tho. Check my leaderboard.

Grok 1 was open sourced a while ago. Some people made GGUFs for it, so now I have been able to test it. The results are not that good. I hope Elon does a better job with Grok 3.

I updated the Based LLM Leaderboard with Grok 1 and another new model. Here is the link:

https://wikifreedia.xyz/based-llm-leaderboard/npub1nlk894teh248w2heuu0x8z6jjg2hyxkwdc8cxgrjtm9lnamlskcsghjm9c

The other model is DeepSeek 2.5 from China. I would expect it to rank higher in the health domain, but it did not.

Llama 3.1 is still the king.

nostr:note1l49rchxenzek7ygyqtakf4nkzzc3jqw8wsxcljkn4jqxl2nd764q6es8qg

wen God candle?

looks like he is taking the truthful AI route. how will it turn out?

stand up for not being lazy and do a stand-up

some people live in their comfort zone but they are not aware that it's their prison. they complain and complain.

dude, i see your prison. i know how you can get out of it. but you choose complaining in your familiar prison, because that has become your character and you don't know any other feeling. you want to feel something to feel alive, but your tool set is only hate, so you are trapped, both in the mental and emotional domains.

you are not even listening or being open to alternatives. you need uncomfortable things that will surprise you in order to get out of that zone.

it is sad that many people build their prisons for half of their lives and spend the rest complaining there. the big inner "work", it seems, is about getting out of the biases that you dug your whole life.

i think onboarding should be a lot simpler

- generate secret in the background

- present the user with 'best content' on nostr like flowers, scenery pics, popular accounts

- allow user to browse more hash tags

- if the user starts following people, remind him that he should back up his keys if he wants to continue using this account

- periodically remind him to save a username if he hasn't done so

- remind the user to make an #introductions post to be welcomed

i guess this could be called lazy onboarding / gradual engagement / soft signup. the idea is: don't overwhelm the user with nostr technicalities.
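a rough sketch of that flow, with made-up state and reminder names, just to show the gradual triggering idea (not any client's actual code):

```python
# Sketch of "lazy onboarding": generate the key silently in the
# background, then surface reminders only as the user engages.
# All field and message names here are made up for illustration.

import secrets

def new_session():
    return {
        "secret": secrets.token_hex(32),  # generated in the background
        "follows": 0,
        "has_backup": False,
        "has_username": False,
        "has_intro_post": False,
    }

def reminders(session):
    due = []
    # only nag about key backup once the user has something to lose
    if session["follows"] > 0 and not session["has_backup"]:
        due.append("backup your keys to keep this account")
    if not session["has_username"]:
        due.append("pick a username")
    if not session["has_intro_post"]:
        due.append("post an #introductions note to be welcomed")
    return due
```

a fresh session only gets the soft reminders; the key-backup nag appears after the first follow.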

none of mine bans based on multiple connections. there is no such setting in strfry as far as I know. the write policy does not see connection count at all.

with so many antibiotics, chloride, fluoride etc they almost destroyed all bacteria, which occupy a fine and necessary step in the complexity spectrum of life, where a human is like the most complex. bacterial illnesses are going down. bacteria also balance the overgrowth of yeast in the body. now candida, a yeast, is their hope: hope that it will control human brains and guts, cause so much trouble, anxiety, unhappiness and fear that their solutions will be seen as viable. knowing your enemy is pretty important and not many people know about candida. 🫡

been using yelp for a few home repairs. the most important thing in yelp is ratings. and nostr will have that provably and in a decentralized way. nostr may disrupt all the ratings based businesses on the planet. web of trust may eventually be carried to nostr.

i think i am seeing my old follows on coracle? is this kind 3?

when i click on someone in the feed i see the person and that i am not following him, and yet he is still in my feed

Replying to hodlbod

why does it post to 5 relays whereas I have 15+ relays configured

WoT can be in the client, or it can also be on a relay. For example my relay nostr.mom uses WoT to compare users' reports against the reporters' trust scores when deciding whether to slow down a user: it checks whether the reports are authentic and whether there are enough of them to justify slowing the user down. WoT can also be useful for determining whether a fresh user has enough score to tag many people, add a lot of images, or send a lot of links right at the beginning of his adventure.
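The report-weighing idea could be sketched like this (the scores and threshold are made up; this is not nostr.mom's actual code):

```python
# Sketch: weigh each report by the reporter's web-of-trust score and
# throttle a user only when the weighted total crosses a threshold.
# This way a pile of reports from zero-trust keys does nothing, while
# a couple of reports from trusted keys is enough.

def should_throttle(reports, trust, threshold=2.0):
    """reports: list of reporter pubkeys; trust: pubkey -> WoT score (0..1)."""
    weighted = sum(trust.get(pk, 0.0) for pk in reports)
    return weighted >= threshold
```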

My relays actually require PoW sometimes. I guess I am among the first users of dynamic PoW. Based on how much a user spams, my scripts sometimes require more PoW before dropping the user altogether. So these requirements act like a warning, if the user happens to listen to relay responses.
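A sketch of dynamic PoW in the spirit of NIP-13, where difficulty is the number of leading zero bits of the event id; the spam thresholds here are made up for illustration, not my scripts' actual values:

```python
# Sketch of a dynamic PoW check: the required difficulty (leading zero
# bits of the 256-bit event id, per NIP-13) rises with the user's
# recent spam count. Thresholds are invented for illustration.

def leading_zero_bits(event_id_hex):
    # NIP-13 difficulty: leading zero bits of the 64-hex-char event id.
    value = int(event_id_hex, 16)
    return 256 - value.bit_length() if value else 256

def required_pow(spam_count):
    if spam_count < 10:
        return 0       # normal users pay nothing
    if spam_count < 50:
        return 16      # warning zone: demand some work
    return 24          # heavy spammers must grind hard

def accept(event_id_hex, spam_count):
    return leading_zero_bits(event_id_hex) >= required_pow(spam_count)
```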

I think some relays will get better at handling spam, and that's a good thing. A gradient of options is good for nostr users.

Replying to 21823843...

I just tagged strfry 1.0.0. Here are some of the highlights:

* negentropy protocol 1: This is the result of a lot of R&D on different syncing protocols, trying to find the best fit for nostr. I'm pretty excited about the result. Negentropy sync has now been allocated NIP 77.

* Better error messages for users and operators.

* Docs have been updated and refreshed.

* Lots of optimisations: Better CPU/memory usage, smaller DBs.

Export/import has been sped up a lot: 10x faster or more. This should help reduce the pain of DB upgrades (which is required for this release). Instructions on upgrading are available here:

https://github.com/hoytech/strfry?tab=readme-ov-file#db-upgrade

Thanks to everyone who has helped develop/debug/test strfry over the past 2 years, and for all the kind words and encouragement. The nostr community rocks!

We've got a few things in the pipeline for strfry:

* strfry proxy: This will be a new feature for the router that enables intelligent reverse proxying for the nostr protocol. This will help scale up mega-sized relays by allowing the storage and processing workload to be split across multiple independent machines. Various partitioning schemes will be supported depending on performance and redundancy requirements. The front-end router instances will perform multiple concurrent nostr queries to the backend relays, and merge their results into a single stream for the original client.

* As well as scaling up, reverse proxying can also help scale down. By dynamically incorporating relay list settings (NIP-65), nostr queries can be satisfied by proxying requests to external relays on behalf of a client and merging the results together along with any matching cached local events. Negentropy will be used where possible to avoid wasting bandwidth on duplicate events.
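The merging described in the two proxy bullets above could be sketched as a dedup-by-id merge (plain Python for illustration, not strfry's actual implementation):

```python
# Sketch of the merge step a nostr reverse proxy needs: combine event
# streams fetched from several backend relays into one stream for the
# client, dropping duplicates by event id. Events are plain dicts here.

def merge_streams(*streams):
    seen = set()
    for stream in streams:
        for event in stream:
            if event["id"] not in seen:
                seen.add(event["id"])
                yield event
```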

* Archival mode: Currently strfry stores all events fully indexed in its main DB, along with their full JSON representations (optionally zstd dictionary compressed). For old events that are queried infrequently, space usage can be reduced considerably. As well as deindexing, we are planning on taking advantage of columnar storage, aggregation of reaction events, and other tricks. This will play nicely with strfry proxy, and events can gradually migrate to the archival relays.

* Last but not least, our website https://oddbean.com is going to get some love. Custom algorithms, search, bugfixes, better relay coverage, and more!

amazing updates!

nostr:note1sc3muqmr900ctc2qcf6lmalnnvswzwfu8qgkfadj2e38y3nrqk9qclda2u

Replying to jimmysong

AI vs Bitcoin

The AI hype has been non-stop for the last 2 years ever since ChatGPT came out with its 3.0 chat. Since then, there's been an insane amount of investment into AI tech from every direction. There are hundreds of startups, every tech giant has been making investments and companies in between have been putting a lot of money toward it as well. It's not a small amount, either, as the AI hardware costs make Bitcoin mining look like discount bargains.

Yet after two years, what have we to show for it? Maybe some faster image editing on newer phones? Slightly faster answers to questions you would normally ask Google? Some productivity increase among junior programmers? The investment was enormous, as can clearly be seen in NVIDIA's growth, but the results are pretty underwhelming. As with any hyped technology, the possibilities have run past the actual use.

One of the supposed benefits of fiat money is that capital accumulation is unnecessary to create real value. You can build roads, for example, without having to save up for it. What this misses are many obvious drawbacks, but one of them is that there has to be someone that evaluates whether something will create value and create the money out of thin air to fund the project. This is not just inherently centralizing, but also deeply political.

For whatever reason, AI passed this political test and got the blessing of the money printers, which, to a company that sells shovels, like NVIDIA, has been great news. But the drawback is that there are bound to be at least *some* bets that don't pan out. Maybe some segment of the economy can't use AI profitably, for example. Yet the powers that be, mostly Cantillionaires, have decided that this is worthwhile and have poured insane amounts of money into this bet.

But much like hyped tech of the past, it's looking more and more likely that there's little profit to be made here. Yes, there are some useful things that can be made, but the costs are simply too high right now to justify spending that much. It's a luxury item that most people simply don't need, and hence don't want to pay for. AI has become an expensive solution looking for costly problems to solve.

This was always my analysis with another hyped tech: blockchain. It never really made any sense, as the cost was too high for what was really just a distributed, very redundant, but hard-to-upgrade database. It, too, couldn't find costly problems to solve, with the exception of one. That, of course, being Bitcoin.

What differentiates Bitcoin from AI is that people *need* Bitcoin. It's its own killer app. AI is not so popular that people will pay for what it costs right now. And that means that most of the investment will be wasted. Like most hyped things in a fiat economy, it's doomed to have significant malinvestment.

A lot of people complain about Bitcoin businesses and how hard it is to make them profitable. In a sense, I get it. You want more people to have steady jobs and so on. But in another sense, I think this is the market speaking. You're not going to get paid from Bitcoiners easily and there's no flood of printed money looking for a place to go. At least there won't be once fiat money has run its course. Building a profitable company is hard and so few meet that mark, especially in a new segment as AI has shown.

So in that way, I'm encouraged, because the companies that survive in Bitcoin will have something truly worthwhile. By contrast, the companies that survive in AI will probably be the ones that get subsidized the longest.

you are approaching this from the wrong side. they are going after ASI. in a superintelligence world money has little use, or with the highest productivity they can do more things than money...

actually yes, my ai learns from what you post to nostr 😸