florian
b7c6f6915cfa9a62fff6a1f02604de88c23c6c6c6d1b8f62c7cc10749f307e81
building https://nostu.be and https://slidestr.net

Ohh you already stopped by the stadium?

The basic idea is that we will need a custodial solution because we may not be able to scale Lightning to 8 billion people. Using these e-cash “banks” is currently our best approach. The protocol is much more efficient and quicker than Lightning. It has the advantage that the custodians (mints) cannot identify who is paying, i.e. they can’t censor a single user. In that respect it is more private than custodial Lightning. Still, there remains a risk of rug pulls, since the underlying Lightning bitcoin is held centrally, as with any custodial solution.
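To make the privacy claim concrete, here is a toy sketch of the blind-signature trick these mints rely on. Heavy hedging applies: Cashu-style mints actually use a blinded Diffie-Hellman scheme over elliptic curves, and the classic RSA version below is just the simplest way to show why a mint can sign a token without ever learning which token it signed.

```python
# Toy Chaumian blind-signature demo -- the idea behind e-cash mints.
# Textbook RSA with tiny primes, for illustration only; real mints
# (e.g. Cashu) use a blinded Diffie-Hellman scheme on elliptic curves.
import secrets
from math import gcd

# Mint's keypair (deliberately tiny; never do this in production)
p, q = 10007, 10009
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))   # mint's private signing exponent

token = 42  # the user's secret e-cash token (a serial number)

# 1. User blinds the token with a random factor r before sending it.
while True:
    r = secrets.randbelow(n)
    if r > 1 and gcd(r, n) == 1:
        break
blinded = (token * pow(r, e, n)) % n

# 2. Mint signs the blinded value -- it never sees `token` itself.
blind_sig = pow(blinded, d, n)

# 3. User strips the blinding factor, leaving a valid signature on `token`.
sig = (blind_sig * pow(r, -1, n)) % n

# Anyone can verify against the mint's public key (n, e), yet the mint
# cannot link this signature back to the blinded request it signed.
assert pow(sig, e, n) == token
print("valid signature on an unseen token:", sig)
```

The assert at the end is the whole point: the signature checks out against the mint's public key, but the mint only ever saw the blinded value, so it can't tell which user is redeeming which token.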

Replying to Derek Ross

nostr:npub1h493d0qgwhu95s82zd9sxrt3ckn3ttgvaf04z02neckadxw5fkvqte4cwz is just a week away. I'm heading out on Monday. I'm getting all sorts of excited and anxious. Good times are ahead. I can't wait to hang out with my nostriches again and eat great food together, share some adult beverages, and have the time of our lives. I want to tag a lot of people and make a second mini hellthread of the day, but I'll save the tagging for IRL hugs in a few days.

Flying over on Sunday already. Looking forward to meeting everyone as well.

"I took the red pills cause that's what's fighting for my freedoms" ... he needs the orange pills!

https://www.youtube.com/watch?v=5h_gJlPmtrI

Replying to StarBuilder

One of the reasons I prefer to write code for computer programs rather than interacting with people is that computer programs have deterministic outputs. A person can say "yes" to something you need them to do but then do something completely different because they believe that is the right thing to do. Or they may fabricate some facts to convince themselves it is the right thing to do. You need to pick up on subtle signals to determine whether they will do what you expect or something different. With computer programs, the output is binary: either it works or it fails. When it works, I understand why it worked, and when it fails, I can figure out why and fix it. However, this logic does not apply when it comes to pretraining and fine-tuning large language models (LLMs).
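To illustrate the contrast with a toy example (made-up numbers, not any real model): a pure function gives the same answer on every run, while an LLM samples its next token from a probability distribution, so with temperature above zero the same prompt can yield different outputs.

```python
# Toy contrast: a deterministic function vs. temperature-based sampling,
# the mechanism that makes LLM outputs non-deterministic. Illustrative only.
import math
import random

def add(a: int, b: int) -> int:
    return a + b            # same inputs -> same output, every single time

assert add(2, 3) == 5       # holds on every run, on every machine

# An LLM turns scores (logits) into a probability distribution and samples
# from it; with temperature > 0, repeated runs can pick different tokens.
def sample_next_token(logits: dict[str, float], temperature: float = 1.0) -> str:
    weights = [math.exp(score / temperature) for score in logits.values()]
    return random.choices(list(logits.keys()), weights=weights, k=1)[0]

logits = {"yes": 2.0, "no": 1.5, "maybe": 1.2}        # made-up scores
print([sample_next_token(logits) for _ in range(5)])  # e.g. ['yes', 'no', 'yes', 'maybe', 'yes']
```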

We saw this play out this week with Google's Gemini. LLMs are first trained on the available public and private datasets from the internet. The first step is to feed raw, unfiltered datasets into these models; more data is considered better than good data. If you equate pretraining an LLM to manufacturing a large piece of equipment, the raw materials usually go through multiple layers of preprocessing before they are refined. Now imagine the manufacturer decided it was not worth refining the raw materials and fed all the crap and junk straight into the manufacturing process. The pretrained output of an LLM is a reflection of the data fed into it during pretraining, so if you don't like the output, you can't really blame the model.

For example, in the case of Gemini, Google most likely did not anticipate or intend the result where historical figures who should reasonably be depicted as white came out otherwise. So they injected their own instructions to mellow it down, the model took that input literally, and it started applying those instructions on its own.

The important thing is that one of the largest and most capable AI organizations in the world tried to instruct its LLM to do something and got a totally bonkers result it couldn't anticipate. This event is significant as a major demonstration that giving an LLM a set of instructions can produce results that are not at all what anyone predicted.

**As Emad Mostaque from Stability AI puts it, "Not your models, not your mind."**

While Google and OpenAI are lobbying politicians to make regulations to stop the "bad" actors, they know very well it is more to protect themselves than anything else.

I would clearly stay away from Google Gemini or any closed-source large language models from big companies and start looking at open-source models: set up your own infrastructure for fine-tuning and inference. It is better to put effort into building our own predictable open-source large language models (LLMs) as a community and contribute to building our society than to trust these idiots.
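For anyone who wants to try, here is a minimal sketch of self-hosted inference using the Hugging Face transformers library. The model name is only an example; substitute any open-weights model your hardware can run.

```python
# Minimal sketch of self-hosted inference with an open-source model via
# Hugging Face transformers. The model below is just an example of an
# open-weights model; swap in whatever fits your hardware.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # example open-weights model
    device_map="auto",                           # place weights on GPU(s) if available
)

prompt = "Explain why open-source LLMs matter, in one sentence."
result = generator(prompt, max_new_tokens=60, do_sample=True, temperature=0.7)
print(result[0]["generated_text"])
```

The same two-step recipe (download open weights, run them on infrastructure you control) applies to fine-tuning as well; nothing about the model's behavior can then be changed remotely by the vendor.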

What are your thoughts? Happy Friday and welcome to another crazy week in AI...!

I have used the Text2Speech bot to turn your article into audio. The volume seems to drop at 1:30 though ...

https://media.nostr.build/av/c1234b7ded615dacb4080b9c3baeef7f090687d6df7fdfaabb6f07494d241090.mp3

Should be the default!

nostr:note1cnkc709dw9ssf77az7ccuvlzq2fz73nf2du5eclk5ymupurwjyeq2ksz90

Depends … would probably need to leave my apartment for that 😂

nostr:npub1mz3vx0ew9le6n48l9f2e8u745k0fzel6thksv0gwfxy3wanprcxq79mymx voice quality is awesome. Will test this on some more posts.

nostr:npub1ye5ptcxfyyxl5vjvdjar2ua3f0hynkjzpx552mu5snj3qmx5pzjscpknpr I had selected "English" in the dropdown (the default was German from my browser settings), but the DVM request showed "Deutsch (German)":

The voice output was still in perfect English. Not sure if it is an actual issue, just confusing.
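For context, a hypothetical sketch of what such a DVM (NIP-90) text-to-speech request might look like on the wire, to show where the dropdown value should end up. The kind number and tag names follow common DVM conventions, but treat the exact values here as assumptions, not this bot's actual format.

```python
# Hypothetical NIP-90 text-to-speech job request (unsigned skeleton).
# Kind 5250 and the tag layout follow common DVM conventions; the
# placeholder values are assumptions for illustration only.
import json
import time

request = {
    "kind": 5250,                    # text-to-speech job request kind (DVM registry)
    "created_at": int(time.time()),
    "tags": [
        ["i", "<event-id-of-the-post>", "event"],  # input: the note to read aloud
        ["param", "language", "en"],               # what the dropdown selection should send
    ],
    "content": "",
}
print(json.dumps(request, indent=2))
```

The confusion above would come down to which value the client actually puts in that `language` param tag: the dropdown selection or the browser default.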

Also: It might be good to still show the full post if the user wants to read along while listening or look at the images. Currently you only see the audio player.

Patreon has done a great job improving their app's user experience over the last year. It now provides a coherent community for non-subscribers as well as subscribers, and that makes it easy to start following and subscribe at a later time.

We can do the same, combining the following tier (free) with subscription tiers for a seamless community experience.

"When I find myself in times of trouble" … let it #Bitcoin

nostr:note18xjsp29724l0fg4znrlen2t8cr9n8veya3xw4fr0hxpt389jvr9qmn76dm