I’m glad the occasional demand spike for the Bitcoin network, including via ordinals and runes 💩, sparks L2 scaling development on Bitcoin.
I’m glad #monero exists to put a bit more privacy pressure on Bitcoin.
Blocks like this fill my heart with joy. #privacy #coinjoin

Wouldn’t that actually help Harris in the election? Assuming running as a third-party candidate takes more votes from Trump than from Harris.
In the meantime, set your Wasabi wallet to connect to one of the other coordinators. https://wabisator.com/
If you had to bet, who’s going to blink first?
OpenAI + large LLM corps (open source AI keeps up in capability, forcing the OpenAI et al. business model to be unprofitable and unsustainable)
or
Open source AI devs (compute hardware requirements become too demanding and/or regulations make open source dev too legally risky, so they throw in the towel)
California is trying to place liability for AI misuse on model developers. This would essentially ban open source. https://a16z.com/sb-1047-what-you-need-to-know-with-anjney-midha/
This horror short with AI generated scenes is pretty good! https://youtu.be/e2RT79khjvI?si=Fs_SuRyVw-aD2E_-
They probably would, but being able to run the most capable models locally every couple of years would still be huge.
Two of them have more followers than me. Embarrassing for either myself or Nostr.
Meh, I’m still not convinced that LN’s problems have to be solved by scrapping it and replacing it with new blockchains/sidechains. We’ll see.
I wouldn’t rule out consumer LLM-focused cards with tons of VRAM.
Or open source breaks via successful lobbying by OpenAI to effectively ban it with onerous KYC regulations and training data restrictions (you get sued if your model can make naked Taylor Swift, even via fine-tuning).
With open source models like Llama 3.1 somewhat keeping up, OpenAI won’t be able to charge enough for their slightly better performance vs open source to be profitable. Something has to break. Either they develop a secret sauce that can’t be replicated without massive compute for even inference (open source breaks), or they throw in the towel and open source themselves or shut down (OpenAI breaks).
I’m relieved that we live in a time when left bell curves like myself can be made whole with AI.
I would think that if forest critters could eat them then we could too, but I wouldn’t test it. 😅
With 64GB RAM I’d guess you could locally run the 70B version just fine with a 4000 series Nvidia GPU. Not sure about the 405B version. Supposedly the 5000 series GPUs are releasing beginning of 2025. I wonder if they will have LLMs in mind in their design.
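For a rough sense of why 64GB might handle the 70B model but not the 405B one, here's a back-of-envelope memory estimate. This is a sketch, not a measured benchmark: it assumes weights dominate memory use and adds an assumed 20% overhead for the KV cache and activations.

```python
def estimate_gb(params_billions: float, bytes_per_param: float,
                overhead: float = 0.2) -> float:
    """Rough memory needed to run a model, in GB.

    Weights: params * bytes per parameter (1B params at 1 byte ~ 1 GB).
    Overhead: assumed 20% extra for KV cache and activations.
    """
    return params_billions * bytes_per_param * (1 + overhead)


# Common precisions: fp16 = 2 bytes/param, int8 = 1, 4-bit quant = 0.5
for name, params in [("8B", 8), ("70B", 70), ("405B", 405)]:
    for precision, bpp in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
        print(f"Llama 3.1 {name} @ {precision}: ~{estimate_gb(params, bpp):.0f} GB")
```

By this estimate, the 70B model at 4-bit quantization needs on the order of 40GB, which is why 64GB of RAM plus GPU offload is plausible, while the 405B model needs hundreds of GB even heavily quantized and stays out of reach for a single desktop.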
I'm trying to understand how to replace my not-very-private but useful ChatGPT 4.0 subscription with Llama 3.1.
ChatGPT translated the system requirements for https://llama.meta.com/ into slightly less confusing versions of "your beefy desktop is by far not enough".
So if I still need a compute cluster that would sit idle 99% of the time if I ran it just for myself, I'm kind of back at square one. I'd have to find a way to share these resources efficiently and privately.
Where can I use a powerful AI in a privacy preserving way? I want to pay with eCash and use it via Tor without any email or other accounts attached.
What are the system requirements for the 405B and 70B versions? Having trouble parsing the website myself.

