That overlaps with Baltic Honeybadger. People are going to have to make some tough choices.
For the radio phreaks out there. Flood the FM broadcast band with your choice of music: https://www.scd31.com/posts/taylorator
Stay curious.
nostr:npub1wnlu28xrq9gv77dkevck6ws4euej4v568rlvn66gf2c428tdrptqq3n3wr question about FPPS.
Suppose you wanted to drain that pool and punish it for using the FPPS model. Is it possible for a group within the pool to mine their templates but withhold submitting a solved block so that they keep getting paid the subsidy but the pool loses revenue?
If enough of their hashrate does this then the pool goes bust unless they can manage to secure funds some other way. Or perhaps that’s already happening, and hence why FPPS pays out up to 30% less than expected (according to your findings at Ocean)?
Sorry for the newb question.
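For what it’s worth, here’s the back-of-the-envelope math I have in mind (toy Python, all the numbers are made-up assumptions, just to illustrate how an FPPS pool keeps paying per share while losing the rewards from withheld blocks):

```python
# Rough sketch of block withholding against an FPPS pool (illustrative numbers only).
# Under FPPS the pool pays hashers the expected value of every share they submit,
# even if a hasher never hands over a solved block.

BLOCK_SUBSIDY_BTC = 3.125          # current subsidy, ignoring fees for simplicity
BLOCKS_PER_DAY = 144

pool_network_share = 0.20          # assumed: pool has 20% of network hashrate
withholding_fraction = 0.30        # assumed: 30% of the pool's hashrate withholds blocks

# Expected blocks the pool's hashrate "earns" per day
expected_blocks = BLOCKS_PER_DAY * pool_network_share

# FPPS payout obligation is based on submitted shares, which withholders still send
daily_payout = expected_blocks * BLOCK_SUBSIDY_BTC

# Revenue only comes from blocks actually handed to the pool
submitted_blocks = expected_blocks * (1 - withholding_fraction)
daily_revenue = submitted_blocks * BLOCK_SUBSIDY_BTC

print(f"pool pays out   ~{daily_payout:.2f} BTC/day")
print(f"pool earns      ~{daily_revenue:.2f} BTC/day")
print(f"daily shortfall ~{daily_payout - daily_revenue:.2f} BTC")
```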
Agreed. It’s odd how it popped up everywhere all at once.
Something seemingly overlooked in all the Deepseek talk is that Google released a successor to the transformer architecture recently [1].
For anyone who doesn’t know, virtually all of the frontier AI models are based on a transformer architecture that uses something called an attention mechanism. This attention helps the model accurately pick out relevant tokens in the input sequence when predicting the output sequence.
The attention mechanism projects each token into three learned vectors (queries, keys, and values) using weights learned during training, but once training is complete those weights are frozen and the model remains static. This means that unless you bolt on some type of external memory in your workflow (e.g. store the inputs and outputs in a vector database and have your LLM query it in a RAG setup), your model is limited by what it has already been trained on.
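If it helps, here’s a minimal NumPy sketch of single-head scaled dot-product attention. The Wq/Wk/Wv matrices stand in for the learned projections; after training they’re frozen, which is why the model stays static:

```python
import numpy as np

def self_attention(x, Wq, Wk, Wv):
    """Minimal scaled dot-product self-attention (single head, no masking).

    x:           (seq_len, d_model) input token embeddings
    Wq/Wk/Wv:    (d_model, d_k) learned projection matrices, frozen after training
    """
    Q = x @ Wq                               # queries
    K = x @ Wk                               # keys
    V = x @ Wv                               # values

    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)          # relevance of every token to every other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the sequence

    return weights @ V                       # weighted mix of values: attended output

# toy usage with random "learned" weights
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 16))                 # 5 tokens, 16-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(16, 8)) for _ in range(3))
print(self_attention(x, Wq, Wk, Wv).shape)   # (5, 8)
```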
What this new architecture proposes is to add a long term memory module that can be updated and queried at inference time. You add another neural network into the model that’s specifically trained to update and query the long term memory store and train that as part of regular training.
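To be clear, this isn’t Google’s actual architecture, just a toy sketch of the general idea: a small memory module whose parameters keep getting updated at inference time while the base model’s weights stay frozen:

```python
import numpy as np

class ToyLongTermMemory:
    """Toy illustration only (not the actual Google design): a linear "memory"
    whose parameters keep updating at inference time, so new information can be
    absorbed after training is finished."""

    def __init__(self, dim, lr=0.01):
        self.M = np.zeros((dim, dim))   # memory parameters, updated online
        self.lr = lr

    def query(self, k):
        # retrieve whatever the memory currently associates with key k
        return self.M @ k

    def update(self, k, v):
        # surprise = how far the memory's prediction is from the observed value;
        # bigger surprise -> bigger update (a crude stand-in for a learned update rule)
        surprise = v - self.query(k)
        self.M += self.lr * np.outer(surprise, k)

# at inference time the frozen LLM would emit keys/values for new context,
# and the memory keeps learning from them while the base weights never change
mem = ToyLongTermMemory(dim=16)
rng = np.random.default_rng(1)
k, v = rng.normal(size=16), rng.normal(size=16)
for _ in range(200):
    mem.update(k, v)                 # repeatedly "seeing" the same fact at inference time
print(np.allclose(mem.query(k), v, atol=0.1))   # True: the memory now recalls it
```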
Where this seems to be heading is that the leading AI labs can release open weight models that are good at learning but to really benefit from them you’ll need a lot of inference time data and compute which very few people have. It’s another centralizing force in AI.
There might still be some juice left to squeeze here. A week ago Google dropped a successor to the transformer architecture that scales memory so context windows can grow significantly larger and with better performance:
If you squint, the global race for AI dominance looks a lot like competition to produce the next block.
Many independent groups with advanced chips and enormous energy budgets are all racing to produce the next output to build an economy on top of.
Data centers around the world are producing models that are the input to much of what happens in companies all over. In many cases we have the same data centers supplying both the AI and bitcoin mining markets.
This is clearly an arms race and, unlike with bitcoin, interests are not aligned. In my limited spare time I've been thinking (I'm just an npub with some relevant experience but nothing close to an expert...) about how to join these two industries. It's possible. Likely even. It might be inevitable, but we still need more capable people working on this today.
Does anyone know of projects working in this area? Ideally as a protocol. We have starting points like OpenDiLoCo [1]. #asknostr
This is how states who can’t print their own currency end up building a “strategic reserve” by squeezing coins from those who own them.
Beware of governments wanting to create reserves. Instead of pumping your bags they might just cut a hole in them.
Satoshi left his stack as a bounty for eventual quantum computers to capture. I believe this was intentional and probably the fairest way to distribute those initial coins.
The thing to watch is who creates block templates and how many are they creating. If one party creates the majority we are in deep trouble. This doesn't necessarily cause a fork but it could be real bad if we attempted a soft fork under these circumstances.
Totally agree that futures markets should operate close to the protocol in a rapid automated fashion. This leads to more liquid and more free markets.
I don't think we need to worry about public mempools becoming unreliable. We can deal with that by looking at what blocks miners are working on. Check out this proposal: https://delvingbitcoin.org/t/second-look-at-weak-blocks/805
Thanks for the interest! I think it could be a big deal too. I'm working hard to make it a reality. :)
Antpool is already constructing nearly the majority of blocks via their proxy pools [1] (not sure about the latest figures, but with Foundry growing in dominance we’re not healing regardless).
It’s encouraging to see new pools like Ocean growing and pushing template construction back to the hashers. That alone isn’t sufficient but it’s a big step in the right direction.
Really appreciate your hard work on this!
LLMs are a form of proof of work.
Machu Picchu: an ancient citadel.
Pikachu: a legendary creature capable of channeling energy to defend itself.
Bitcoin: all of the above
Block space is only valuable because sats are valuable. If pools don’t play nicely with block template construction they’ll force a fork around them that can continue indefinitely.
If banks become miners and sell block space via some futures market not tied closely to the protocol, and public mempools become unreliable, the miners will be fired by consensus. That’s why I’m excited for the work nostr:npub16vzjeglr653mrmyqvu0trwaq29az753wr9th3hyrm5p63kz2zu8qzumhgd is doing with #ehash
https://delvingbitcoin.org/t/ecash-tides-using-cashu-and-stratum-v2/870/32
Seeing lots of talk about DeepSeek so I want to add my 2 sats.
It’s an impressive model and likely to be part of a longer series of even more impressive releases from that group. I’m happy to see it open sourced.
That being said, training an LLM via reinforcement learning can lead to dangerous results if the reward mechanism isn’t crafted carefully. Part of the reason the big players are relatively slow to make big leaps is because they’re focused on alignment [1].
As an analogy: yes you can get to your destination faster if you drive 4x the speed limit, but we put speed limits in place to limit the number and severity of accidents. (I’m a hypocrite here because I’m a speed demon)
👀 nostr:note1scazw9jax92een2nfjz90n8flyta5lufppzvdjutkkws5wsnrpaqhp3e5n
“What does bitcoin do?!”
“It can’t do the job. It’s not money.”
Painful to watch. Hard to imagine a free, open market digital network worth over $2T isn’t meeting some demand. Maybe he’s just smarter than everyone else.
The centralization doesn’t just come from having to purchase more storage space. That’s part of it, sure, but there’s much more to it.
For one thing, if the blocks were significantly bigger it would take much longer to sync the chain and we’d have fewer people willing to wait. Adversarial actors at the ISP level would also be more likely to detect bitcoin traffic and throttle or drop the packets entirely. It also takes more computational resources to verify larger blocks, so CPU could become a bottleneck.
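A rough sketch of just the bandwidth side of that (all numbers are made-up assumptions, and it ignores verification entirely):

```python
# Back-of-the-envelope: how initial block download scales with block size.
# Assumed numbers for illustration only, not measurements of the real chain.

BLOCKS_PER_YEAR = 144 * 365

def sync_estimate(block_size_mb, years_of_history, download_mbps=50):
    chain_gb = block_size_mb * BLOCKS_PER_YEAR * years_of_history / 1024
    # bandwidth alone, ignoring verification: GB -> megabits, then seconds -> days
    download_days = (chain_gb * 1024 * 8) / download_mbps / 86400
    return chain_gb, download_days

for size_mb in (2, 8, 32):
    gb, days = sync_estimate(size_mb, years_of_history=10)
    print(f"{size_mb:>3} MB blocks: ~{gb:,.0f} GB of history, ~{days:.1f} days just to download")
```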

