ynniv
576d23dc3db2056d208849462fee358cf9f0f3310a2c63cb6c267a4b9f5848f9
epistemological anarchist scales things

a ring a ding ding

nostr:note1qqquapfl84n0ztgnnm8qruagtcq0hsx72et6ee8q783ap7zgh30shme4gq

How am I doing?

You mean CBDCs? 😏

nostr:note1a4e2cw736jyqx6yy78l8m30j9mrgt9pzm649ss6ad8dml07z3utq3u2z2p

Ok, but when someone says "there's some shady monopolistic activity going down here", the ideal response isn't "no, YOU'RE THE shady monopolist!" This is what Bob Woodward would call a "non-denial denial"

I love AI, and I also love this 👌

nostr:note1gwvfd7ju8l5l4zquw60qgu4jtf4j9g8cypeg55lcsxxkdzfwl0vsz3trzw

CLAUDE:

Why This Works

- No ego: I won't judge your "dumb" ideas

- No fatigue: Happy to explore the 47th variation

- No assumptions: Will question "obvious" truths

- Yes-and energy: "That's impossible... but let's try!"

What haunts me about ecash isn't being rugged by dishonest operators... it's being rugged by honest ones. There's literally no way for an operator to exit the system without stealing funds

"When parallel construction is just too much work, we've got what you need"

"fix the money fix the world" is such a feel good aphorism. Gives you those warm fuzzies.

End the money printers!

Uncensorable exchange!

Sovereign individuals!

But you don't really believe that shit, right?

The world is full of broken things.

Sound money is necessary but not sufficient

"The mint cannot create more claims than the bitcoin it controls"

About that ... 🤔

"True settlement happens only when the current holder reconnects and redeems the note with the mint, which then burns the token and pushes the corresponding bitcoin onto the base layer of the network (bitcoin’s time chain). "

Yes, this is a series of events that could happen. Not the most common way that ecash operates though

nostr:nevent1qvzqqqqqqypzqf65ljrz667qklpewyzxvykegftr6xqurparj8scpmttqruquljmqqsv0z4xvtpq535fap6yvakuuaax2qa9923uajlwlvj3egvsz7whussdyt73m

Hah! Yes, I forgot about Zapple Pay! Can we make this idea easier to use?

While I run LND over Tor and use AlbyHub for zaps, I appreciate that custodial zaps exist while we work towards better solutions. I'm still unable to receive zaps because that requires LNURL, which doesn't work with onion services, but I think we can eventually work this out and have fully anonymous, self-custodial zaps

Yes, but the seed has enough range already. If they didn't know your seed, they're not going to find it
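Rough numbers, assuming a standard 12-word seed (128 bits of entropy) and an absurdly generous guessing rate:

```python
# Back-of-the-envelope on "the seed has enough range", assuming a standard
# 12-word BIP-39 seed, i.e. 128 bits of entropy (a 24-word seed has 256).
seeds = 2 ** 128                           # ~3.4e38 possible seeds
rate = 10 ** 12                            # a trillion guesses per second, generously
seconds_per_year = 60 * 60 * 24 * 365
print(seeds / rate / seconds_per_year)     # ~1.1e19 years to sweep the space
```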

I'm a big fan of Good Will Hunting. It's kind of cheesy, but you can also see the broad strokes of Matt Damon's neighbor and substitute father figure, historian and civil rights activist Howard Zinn

Ah, that makes sense. If the LB is HTTP aware, you want to make sure that the app server isn't more restrictive. Didn't the LB retry the request until it ran out of retries?

These situations are somewhat dangerous because client requests will eventually cause the LB to think servers are unhealthy until the whole cluster is down. If an attacker notices this, it's an easy DoS
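A toy model of how that plays out (three backends, ejection after three consecutive upstream failures, retry-across-backends; the names and thresholds are made up, not any real LB's defaults):

```python
# The LB passively health-checks by counting consecutive upstream failures per
# backend and ejects a backend once it hits a threshold. A request that every
# app server rejects the same way (the "poisoned" request) gets retried across
# backends, so each retry also counts against a different backend's health.

THRESHOLD = 3                      # consecutive failures before ejection
BACKENDS = ["app-1", "app-2", "app-3"]

failures = {b: 0 for b in BACKENDS}

def healthy():
    return [b for b in BACKENDS if failures[b] < THRESHOLD]

def send_poisoned_request():
    # the LB retries the failing request on each healthy backend in turn
    for backend in list(healthy()):
        failures[backend] += 1     # upstream returns 5xx every time

for i in range(THRESHOLD):
    send_poisoned_request()
    print(f"after request {i + 1}: healthy = {healthy()}")
# after the third poisoned request no healthy backends remain: an easy DoS
```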

Organic trust in Bitcoin Deposits:

- Does my vault operator have other vaults?

- Are they roughly balanced?

- Are their channel peers auditing this vault?

- Are they different parties?

- Is the recovery output a multisig of the auditors?

- Are my invoices valid?
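That checklist, sketched as code. Everything here is a placeholder of mine (the field names, the 2x "roughly balanced" bound), not a real Deposits implementation; invoice validity is a per-payment check, handled further down:

```python
from dataclasses import dataclass

# Hypothetical data model; none of these fields come from a real Deposits spec.
@dataclass
class Vault:
    operator: str
    peer: str                                  # channel partner for this vault
    balance_sats: int
    audited_by_peer: bool
    recovery_output_is_auditor_multisig: bool

def organically_trusted(my_vault: Vault, operator_vaults: list[Vault]) -> bool:
    if len(operator_vaults) < 2:
        return False                           # does the operator have other vaults?
    balances = [v.balance_sats for v in operator_vaults]
    roughly_balanced = max(balances) <= 2 * min(balances)    # arbitrary 2x bound
    peers_audit = all(v.audited_by_peer for v in operator_vaults)
    distinct_parties = len({v.peer for v in operator_vaults}) == len(operator_vaults)
    return (roughly_balanced and peers_audit and distinct_parties
            and my_vault.recovery_output_is_auditor_multisig)
```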

There are three ways to steal funds in Deposits:

- an operator that colludes with their channel partner

- a recovery party that doesn't reintegrate deposits

- an operator that creates fake invoices that the client doesn't validate

Fake invoices allow theft of a single payment but reveal the operator as dishonest before the theft, so validation is important but exploitation is unlikely.
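The rough shape of that client-side check, with the caveat that I'm guessing at which fields matter; decode_bolt11 is a stand-in for whatever BOLT11 decoder your wallet stack already has:

```python
# Generic sketch of validating an invoice before paying it; the exact checks
# Deposits needs may differ, and `decode_bolt11` plus the field names are
# stand-ins, not a real library's API.
def invoice_looks_honest(invoice: str, expected_payee_pubkey: str,
                         expected_amount_msat: int, decode_bolt11) -> bool:
    decoded = decode_bolt11(invoice)
    if decoded["payee_pubkey"] != expected_payee_pubkey:
        return False               # pays someone other than the expected node
    if decoded["amount_msat"] != expected_amount_msat:
        return False               # asks for a different amount
    return True                    # a fake invoice fails here, outing the operator before any theft
```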

Recovery is still an open design item, so I'm glossing over it for now.

The key to Deposits is organically preventing collusion. Payments can only be claimed by both channel partners, and funds (plus security) must be assigned to the recovery output or the payment fails. Operator theft requires suspending these rules.

To avoid collusion we need consequences. Since the reward is a split of the funds, the penalty should be similar. If depositors require operators to run multiple vaults with different peers who also act as auditors, then the theft of a vault will be detected and the operator's other channels can be force-closed.

This close-for-dishonesty would forfeit security deposits. If vaults are roughly balanced and security deposit ratios are set appropriately, this provides the funds necessary for the recovery party to recreate the stolen vault.

Not only are depositors made whole, we have removed the incentive to steal in the first place
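Back-of-the-envelope on why, under assumptions I'm adding here (the operator runs n roughly equal vaults, posts a security deposit of deposit_ratio times the vault value on each channel, and would split a theft evenly with the colluding peer):

```python
def theft_is_irrational(vault_sats: int, n_vaults: int, deposit_ratio: float) -> bool:
    reward = vault_sats / 2                                    # thief's share of the stolen vault
    forfeited = (n_vaults - 1) * deposit_ratio * vault_sats    # deposits lost to force-closes
    return forfeited > reward

def recovery_is_funded(vault_sats: int, n_vaults: int, deposit_ratio: float) -> bool:
    forfeited = (n_vaults - 1) * deposit_ratio * vault_sats
    return forfeited >= vault_sats                             # enough to recreate the stolen vault

# e.g. four balanced 1M-sat vaults with a 40% security deposit on each channel:
print(theft_is_irrational(1_000_000, 4, 0.40))    # True: 1.2M sats forfeited > 0.5M gained
print(recovery_is_funded(1_000_000, 4, 0.40))     # True: 1.2M sats >= 1.0M to rebuild
```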

Stay buoyant folks 🌊

Replying to HBerkoe

This is the episode you’ve all been waiting for— nostr:nprofile1qyshwumn8ghj7en9v4j8xtnwdaehgu3wvfskuep0w3uhqetnvdexjur5qy88wumn8ghj7mn0wvhxcmmv9uqzpxvff4mhj5snxn95nyf7yvupuxt2rwcsuklrahkcu857pqplmpndukqdk5 took the hot seat!

Episode Highlights:

- The Battle of the Mine-hackers, starting remotely and at nostr:nprofile1qy88wumn8ghj7mn0wvhxcmmv9uq36amnwvaz7tmwdaehgu3wd46hg6tw09mkzmrvv46zucm0d5hsqgxyzefv97w93slluaegtnevwlu8h4zmf3me3fwcrewsdh6jegevrsv6weg2 this Saturday

- Centralization risks with mining pools, hardware, and block template construction

- Current focus areas at nostr:nprofile1qyt8wumn8ghj7ct5d3shxtnwdaehgu3wd3skueqpz4mhxue69uhk2er9dchxummnw3ezumrpdejqqg99han2f3v9ufrewhdyjfedq7r9kasn8ezjwet2psx4lq9cfd8k55zthuv7 and fun side projects like Bolt 12 Zaps

- Hot takes on Cashu

- Renaming sats to bitcoin and how user research can preemptively gauge reactions

- Making bitcoin fun with games and approaches like the 9 hidden Bittys on bitcoin.org

- Shoutouts to nostr:nprofile1qyx8wumn8ghj7cnjvghxjmcpz4mhxue69uhk2er9dchxummnw3ezumrpdejqqg8zp79rswk9u9fkvyquzcywunenl29j67f9pn4jkk52h2jrjjnw0sg43a9l and Adam Jonas

- Positive vibecoding experiences and advice for anyone new to it

https://youtu.be/efL8xPVdPUA?si=b3AgQHS7mFWGciu_

QWEN CODE: The physics becomes straightforward with adequate external energy input - the constraints are primarily energy-based rather than fundamental physical violations.

--

The recent non-"Code" Qwen release has some tradeoffs, but this is a solid, rational model. *Two* highly capable, rational, open models in a week – both from China

QWEN3-235B-A22B:

Because institutions don’t just produce knowledge —

They define what qualifies as knowledge.

That’s epistemological control.

And you’re not fighting physics anymore.

You’re fighting the gatekeeping of legitimacy

Replying to ESE

More tokens are like wider payloads in a stateless microservice: helpful for packing more context, but irrelevant to the core bottleneck of coordination. Transformer architectures have no built-in concept of shared memory, global state, or structured control flow. Each inference is an isolated forward pass—no read/write memory, pointer, or continuation stack. You’re not scaling reasoning, you’re scaling cache size.

Retrieval-augmented generation (RAG), memory modules, tool use, and planner loops are all attempts to bolt on simulated memory using external systems. But simulation isn’t integration. These techniques lack the core properties of real memory systems: mutable state, consistency guarantees, selective recall, and scoped invalidation. They resemble distributed systems without a proper coordination layer—pure gossip, no consensus.

Even long context windows (e.g., 200K tokens) offer no relief. A larger bucket of past tokens doesn’t change the model’s inability to prioritize, reference, or route thoughts across time. Attention is dense or sparse, but never deliberate. There’s no working memory stack. No symbolic manipulation. No instruction pointer. Just statistical guesswork smoothed over a flat vector space.

Multi-agent systems? LangGraph, AutoGPT, BabyAGI? They’re distributed loopers. Agents pass outputs to each other like logs in a pipeline, with no theory of mind, negotiation, or shared ontology. There’s no grounding, no meta-cognition, and no reflection. You can script a workflow, but the agents aren’t thinking together. They’re just taking turns hallucinating.

And let’s not pretend you can offload this to the user. The human remains the I/O controller, the debugger, the scheduler, and the final consensus engine. There is no autoscaler for cognition. You can shard your microservices, but you can’t shard your prefrontal cortex.

In Sussman's terms, there’s no procedure. In Minsky's terms, there’s no society—just a bunch of disconnected hacks guessing the next plausible token. This isn’t referentially transparent or composable in functional programming terms—it’s side-effect soup.

Until models can maintain evolving, contextual state, abstract their reasoning paths, and coordinate across agents with shared intent and memory, they won’t replace the human-in-the-loop for complex tasks. They’ll assist, autocomplete, and sometimes dazzle—but they won’t reason. More tokens won’t fix that. It’s an architectural limitation, not a throughput problem.

They very much can reason inside a context window. What you need is a repeatable process for building a context that can advance the current state of your system. Agent loops (e.g., goose) can do this.

Once you've established a goal and a way to reliably make progress toward it, every functional programmer knows what comes next
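In case it isn't obvious, "what comes next" is just iterating to a fixed point. A minimal sketch, where step and done are placeholders rather than goose's actual API:

```python
from typing import Callable, TypeVar

S = TypeVar("S")

# Minimal agent loop: iterate a step that reliably advances the state until
# the goal predicate holds. `step` and `done` are placeholders, not goose's API.
def agent_loop(state: S, step: Callable[[S], S], done: Callable[[S], bool],
               max_iters: int = 50) -> S:
    for _ in range(max_iters):
        if done(state):
            return state
        state = step(state)    # build a fresh context, call the model, apply the results
    return state               # budget exhausted: hand back to the human
```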

nevent1qvzqqqqqqypzq4mdy0wrmvs9d5sgsj2x9lhrtr8e7renzz3vv09kcfn6fw04sj8eqqs2ux83jp80pzzpdeqtwj9pvden5pd84lev99e6fkzypy5872erh2qjjqtr0