Tim Bouma
06b7819d7f1c7f5472118266ed7bca8785dceae09e36ea3a4af665c6d1d8327c
| Independent Self | Pug Lover | Published Author | #SovEng Alum | #Cashu OG | #OpenSats Grantee x 2| #Nosfabrica Prize Winner

Yup. I discovered I live on unceded territory. And the Supreme Court, which made the favourable rulings, has declared itself the highest authority in the land - independent of and above elected government.

Unfortunately, many legal systems conclude otherwise. For example, in Canada you don't actually 'own' property - it is a right granted, and revocable, by the Crown. People in the Province of British Columbia are waking up to the nasty surprise that they don't actually own their land, so they can't get mortgage renewals.

It's the other way around. Control is a fact. Ownership is a conclusion. The EU lawmakers have gotten this exactly backwards and that's why they are getting into trouble.

Law follows control - not control follows law.

Sharing a photo from my snowshoeing trip a couple of days ago.

As a kid, I remember ‘slop’ as ‘pig slop’ - the leftover food and edible garbage you fed to pigs that would eventually land back on your plate as bacon.

The quote below is from Andrej Karpathy.

Funny thing - I feel the complete opposite.

--------------------

"I've never felt this much behind as a programmer. The profession is being dramatically refactored as the bits contributed by the programmer are increasingly sparse and between. I have a sense that I could be 10X more powerful if I just properly string together what has become available over the last ~year and a failure to claim the boost feels decidedly like skill issue. There's a new programmable layer of abstraction to master (in addition to the usual layers below) involving agents, subagents, their prompts, contexts, memory, modes, permissions, tools, plugins, skills, hooks, MCP, LSP, slash commands, workflows, IDE integrations, and a need to build an all-encompassing mental model for strengths and pitfalls of fundamentally stochastic, fallible, unintelligible and changing entities suddenly intermingled with what used to be good old fashioned engineering. Clearly some powerful alien tool was handed around except it comes with no manual and everyone has to figure out how to hold it and operate it, while the resulting magnitude 9 earthquake is rocking the profession. Roll up your sleeves to not fall behind."

‘Architect first, then engineer.’

This is the biggest lesson I've learned over the years. If you spend time understanding how things hang together, the engineering comes naturally. If you do it the other way around, you build Rube Goldberg machines. I've built my share of those, so I now spend time up front architecting, or refactor as soon as I find a better architecture.

Replying to Avi Burra

Merry Christmas nostr!

Join me for episode 143 of nostr:nprofile1qqs2m82zyqlayjqw5tjuf3j9jwszwuy2a03tq24xp0tmr4nxm2jmprgpr4mhxue69uhkummnw3ezucnfw33k76twv4ezuum0vd5kzmp0qy2hwumn8ghj7un9d3shjtnyv9kh2uewd9hj7px8dte tomorrow, December 26 at 1600 UTC (1pm ET)

My guest this week can forget more about podcasting 2.0 in a day than most can remember in a lifetime – the bridge builder himself, nostr:nprofile1qqs00y32ptdnlfxa5hhv4f30dalwv9vl0a27pqpkdpkx3cyrstp50zqpg3mhxue69uhnwumjwgmkx6revvm8vmrg0fcxxvngdsmxc7t4denhvmr4da585undwsmnv6mzwv6xkmtev358y7r0v94kkcn3w4skgtnvda3kzmqpz3mhxue69uhhyetvv9ujuerpd46hxtnfdul9z04a

Stay tuned for the v4v music segment – another hidden gem lined up. This show will be the first episode streamed live to nostr from my sovereign zap.stream rig 👀

Listening…

“We become what we behold. We shape our tools, and thereafter our tools shape us.”

Marshall McLuhan

Replying to lkraider

I had my AI counter-argument bot cross-check me:

A lot of the skepticism you summarized is directionally right (the hype is real; “practical advantage next year” claims are usually marketing), but several of your strongest-sounding bullets rely on shaky premises or outdated “folk numbers.” If you want an intellectually honest view, you end up in an uncomfortable middle: quantum computing is not a “scam,” but it is also not close to the grandiose promises investors were sold.

Here are key assumptions in your writeup that could be false:

• “No progress on factorization in 20 years.” The algorithms (Shor) are old, yes, but the engineering resource estimates and compilation techniques have improved a lot. For RSA-2048 specifically, Gidney+Ekerå’s well-known estimate was ~20 million physical qubits under explicit assumptions, and in 2025 Gidney published a new estimate claiming “less than a million noisy qubits” (still with demanding assumptions and still far beyond today). 

• “Millions of physical qubits per logical qubit” as a fixed rule. Overhead depends on physical error rates, code choice, connectivity, and what you’re trying to do (memory vs T gates, etc.). IBM is explicitly arguing for qLDPC-style paths that reduce qubit overhead compared to surface-code baselines, at least for some components (e.g., memory). 

• “NISQ can only do contrived demos.” Many “supremacy/advantage” tasks are contrived, yes, but the real question is whether error-corrected logical qubits can scale with improving logical error as you increase code distance. There are now peer-reviewed results explicitly about being “below threshold” (the regime you must be in for scalable fault tolerance). 
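To make the "Shor threatens RSA in principle" point concrete: the quantum computer's only job in Shor's algorithm is order finding; everything else is classical number theory. Here is a toy sketch where the order is found by brute force (exactly the part a cryptographically relevant quantum computer would replace) - function names are mine, for illustration only:

```python
from math import gcd

def order(a, n):
    """Brute-force the multiplicative order r of a mod n.
    This is the step Shor's algorithm delegates to quantum period finding."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical(n, a):
    """Shor's classical post-processing: given the order r of a mod n,
    gcd(a^(r/2) - 1, n) yields a nontrivial factor when r is even and
    a^(r/2) is not -1 mod n."""
    if gcd(a, n) != 1:
        return gcd(a, n)   # lucky guess: a already shares a factor with n
    r = order(a, n)
    if r % 2 == 1:
        return None        # odd order: retry with a different a
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None        # trivial square root: retry with a different a
    return gcd(y - 1, n)

print(shor_classical(15, 7))  # → 3 (since 15 = 3 * 5)
```

The brute-force `order` loop is exponential classically; the whole debate above is about how many physical qubits, and how much error correction, it takes to do that one step quantumly for 2048-bit n.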

Now the strongest opposing expert viewpoint (steelman), in plain terms:

Quantum computing is an engineering program to build a fault-tolerant machine, and the physics is not in serious doubt: we already can create, control, and measure multi-qubit entangled systems; the hard part is driving logical error down faster than system size grows. The most credible “this is real” evidence is not qubit counts or sampling stunts, but demonstrations of error correction improving as redundancy increases (crossing into “below-threshold” behavior). 
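Why "below threshold" is the metric that matters can be sketched with the standard surface-code heuristic, where logical error per round scales roughly as A * (p/p_th)^((d+1)/2) for physical error rate p, threshold p_th, and code distance d. The constants below are illustrative placeholders, not measured values:

```python
def logical_error_rate(p_phys, d, p_th=0.01, A=0.1):
    """Heuristic surface-code scaling of logical error per round:
    p_L ~ A * (p/p_th)^((d+1)/2). Below threshold (p < p_th), each
    increase in code distance d suppresses p_L multiplicatively;
    above threshold, adding qubits makes things worse."""
    return A * (p_phys / p_th) ** ((d + 1) // 2)

below = [logical_error_rate(0.001, d) for d in (3, 5, 7)]
above = [logical_error_rate(0.02, d) for d in (3, 5, 7)]
print(below)  # shrinks ~10x per distance step: error correction is winning
print(above)  # grows ~2x per distance step: error correction is losing
```

This is why "demonstrations of error correction improving as redundancy increases" are the credible evidence, and raw qubit counts are not: the same hardware on the wrong side of p_th gets worse as it scales.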

Where your skepticism is most justified (real failure modes):

• Incentives are misaligned. Startups and even big labs market “number of qubits” because it’s legible, while the actually predictive metrics (two-qubit gate fidelity distributions, leakage, correlated noise, cycle time, error model stability, decoding latency, logical error per round, etc.) are harder to sell.

• Scaling is brutal in ways that don’t show up in demos: cryogenics, wiring, calibration at scale, correlated noise, fabrication yield, and error decoding/control stacks. Even if the math works, the “systems engineering tax” can kill timelines.

• “Break RSA” headlines are routinely abused. Shor threatens RSA/ECC in principle, but the relevant question is “cryptographically relevant quantum computer” (CRQC) timelines under realistic constraints, not toy factoring demos.

On “is it a scam?”: calling it a scam conflates three different things.

1. Quantum computing as a scientific/engineering field: not a scam. There are serious roadmaps and serious peer-reviewed milestones in QEC. 

2. Quantum computing marketing: often hypey, sometimes misleading, occasionally outrageous. Even pro-quantum voices have publicly complained about hype incentives. 

3. “Quantum” used as a brand for fraud (e.g., “quantum trading bots”): that’s a separate bucket (actual scams) and shouldn’t be used to judge the underlying field. 

A decision framing that avoids hero narratives:

• Option A: Do nothing / ignore QC entirely. Rational if you’re evaluating it as a near-term product category.

• Option B: Treat QC as long-horizon R&D, and demand falsifiable milestones (logical qubits with improving logical error; small fault-tolerant demonstrations; resource estimates with explicit assumptions).

• Option C: Act only where the downside is asymmetric: migrate cryptography to PQC because migration lead times are long and “harvest now, decrypt later” is real even if CRQCs are years out. This is exactly why NIST finalized PQC standards and publishes migration guidance. 

My best recommendation (if you’re an operator/investor/strategist): be skeptical of “revolution soon,” but don’t be skeptical of the threat model. Concretely: ignore qubit-count press releases, track error-correction milestones, and in security planning treat PQC migration as a project you start now, not later. 

Now I’ll argue hard against my own recommendation:

It’s easy to overreact and waste money “quantum-proofing everything” prematurely, especially if your data has short confidentiality lifetime. Also, PQC migration itself has costs and risks (performance, protocol changes, interoperability bugs, long-term confidence in new assumptions). If your organization cannot do crypto-agility well, a rushed migration can reduce security in the near term. The sane middle is staged migration driven by data lifetime and dependency mapping, not panic.

If you want to rewrite your critique into small, testable steps with kill-criteria (so it’s not just vibes):

• Step 1 (kill-criteria: vendor can’t answer): Require any “progress” claim to be expressed as (a) logical error per round vs code distance, (b) syndrome extraction cycle time, (c) error correlations/leakage handling, (d) decoding latency budget. If they only talk “qubits,” downgrade credibility.

• Step 2 (kill-criteria: handwaving assumptions): For “break RSA” claims, demand a resource estimate with explicit physical assumptions (gate error, cycle time, connectivity, decoding, distillation). Compare against conservative published estimates like Gidney’s 2019 and 2025 papers. 

• Step 3 (kill-criteria: no security inventory): If this is about security, do a cryptographic inventory first. If you can’t locate where RSA/ECC are used, “quantum risk” talk is theater. NIST’s transition logic is basically “start before X+Y exceeds Z.” 
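The "start before X+Y exceeds Z" logic is Mosca's inequality, and it is simple enough to make executable. The numbers below are hypothetical inputs for illustration, not forecasts:

```python
def mosca_risk(migration_years, shelf_life_years, crqc_eta_years):
    """Mosca's inequality: if the time to migrate to PQC (X) plus the
    required confidentiality lifetime of the data (Y) exceeds the
    estimated time until a cryptographically relevant quantum computer
    (Z), then data encrypted today is exposed to harvest-now,
    decrypt-later - so migration must start now."""
    return migration_years + shelf_life_years > crqc_eta_years

# Hypothetical scenarios:
print(mosca_risk(5, 10, 12))  # True: 5 + 10 > 12, start migrating
print(mosca_risk(2, 3, 20))   # False: short-lived data, less urgency
```

Note the asymmetry: the decision turns on your own X and Y (which you can measure via the cryptographic inventory in Step 3), not on anyone's confident prediction of Z.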

One last note on your sources: Reddit threads and polemical blogs are useful for stress-testing narratives, but they’re not good evidence. If you want “strong doubts” that survive scrutiny, look at skeptics who argue from physics/complexity constraints (e.g., Kalai’s “we may hit a wall” position) rather than pure finance/hype rhetoric. 

Thanks for this.

Didn’t see that one coming…

--------------------

Beijing Worries AI Threatens Party Rule

BY STU WOO

The Wall Street Journal

Dec 26, 2025

China is enforcing strict guidelines to make sure chatbots don’t misbehave

Concerned that artificial intelligence could threaten Communist Party rule, Beijing is taking extraordinary steps to keep it under control.

Although China’s government sees AI as crucial to the country’s economic and military future, regulations and recent purges of online content show it also fears that AI could destabilize society. Chatbots pose a particular problem: Their ability to think for themselves could generate responses that spur people to question party rule.

In November, Beijing formalized rules it has been working on with AI companies to ensure their chatbots are trained on data filtered for politically sensitive content, and that they can pass an ideological test before going public. All AI-generated texts, videos and images must be explicitly labeled and traceable, making it easier to track and punish anyone spreading undesirable content.

Authorities recently said they removed 960,000 pieces of what they regarded as illegal or harmful AI-generated content during three months of an enforcement campaign. Authorities have classified AI as a major potential threat, adding it alongside earthquakes and epidemics to its National Emergency Response Plan.

Chinese authorities don’t want to regulate too much, people familiar with the government’s thinking said. Doing so could extinguish innovation and condemn China to second-tier status in the global AI race behind the U.S., which is taking a more hands-off approach toward policing AI.

But Beijing also can’t afford to let AI run amok. Chinese leader Xi Jinping said earlier this year that AI brought “unprecedented risks,” according to state media.

There are signs that China is, for now, finding a way to thread the needle.

Chinese models are scoring well in international rankings, both overall and in specific areas such as computer coding, even as they censor responses about the Tiananmen Square massacre, human rights and other sensitive topics. Major American AI models are mainly unavailable in China.

It could become harder for DeepSeek and other Chinese models to keep up with U.S. models as AI systems become more sophisticated.

Researchers outside China who have reviewed both Chinese and American models also say that China’s regulatory approach has some benefits: Its chatbots are often safer by some metrics, with less violence and pornography, and they are less likely to steer people toward self-harm.

“The Communist Party’s top priority has always been regulating political content, but there are people in the system who deeply care about the other social impacts of AI, especially on children,” said Matt Sheehan, who studies Chinese AI at the Carnegie Endowment for International Peace.

But he added that recent testing shows that compared with American chatbots, Chinese ones queried in English can also be easier to “jailbreak”—the process by which users bypass filters using tricks, such as asking AI how to assemble a bomb for an action-movie scene.

“A motivated user can still use tricks to get dangerous information out of them,” he said.

When AI systems train on content from the Chinese internet, it is already scrubbed as part of China’s so-called Great Firewall, the system Beijing set up years ago to block online content it finds objectionable. But to remain globally competitive, Chinese companies also incorporate materials from foreign websites, such as Wikipedia, that address taboos such as the Tiananmen Square massacre.

Developers of ChatGLM, a top Chinese model, say in a research paper that companies sometimes deal with this issue by filtering sensitive keywords and webpages from a pre-defined blacklist.

But when American researchers downloaded and ran Chinese models on their own computers in the U.S., much of the censorship vanished. Their conclusion: While some censorship is baked into Chinese AI models’ brains, much of the censorship happens later, after the models are trained.

Chinese government agencies overseeing AI didn’t respond to requests for comment.

American AI companies also regulate content to try to limit the spread of violent or other inappropriate material, in part to avoid lawsuits and bad publicity.

But Beijing’s efforts—at least for models operating inside China—typically go much further, researchers say. They reflect the country’s longstanding efforts to control public discourse.

Shared via PressReader


I agree with you, but I need a very specific refutation.

Thanks. This was shared by a cryptography professor I respect. But he is saying that traditional cryptography is pretty much broken due to Shor’s and Grover’s algorithms. I don’t actually believe that, but it is tough to refute.
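One specific refutation worth having at hand: Shor breaks RSA/ECC in principle, but Grover only gives a quadratic speedup against symmetric ciphers and hashes - it halves the security exponent rather than breaking anything, so doubling key sizes restores the margin. A toy sketch (the helper is mine, illustrative only):

```python
def effective_bits(key_bits, quantum=False):
    """Brute-force cost in bits of security. Grover's algorithm needs
    ~2^(n/2) quantum queries instead of ~2^n classical trials, so it
    halves the exponent - a far cry from Shor's exponential break of
    RSA/ECC, and countered by simply using longer symmetric keys."""
    return key_bits // 2 if quantum else key_bits

print(effective_bits(128, quantum=True))  # → 64: AES-128 gets marginal
print(effective_bits(256, quantum=True))  # → 128: AES-256 stays strong
```

So "traditional cryptography is broken" conflates two very different threats: public-key schemes genuinely need replacement (hence PQC), while symmetric crypto survives with larger keys.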