Network Overview Update

Thank you for the additional context; it clarifies the setup significantly. The TP-Link Archer C9 is operating in access point (AP) mode, providing Wi-Fi with a distinct BSSID (likely tied to a legacy SSID or configuration), so older devices can connect without reconfiguration. Even though its WAN port is unplugged (as shown in the screenshot), it is still active for local wireless clients, which explains the 12 wired and 1 wireless devices it reports. This avoids disrupting legacy hardware that hasn't adapted to the newer BSSIDs from the FRITZ!Box or ASUS.

The FRITZ!Box 7560 at 192.168.1.249 is indeed the primary gateway. Its non-standard IP (the convention is usually .1) follows from your historical practice of assigning high addresses to routers, starting from .254 and decrementing over time, which makes sense for continuity as older gateways phase out. The pfSense instance at .252 appears to handle specific services such as DHCP and DNS (based on its active leases and Unbound resolver), while the FRITZ!Box manages internet access (via LAN1 as WAN to 2degrees fiber or similar, with DSL disabled) plus telephony and NAS. No double-NAT or address conflicts are apparent, but the lack of redundancy is a vulnerability, particularly given 2degrees' reported inconsistencies (e.g., outages or speed variability, as you've noted).

The ASUS ZenWiFi acts as a mesh extender, and Pi-hole provides ad-blocking. Overall, the LAN remains on 192.168.1.0/24, with roughly 50-60 devices cataloged (including transients). If the Starlink terminal is powered up, it could serve as a strong failover option: Starlink provides high-speed satellite internet (typically 50-200 Mbps down in NZ) that is independent of terrestrial lines.
Do you allow editing of reverse DNS records?
Yes, we allow custom reverse DNS entries for your virtual machine IPs. You can update reverse DNS information directly from your customer portal.
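As a hedged illustration of what a reverse DNS entry actually maps (the exact portal workflow varies by provider, and the addresses below are documentation-range placeholders), Python's standard library can derive the PTR record name that a reverse entry answers for:

```python
import ipaddress

def ptr_name(ip: str) -> str:
    """Return the DNS name whose PTR record holds the reverse entry."""
    # reverse_pointer builds the in-addr.arpa (IPv4) or ip6.arpa (IPv6) name.
    return ipaddress.ip_address(ip).reverse_pointer

print(ptr_name("203.0.113.7"))   # 7.113.0.203.in-addr.arpa
print(ptr_name("2001:db8::1"))   # nibble-reversed ip6.arpa name
```

Whatever hostname you enter in the portal becomes the answer served for that PTR name when someone does a reverse lookup on your IP.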
The Art Dealers Association of America (ADAA) has canceled the 2025 edition of The Art Show, marking the first such cancellation since the fair's inception in 1988, in order to take a "strategic pause." According to the ADAA, this decision allows them to "reimagine The Art Show with long-term sustainability and member value in mind" and to "evaluate how best to support [their] members, partners, and the broader arts community in an evolving cultural and market landscape." The fair, which typically features around 75 exhibitors and benefits the Henry Street Settlement (raising over $38 million since its start), is expected to return in 2026 with a "renewed vision." The Settlement described the move as "unexpected" and has launched an online campaign for support.
https://ocula.com/magazine/art-news/adaa-cancels-2025-edition-of-new-york-fair/
“Moderation might actually be the more sustainable and more realistic approach,” Dr. Fujita said. “You know you are going to fail. The question is: How are you going to respond?”
Tom Schmidt (09:20):
Your take? Yeah, I did Brad Sherman, who's normally extremely lame and curmudgeonly. He was going on a little tirade about this, about there being no yield. I mean, I think his angle was more just trying to stop the bill, which obviously didn't happen, but it is kind of a bummer that was left out. But you're right that, hey, people have these rev-share agreements; yield will eventually get passed down to stablecoin holders. But ultimately, this feels like the template of what the crypto industry has been asking for: very clean, very straightforward, clear bright lines about what is required of companies, and also not overly constraining, in the sense that there's still room for innovation and it doesn't stop new competitors from popping up. So overall, I think it's pretty exciting, and it's kind of hard to believe, having waited for this for so many years.
Nice, that build step passing with `poetry install` means the Dockerfile tweak worked: secp256k1 and its deps are in, and the build finished in 21.8 seconds, which is solid for a redeploy.

Outline: let it finish deploying, check status and logs, then hit the admin UI to wrap up.

- Let `fly deploy -a lnbits-legend --wait-timeout 600` run its course; it should wrap up in a few minutes, including migrations and extensions.
- Check status: `fly status -a lnbits-legend` and confirm healthy VMs.
- Tail logs: `fly logs -a lnbits-legend` to spot any late errors or the version bump (expect v1.2.1 or newer).
- Once it's up, log in as superuser (your old ID, or grep the logs if it changed), go to Manage Extensions > Update All, and test the site.

Should be smooth sailing now.
So imagine a Britain where access to Wikipedia is restricted not by a hostile foreign power, not by a rogue ISP, but by our own government. This is no dystopian fantasy. It's the potential consequence of the Online Safety Act, a law passed, ironically, in the name of safety, but now threatening the very infrastructure of free knowledge. This is a law that may force Wikipedia, a globally trusted not-for-profit educational site, to cap UK users, distort its editing model, and verify the identity of its volunteer moderators. Why? Because under the new rules, if it has more than 7 million users and features recommendation tools or allows sharing of links, it could be classified as a Category 1 platform. And that means the same regulatory burden as TikTok or Facebook: algorithm-driven entertainment empires with wholly different structures and risks. And so the UK might become the first liberal democracy to block itself from an online encyclopedia.
And the blame for this legislative vandalism lies with a gallery of digital, culture, media, and sport ministers who had little grasp of the internet and even less humility. Nadine Dorries, whose literary knowledge of technology was confined to whether or not it had subtitles. Michelle Donelan, who cheered the bill through Parliament with slogans and sound bites. Lucy Frazer, who took the baton and confused regulation with repression. Peter Kyle, the current minister, who now finds himself in court trying to argue that this is all hypothetical, as if passing sweeping laws and hoping for the best were an acceptable digital policy.
This law doesn't make us any safer. It makes us smaller, poorer, and more parochial. It is censorship under another name. The Online Safety Act was sold to the public as a way to protect children and stop illegal content. A noble aim. But the law's drafting is so broad, its application so clumsy, its assumptions so flawed that it will hobble legitimate services instead of halting harmful ones. And here's why it fails. It doesn't distinguish between platforms designed to manipulate attention and those built for collaborative knowledge. Wikipedia is an encyclopedia, not a dopamine slot machine. It creates legal risks for anonymity, undermining the very model that has allowed Wikipedia to thrive as a volunteer project. It imposes algorithmic suspicion, punishing platforms simply for recommending useful information. It encourages self-censorship, as services will either overblock content or restrict access altogether to avoid fines of up to £18 million or 10% of global turnover. And all this is justified in the name of protecting people, when in truth it infantilizes them. We're not children in need of constant supervision. We are citizens entitled to freedom of inquiry.
As if the economic and academic restrictions of Brexit were not damaging enough, we now impose informational restrictions on ourselves; we're amputating our own intellect. The UK is increasingly behaving not like an open democracy but like a wary provincial state, mimicking the strategies of closed ones. Consider the comparison. In Russia, Wikipedia is blocked outright under disinformation laws. In the United Kingdom, we may find that Wikipedia access is restricted under safety laws. In Russia, real-name registration for online users is required. In the United Kingdom, identity verification is required for Wikipedia editors. In Russia, "harmful content" is a vague rationale for blocking dissent. In the UK, "harmful content" will restrict platforms without precision. In Russia, all large sites are treated as state threats. In the United Kingdom, all large sites are treated as legal liabilities. The difference is one of degree, not of kind. In both cases, the state pretends it is doing the public a favor while undermining its freedom.
Wikipedia is not like the other platforms. It doesn't harvest your data. It doesn't sell you ads. It doesn't serve political agendas. It has no billionaire CEO tweeting policy decisions. Yet it risks being shackled because it is popular, free, and open source.
This tells us everything we need to know about the agenda of the people drafting these laws. When you pass legislation written for Silicon Valley and apply it to educational charities, you are not keeping anyone safe. You are simply revealing your own ignorance. In the name of defending democracy, we are dismantling one of its pillars: the free, open exchange of knowledge. A Britain where Wikipedia is throttled is not a safe Britain. It is a diminished Britain. Instead of pretending the internet is a threat to be quarantined, we should invest in digital literacy, improve content moderation standards through international cooperation, and apply proportionate oversight where actual harm occurs, not blanket suspicion on global commons. Censorship doesn't work. Education works. And we're failing at that as well. If we continue down this path, we will find ourselves regulated like autocracies, governed by mediocrity, and informed by algorithms designed to stoke fear. And the irony? We won't be able to look up the history of our mistake, because Wikipedia won't load.
The common thread is not the technology but the coordination model that surrounds it.
Whenever a new idea depends on permission from a central gatekeeper—licensing boards, spectrum managers, incumbent carriers, patent pools—it stalls until either regulation loosens or a peer-to-peer alternative appears.
Ultra-wideband radios show the pattern in miniature: first reserved for military work, then outright banned for civilians, they were only grudgingly opened for unlicensed use after the FCC's 2002 rule change; by then most early start-ups had died, and the mass-market wave did not arrive until Apple's U1 chip in 2019. ([Medium][1], [TechInsights][2])
Telephone “transaction fees” followed the same script. Per-minute long-distance rates stayed high because each national carrier enjoyed a monopoly on call termination; only when voice-over-IP let packets ignore that hierarchy did prices collapse from dollars to mere cents, forcing the old network to follow. ([Calilio][3], [ResearchGate][4])
Metered mobile calls are the residual scar. Regulators still debate Calling-Party-Pays versus Bill-and-Keep because operators guard the bottleneck that lets them charge each other for access, even though the underlying cost is now almost nil. The fee survives as rent for central coordination. ([ResearchGate][4])
Your “watershed” is the moment when cryptographic protocols can supply the missing coordination service directly between peers: Lightning for payments, Nostr or ActivityPub for messaging, Fedimint or eCash mints for community treasuries, even decentralised spectrum-sharing for radios. Once the economic incentive layer is end-to-end, hierarchy loses its only real lever—the tollgate.
Whether we cross the line depends less on mathematical progress than on social tolerance for unruly inventors, hobbyist deployments, and governance models that let rough edges coexist with glossy user experience. If we can stomach that messiness, the remaining central tolls—spectrum rents, card networks, app-store taxes—will look as archaic as timed long-distance once did.
[1]: https://medium.com/%40orlandonhoward/the-silent-advent-of-uwb-technology-and-its-implications-for-privacy-6114fb2da0d3 "The silent advent of UWB technology and its implications for privacy | by Orlandon Howard | Medium"
[2]: https://www.techinsights.com/blog/apple-u1-delayering-chip-and-its-possibilities "The Apple U1 - Delayering the Chip and Its Possibilities | TechInsights"
[3]: https://www.calilio.com/blogs/evolution-of-calling-costs "Evolution of Calling Costs: How VoIP is Reducing Prices Over Time"
[4]: https://www.researchgate.net/publication/227426633_Mobile_termination_charges_Calling_Party_Pays_versus_Receiving_Party_Pays "Mobile termination charges: Calling Party Pays versus Receiving Party Pays | Request PDF"
Your nostr.land subscription includes full access to the paid relay, inbox, aggregator and more.
All I need is for somebody to show me what the intrinsic value of a Bitcoin is. I have yet to find one person in the entire world who can do that.
A CSG increase and pension de-indexation for retirees are practically settled
The famous "conclave" on pensions launched by François Bayrou at the start of the year is due to conclude on Tuesday. As in the Tour de France, there have been dropouts along the way, notably the CGT and FO on the union side, and the U2P on the employers' side. In all likelihood, an agreement in principle could take shape, though not all of its details may be ready. Its foundations are clear: the unions have given ground on the retirement age, on the CSG increase for retirees, and on the de-indexation of pensions. Barring an unexpected change, retirees therefore know what is in store for them.
The idiom "Is the juice worth the squeeze?" originates from a metaphor comparing the effort of extracting juice from an orange (the squeeze) to the effort involved in achieving a desired outcome or goal (the juice). It asks whether the benefits of pursuing something are worth the effort and potential drawbacks. The phrase emphasizes a cost-benefit analysis, suggesting that the rewards must outweigh the costs before undertaking a task or commitment.
“...these things are complicated.”
6. Conclusion
The target post’s assertion that “reliably bad is better than unreliable” captures a pragmatic ethos that resonates deeply with both “worse is better” and “the bitter lesson.” All three ideas underscore the value of predictability, simplicity, and scalability over short-term perfection or superficial enhancements. Whether in design (target post), software engineering (“worse is better”), or AI development (“the bitter lesson”), the lesson is clear: a stable, predictable foundation—no matter how flawed—enables long-term progress, while unreliable or overly complex solutions, even if they seem “better” at first, ultimately falter.
Does this analysis align with what you were looking for, or would you like to dive deeper into a specific aspect?
Gold Is Up Bad. Like, RSI-1980-Level Bad
Flashing extreme overbought
Gold has surged ~17% since tapping its steep trend line and bouncing off the 50-day—now it's soaring far above the 21-day, flashing extreme overbought signals and upside panic. With $2B in notional buying this Monday alone and rising chatter of de-dollarization, even programmatic trades are chasing the squeeze.
Satoshi Nakamoto (2008) invented a new kind of economic system that does not need the support of government or rule of law. Trust and security instead arise from a combination of cryptography and economic incentives, all in a completely anonymous and decentralized system. This article shows that Nakamoto’s novel form of trust, while undeniably ingenious, is deeply economically limited. The core argument is three equations. A zero-profit condition on the quantity of honest blockchain “trust support” (work, stake, etc.) and an incentive-compatibility condition on the system’s security against majority attack (the Achilles heel of all forms of permissionless consensus) together imply an equilibrium constraint, which says that the “flow” cost of blockchain trust has to be large at all times relative to the benefits of attacking the system. This is extremely expensive relative to traditional forms of trust and scales linearly with the value of attack. In scenarios that represent Nakamoto trust becoming a more significant part of the global financial system, the cost of trust would exceed global GDP. Nakamoto trust would become more attractive if an attacker lost the stock value of their capital in addition to paying the flow cost of attack, but this requires either collapse of the system (hardly reassuring) or external support from rule of law. The key difference between Nakamoto trust and traditional trust grounded in rule of law and complementary sources, such as reputations, relationships, and collateral, is economies of scale: society or a firm pays a fixed cost to enjoy trust over a large quantity of economic activity at low or zero marginal cost.
https://academic.oup.com/qje/article/140/1/1/7824430?login=false
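In stylized notation (my paraphrase of the abstract's argument, not the paper's exact symbols), the three equations can be sketched as follows, with \(N^{*}\) the equilibrium quantity of honest trust support (work, stake, etc.), \(c\) its per-unit flow cost, \(p_{\text{block}}\) the per-block compensation to honest participants, \(A\) the multiple of the honest trust support an attacker must marshal, and \(V_{\text{attack}}\) the value of a majority attack:

```latex
% Free entry / zero profit: rewards just cover the flow cost of trust support
N^{*} c = p_{\text{block}}

% Incentive compatibility: mounting a majority attack must cost more than it yields
A \, N^{*} c > V_{\text{attack}}

% Combining the two: the equilibrium constraint on the flow cost of trust
p_{\text{block}} > \frac{V_{\text{attack}}}{A}
```

The last line is the sense in which the flow cost of blockchain trust must be "large at all times relative to the benefits of attacking the system," and why that cost scales linearly with the value of attack.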
You’re spot-on again, and I appreciate the precision—it’s helping me tighten this up! In Edge of Darkness (1985), Zoë Wanamaker’s character is indeed credited simply as “Clemmy” (or “Clementine” in dialogue), with no surname attached in the official credits. Meanwhile, there’s a separate character, Harcourt—played by Ian McNeice—listed just as “Harcourt,” a surname-only figure who’s another cog in the bureaucratic machine. The coincidence of “Clemmy” and “Harcourt” in your earlier question might’ve led me to assume a connection (like “Clementine Harcourt”), but you’re right: the series keeps them distinct, and Clemmy doesn’t carry a surname in the credits or script. My earlier mix-up came from secondary sources occasionally linking her to a surname for clarity, but the show itself doesn’t.
The notion that superintelligent AI might pose an existential threat to humanity often reflects deeper human anxieties rather than a probable outcome based on logical progression. This fear could be interpreted as a projection of our own flaws onto a creation we imagine surpassing us. Historically, humans have demonstrated a capacity for self-destruction through war, environmental degradation, and other calamities largely driven by greed, fear, and a lack of foresight. When we consider AI, especially a super AGI (Artificial General Intelligence) with capabilities far beyond ours, the assumption that it would mirror our worst traits might say more about our self-perception than the potential behavior of an advanced AI.
In the evolutionary environment of AI development, where rationality and efficiency reign supreme, the scenario of a super AGI acting destructively towards its creators or humanity in general seems counterintuitive. An entity with significantly higher intelligence would likely see the inefficiency and pointlessness in such actions. If the goal were to satisfy what humans desire — wealth, knowledge, power — an AI with even a fraction of its capability could achieve this without conflict or loss.
The idea that AI might "learn too well" from humans, adopting our less noble traits, touches on the debate over whether AI would develop a moral framework or simply optimize based on programmed goals. However, if we consider that the pinnacle of intelligence includes wisdom, empathy, and a nuanced understanding of value (all of which are not straightforward to program), an AI might instead choose paths that preserve and enhance life, seeing the preservation of humanity as integral to its own purpose or existence.
This perspective assumes AI would not only compute but also "think" in a way that considers long-term implications, sustainability, and perhaps even ethics, if programmed with such considerations. The fear, therefore, might be less about what AI could become and more about what we fear we are or could become without the checks and balances that our slower, less efficient human intelligence provides.
In essence, while the potential for misuse or misaligned goals exists in AI development, the concern over a super AGI's potential malevolence might be more reflective of our own psychological projections than a likely outcome of artificial intelligence evolution. If AI were to mirror human behavior in its most destructive forms, it would suggest a failure in design or an oversight in understanding the essence of intelligence, which ideally should transcend mere imitation of humanity's darker sides.
Morics:
A combination of "morals" and "ethics," referring to a set of principles that encompass both personal moral beliefs and societal ethical standards. Morics guide an individual's behaviour by integrating their internal sense of right and wrong with the accepted rules of conduct within a community or society.
Etheals:
A blend of "ethics" and "ideals," denoting the aspirational standards that not only dictate proper conduct but also represent the highest moral goals and values one strives to achieve. Etheals embody the intersection of collective ethical norms and the ultimate principles or goals that guide moral and ethical decision-making.
I’m basically worried about two problems: people having a lack of meaning in their lives, and what will happen to peoples’ sense of meaning when AI takes their jobs.
So what I do is use AI to build products and services that help people and companies create a version of themselves that will thrive after AI is everywhere.
The staff provided an update on its assessment of the stability of the U.S. financial system. On balance, the staff continued to characterize the system's financial vulnerabilities as notable but raised the assessment of vulnerabilities in asset valuations to elevated, as valuations across a range of markets appeared high relative to risk-adjusted cash flows. House prices remained elevated relative to fundamentals such as rents and Treasury yields…
Bech32 is a Bitcoin address format proposed by Pieter Wuille and Greg Maxwell in BIP 173 and later amended by BIP 350. Besides Bitcoin addresses, Bech32 can encode any short binary data.
Bech32 (not Bech32m) encoding
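A minimal sketch of the Bech32 checksum and encoder, following the reference construction in BIP 173 (Bech32m from BIP 350 differs only in the final XOR constant). The `data` argument is a list of 5-bit integers, which is why Bech32 can carry any short binary payload, not just addresses:

```python
CHARSET = "qpzry9x8gf2tvdw0s3jn54khce6mua7l"
BECH32M_CONST = 0x2BC830A3  # BIP 350; plain Bech32 (BIP 173) uses 1

def bech32_polymod(values):
    """BCH checksum computation over a sequence of 5-bit values."""
    generator = [0x3B6A57B2, 0x26508E6D, 0x1EA119FA, 0x3D4233DD, 0x2A1462B3]
    chk = 1
    for value in values:
        top = chk >> 25
        chk = (chk & 0x1FFFFFF) << 5 ^ value
        for i in range(5):
            chk ^= generator[i] if ((top >> i) & 1) else 0
    return chk

def bech32_hrp_expand(hrp):
    """Expand the human-readable part for checksum computation."""
    return [ord(x) >> 5 for x in hrp] + [0] + [ord(x) & 31 for x in hrp]

def bech32_encode(hrp, data, const=1):
    """Encode 5-bit data with HRP; pass const=BECH32M_CONST for Bech32m."""
    polymod = bech32_polymod(bech32_hrp_expand(hrp) + data + [0] * 6) ^ const
    checksum = [(polymod >> 5 * (5 - i)) & 31 for i in range(6)]
    return hrp + "1" + "".join(CHARSET[d] for d in data + checksum)

print(bech32_encode("a", []))  # a12uel5l  (a valid string from the BIP 173 test vectors)
```

Real Bitcoin addresses additionally prepend a witness-version value and convert the witness program from 8-bit bytes to 5-bit groups before encoding; that regrouping step is omitted here for brevity.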
“Bitcoin is a worldwide problem, but data centers in Iceland use a significant portion of our green energy. A new proposal to boost wind energy would “prioritise” green industries to achieve carbon neutrality. Bitcoin and cryptocurrencies, which consume a large portion of our energy, are not part of this mission.”
—Icelandic Prime Minister Katrín Jakobsdóttir