Seeker Erebus
c3ff7322e6a126c49f6c886fb5984e77770e20ea52cc9526b59cec001a65ee7e
A Seeker of Truth amidst the chaos of the modern world.

Core 30 removes options for managing a node. Knots adds more options. Knots can be set up to allow as much "spam" as Core 30, but Core 30 cannot be set up to limit "spam" as much as Core 29. Otherwise, it's nearly the same code. This is all you really need to know about the Core vs Knots debate.

Unexpected discovery. Assuming total hyperbitcoinization does occur, even once the block subsidy hits zero, the break-even cost to maintain the current all-time-high network security, in sats/vbyte terms, is under 1 sat/vbyte. That assumes no improvements in electrical efficiency, either. So even if nearly all transactions are occurring off-chain, the few that do need to settle on-chain can keep the network secure rather easily for hundreds if not thousands of years.
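A back-of-envelope sketch of how that break-even figure can come out under 1 sat/vbyte. Every input here is a hypothetical assumption for illustration, not a number from the post: the security budget is proxied by roughly today's annual miner revenue, and the hyperbitcoinized BTC price is pure guesswork.

```python
# All inputs below are assumptions for illustration only.
SECURITY_BUDGET_USD_PER_YEAR = 20e9  # assumed: roughly today's annual miner revenue
BTC_PRICE_USD = 50e6                 # assumed hyperbitcoinized price per BTC
BLOCKS_PER_YEAR = 52_560             # 144 blocks/day * 365 days
VBYTES_PER_BLOCK = 1_000_000         # ~1M vbytes of typical block capacity

# Fees per block needed to fully replace the security budget once subsidy is zero.
fee_usd_per_block = SECURITY_BUDGET_USD_PER_YEAR / BLOCKS_PER_YEAR
fee_sats_per_block = fee_usd_per_block / BTC_PRICE_USD * 100_000_000
fee_rate = fee_sats_per_block / VBYTES_PER_BLOCK  # sats/vbyte

print(f"break-even fee rate: {fee_rate:.2f} sats/vbyte")
```

The key lever is the BTC price: the security budget is an electricity cost denominated in fiat terms, so the harder bitcoin's purchasing power rises, the fewer sats per vbyte it takes to cover it.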

nostr:nprofile1qyt8wumn8ghj7etyv4hzumn0wd68ytnvv9hxgtcpzemhxue69uhks6tnwshxummnw3ezumrpdejz7qpq2rv5lskctqxxs2c8rf2zlzc7xx3qpvzs3w4etgemauy9thegr43sugh36r There's an opportunity right now for significant Cashu adoption in a short burst. Visa and Mastercard have forced Steam to ban thousands of games, and change their platform rules, in order to stay in business.

There are tens of millions of people who are currently seeing a direct pain point around the flow of money, and are actively focused on it to the point of calling politicians by the thousands. Steam is absolutely enormous, and the globe is feeling the impact.

Because Cashu is provably anonymous spending, it solves this problem in totality. If we can anonymize Valve to the payment processors with Cashu, especially if we can save them on fees while we're at it, and get them the dollars they are currently after while doing so, this could be huge for adoption, and for exposure to the bitcoin ethos.

The Achilles heel.

Replying to Jonathan

https://github.com/google-gemini/gemini-cli

Google is releasing ANOTHER tool with a free tier. The amount of money they are burning by letting everyone use their models for absolutely free is insane.

If the service is free, you are the product.

Why am I not seeing any discussion online about SMS verification overload?

I've asked a fair number of people in person about this, and not one has received every single SMS verification code they ever had to deal with within the proper time window. Literally everyone I've asked has, at least once, been unable to log into a service because they didn't receive that text in time to use it. A few have lost funds because they couldn't cancel a service, or couldn't pay a bill, because they couldn't access the account. One has even been unable to change his phone provider for months because he can't alter the account; that guy gets fewer than 1 in 20 messages in time.

Plenty of these services also do not offer the option of 2FA app verification, which would prevent this problem. The modern net is breaking in a fundamental fashion for at least a niche of people, they can't do anything about it, and no one seems to be discussing it?

We often say "Fix the money, fix the world." I have a better phrase. "Fix the money, or die."

If the money is not fixed, the inevitable outcome is that every person who has to earn that 'money' consistently in order to feed themselves and keep a roof over their head will eventually have to work 25 hours a day, 8 days a week, just to earn enough to eat. Or in other words, everyone who works for a living will starve unless shit changes.

It is far easier to get people to see the bitcoin mission when you point out to workers in the economy that this is the reality that faces them. It's tangibly real to them now.

Recently had my first encounter with nostr-based tools entirely outside of nostr or bitcoin circles. It was with DEG Mods (https://degmods.com/) being highlighted as an alternative to Nexusmods for hosting, given Nexus having a well established history now of banning mods people want that don't play along with DEI ethics. The fact that DEG Mods is built on top of nostr wasn't even mentioned.

This is how nostr actually ends up taking market share. By being the unstoppable source of a valuable service suffering from censorship.

I'm reasonably sure that if anyone had been harmed by them, Info Wars would have been sued already. So they're probably at least safer than most OTC drugs.

If you're using Primal, quickest is going to be getting the Primal wallet activated. If you're not, or can't for some reason, quickest is Minibits. Get it, set it up, and in the client, edit your profile so that your "Bitcoin Lightning Address" is the same as in your Minibits client. It looks like an email address.

Minibits is using Cashu, which isn't as good as self-custody lightning, but it's still proving reliable enough.

What options have been developed for using micropayments to fund cloud hosted multiplayer games? #asknostr

The creator of 5D Diplomacy (https://github.com/Oliveriver/5d-diplomacy-with-multiverse-time-travel) mentioned in his breakdown on YouTube that the primary reason he didn't set up hosting for it is that he'd have gone bankrupt if enough people all decided to play his game. If he were in a position to charge per game, he might have done it, and anyone could.
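A toy sketch of what that pay-per-game gate could look like. Everything here is hypothetical: the in-memory token table stands in for a real Cashu mint or Lightning node, and `redeem_token` is an invented placeholder, not any real library's API. The point is only the shape of the flow: no valid micropayment, no session, so hosting costs can never outrun revenue.

```python
# Toy model: gate a hosted game session behind a small ecash-style payment.
# "issued_tokens" stands in for a real mint; a real server would verify and
# redeem a Cashu token or Lightning payment against actual infrastructure.

SESSION_PRICE_SATS = 50  # assumed price per game session

# Hypothetical tokens the "mint" has issued and not yet redeemed (token -> sats).
issued_tokens = {"token-abc": 50, "token-xyz": 10}

active_sessions = []

def redeem_token(token: str) -> int:
    """Redeem a token exactly once, returning its value in sats (0 if invalid)."""
    return issued_tokens.pop(token, 0)

def start_session(player: str, token: str) -> bool:
    """Spin up a game session only if the token covers the session price."""
    if redeem_token(token) >= SESSION_PRICE_SATS:
        active_sessions.append(player)
        return True
    return False

print(start_session("alice", "token-abc"))  # covers the price -> True
print(start_session("bob", "token-xyz"))    # too small -> False
print(start_session("eve", "token-abc"))    # already redeemed -> False
```

Because each token is redeemed exactly once and priced at the session cost, the server's compute spend is bounded by what players have already paid, which is exactly the bankruptcy risk the 5D Diplomacy creator was worried about.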

So, a solo director using current AI video, voice, and music tools could create a 90 minute film, about anything they want, at quality comparable to the best movies of all time, for between $3900 and $8500 spent on AI services, and between 3-6 months of compute time. That's including 100 generated options for each individual scene, voice training for an entire cast of AI characters, and music made specifically for each moment.

No additional tech, nothing still needing to be developed. It can be done right now, by anyone with a vision and the drive.

What a timeline.

Replying to Shawn

To ChatGPT: Summarize this in a paragraph, then draw out the key points of concern in terms of individual liberty and privacy.

https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ:L_202401689

____

Summary of the AI Act (Regulation (EU) 2024/1689)

The Artificial Intelligence Act (Regulation (EU) 2024/1689) establishes harmonized rules for the development, market placement, and use of AI systems within the European Union. The regulation aims to ensure that AI adheres to Union values, fundamental rights, and democracy while fostering innovation. It introduces a risk-based approach, classifying AI into categories such as prohibited AI, high-risk AI, and general-purpose AI, with varying degrees of regulation and oversight. The law bans certain AI practices deemed unacceptable, including subliminal manipulation, biometric categorization, social scoring, and mass surveillance. High-risk AI systems—such as those used in law enforcement, hiring, credit scoring, and critical infrastructure—must meet strict transparency, safety, and data governance requirements. The regulation also introduces transparency obligations for AI systems interacting with humans, generating deepfakes, or influencing significant decisions. While the regulation ensures AI market harmonization, it raises concerns regarding its impact on innovation, individual privacy, and potential state overreach in surveillance and law enforcement applications.

Key Concerns for Individual Liberty and Privacy

1. Mass Surveillance & Biometric Identification

• The Act restricts but does not outright prohibit real-time remote biometric identification in public spaces for law enforcement. While requiring judicial or independent oversight, exceptions for “urgent cases” create loopholes that could enable mass surveillance.

• The collection of biometric data, such as face recognition and gait analysis, even for seemingly benign purposes, increases the risk of pervasive tracking and profiling.

2. AI-Powered Social Scoring

• Public and private entities are banned from implementing AI-based social scoring that evaluates individuals based on behavioral data across different contexts. However, the Act still allows for some risk assessments, such as fraud detection or credit scoring, that could lead to discrimination.

3. Subliminal & Manipulative AI

• The regulation prohibits AI systems that manipulate individuals without their knowledge, particularly those that exploit vulnerabilities due to age, disabilities, or economic conditions. However, AI-driven persuasive technologies, especially in advertising and recommendation systems, remain partially regulated, leaving room for potential exploitation.

4. Emotion Recognition & Workplace Surveillance

• AI systems designed to infer emotions are banned in education and workplace settings, but their use in customer service, law enforcement, and marketing remains unaddressed. This raises concerns over emotional profiling, predictive behavior analytics, and AI-driven bias in human interactions.

5. Law Enforcement & Predictive Policing

• The Act prohibits AI systems that predict criminal behavior based solely on profiling (e.g., race, socio-economic background) but does not rule out data-driven predictive policing based on statistical risk assessments. This raises concerns over AI-driven discrimination and presumption of guilt.

6. Facial Recognition Database Creation

• The Act bans the untargeted scraping of facial images from the internet or CCTV footage for database creation. However, it does not fully prevent law enforcement from using third-party or privately compiled databases, leaving open the possibility of backdoor access to personal data.

7. Cross-Border Data Use & Third-Country AI

• AI systems developed outside the EU but used within are covered under the regulation, preventing regulatory arbitrage. However, enforcement mechanisms against foreign AI providers remain unclear, raising concerns about how non-EU companies handling EU citizen data will be held accountable.

8. Regulatory Impact on Innovation & AI Development

• The strict compliance burdens on high-risk AI applications may stifle innovation, particularly for startups and smaller AI firms. The additional compliance requirements could lead to market consolidation, where only large corporations can afford to comply with regulations.

Conclusion:

While the AI Act aims to safeguard fundamental rights, privacy, and individual freedoms, its exceptions for law enforcement, lack of absolute bans on biometric surveillance, and vague criteria for high-risk AI systems raise concerns. The regulation attempts to balance innovation and control, but its real-world implementation will determine whether it truly protects civil liberties or creates new avenues for state and corporate overreach.

AI naturally democratizes, since it's just math anyone who wishes to study can learn about, the costs keep going down, and the best move is clearly to take advantage of open source discoveries.

They will try to control it, but will find it as difficult as stopping bitcoin.

Canadians committed so many war crimes during WWII that there remains a running joke in some circles that they see the Geneva Convention rules as a checklist. So yeah, they go real hardcore.

Ah TikTok. Right result, wrong reasons.

While it's unlikely it was intended as such, China now uses TikTok as a psychological opium. Proof is in how it works within their own borders. No dancing, no thirst traps, no random junk with no merit. Chinese users see patriotic videos and educational content, and are limited in how long they can be on it each day. And the addictive power of TikTok overseas is so strong that people are already in withdrawal after less than an hour.

The US government did it to push people toward Meta, for money and for censorship. They are all still scum. That doesn't mean the result is a bad thing.