Blake
b2dd40097e4d04b1a56fb3b65fc1d1aaf2929ad30fd842c74d68b9908744495b
#Bitcoin #Nostr #Freedom wss://relay.nostrgraph.net

If I look at my LinkedIn today, I’d wager 20% of it is bots. I can’t say for Twitter since I stopped using it - but likely similar.

I think a single weak link in verification is the best case, and dozens is closer to reality. My Nostr network reach is 20,000 (2nd degree) and my 2nd degree following is 7,000. Each of those has its own 10,000s of possible weak links.
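To make that concrete, here is a toy sketch of how 2nd degree reach is counted - the follow lists and pubkey names are hypothetical placeholders, not my relay data, and every account the count touches is a potential weak link:

```python
# Toy sketch: count every distinct account within two hops of the follow graph.
# The follow lists below are made-up placeholders for illustration only.

def second_degree_reach(follows: dict[str, set[str]], me: str) -> int:
    """Count distinct accounts within two follow-hops of `me`."""
    first_degree = follows.get(me, set())
    reach = set(first_degree)
    for contact in first_degree:
        reach |= follows.get(contact, set())
    reach.discard(me)
    return len(reach)

# Hypothetical follow lists keyed by pubkey.
follows = {
    "me":    {"alice", "bob"},
    "alice": {"bob", "carol", "dave"},
    "bob":   {"eve"},
}
print(second_degree_reach(follows, "me"))  # 5 accounts, 5 potential weak links
```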

For small networks like Scuba Divers Egypt, sure, it can work. For larger social networks it seems unlikely to work - especially when the attack can play out over months or even years - computers have infinite patience.

The root issue is still that I can be impacted so significantly by missteps in my trusted network. And climbing a trust network is actually quite trivial when there are incentives to do so.

Think of a Google Maps business that hands out a small gift or a special discount in exchange for 5-star reviews or photos of your visit. It can all be gamed - and trust ranks breached by manipulation. NPS is also easily manipulated by businesses when KPIs and bonuses are tied to it.

I’m not suggesting KYC or human verification.

I’m suggesting that we need to add a cost for machines, to increase the expense of computer-generated content. Humans would have to incur that same cost too - however at scale, humans post far less, so their total expense is an order of magnitude lower.

Or another solution. I don’t have a clear picture of exactly what will work best.

The cost of machine compute - and thus of generated content - will keep trending down. Human brains, however, are effectively static and have physical limits that aren’t increasing.

I can imagine that even with a 10% hit rate, perhaps using public records like birth certificates or similar, there will still be economic benefit for the bots/targeted ML. Whatever a human can do in the digital world, a simulation can do better, faster, and cheaper.

I’ve read and appreciate brugeman’s approach and research. Spam almost always has a call to action - however when you can instead hyper-target, again it’s death by 1,000 cuts. Most or all of it doesn’t have a link or product to sell you. It will instead be selling a way of thinking, a reality that benefits whoever breaks even or makes money from it.

I have a Nostr relay too and my spam ML engine hasn’t been updated in training for 5 months now. It has 99.999% accuracy or thereabouts. Spam is different from targeted content.
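For a sense of why spam is the easy problem, here is a minimal sketch of the kind of text classifier I mean - this is not my actual engine, the tiny training set is made up, and real accuracy claims need a large held-out corpus:

```python
# Minimal sketch of a relay-side spam classifier: call-to-action spam has
# obvious lexical tells, which is exactly what hyper-targeted content lacks.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

notes = [
    "Claim your free sats now, limited time airdrop!",    # spam
    "Follow this link to double your bitcoin instantly",  # spam
    "Great dive at the Blue Hole today, visibility 30m",  # not spam
    "New relay policy published, feedback welcome",       # not spam
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(notes, labels)

print(model.predict(["Free airdrop, claim your sats today"]))  # likely [1]
```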

Consider a Wikipedia page with two versions: in the section on deaths by suicide, one promotes euthanasia while the other condemns it and paints the doctors as mass murderers. Imagine Wikipedia shows you whichever version it wants you to side with. It’s a very different problem from spam detection.

Well said. I think we’re always being influenced or manipulated in some way. What’s new is just how much, and just how hard it will be to identify using human cognition (or even research).

Brains are limited in power and time - biology is slow. We’re crossing that biological limit this decade.

Some good thoughts.

Perhaps a similar approach could be staked Bitcoin that doesn’t move (why similar? It also adds a cost). However, there would then be a market selling the UTXOs of hodlers, and even renting a proof would be 10-100x cheaper than buying that Bitcoin.
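Roughly what I mean, as a sketch - the thresholds and function names are hypothetical, real verification needs a signed challenge plus a chain lookup, and as noted the whole thing can be rented out, which is the weakness:

```python
# Rough sketch of the "staked, unmoving Bitcoin" idea: accept an identity only
# if it proves control of a UTXO above some value that hasn't moved for some
# minimum number of blocks. Thresholds below are illustrative, not proposed.

MIN_STAKE_SATS = 1_000_000   # hypothetical threshold: 0.01 BTC
MIN_AGE_BLOCKS = 52_560      # hypothetical: roughly one year of blocks

def stake_ok(utxo_value_sats: int, utxo_age_blocks: int, signature_valid: bool) -> bool:
    """Accept only if the ownership proof checks out and the stake is big and old enough."""
    return (
        signature_valid
        and utxo_value_sats >= MIN_STAKE_SATS
        and utxo_age_blocks >= MIN_AGE_BLOCKS
    )

print(stake_ok(2_000_000, 60_000, signature_valid=True))   # True
print(stake_ok(500_000, 60_000, signature_valid=True))     # False: stake too small
```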

The issue with web of trust is that your weakest link is always the easiest route in, and computers don’t care if it takes months to progress. The next issue is that even small, closed, highly verified networks don’t cater for the wide-audience social media we largely use today. What if it was your grandma who trusted the bad actor? Bye grandma?

The next issue: just as with Worldcoin using eyeballs to ‘verify humans’, people will sell their human verification - or the identities of the dead get to live again.

Another critical attack starts the same as above; however, once they make up a significant portion of the content you see daily, they bait and switch.

This attack is malicious too: all those environmental people you follow and listen to have flipped, and the profiles are now either harassing you directly or pitching you content to swing your opinion, like “nuclear is bad and coal can be clean”.

It doesn’t have to be a hard cut over either. It can be over time and more sneaky.

Anyway… online and social content is about to become 1,000,000+ to 1 machines to humans - and since the text itself doesn’t benefit the machines, the content they spit out will exist solely to control you for ulterior motives - run by other humans.

The information you now ingest can be targeted and malicious - yet hidden and subliminal. You have let your guard down and you are now suckling the teat of deception and malice. A botnet 2.0, targeting individuals instead of network congestion.

You’ve been infiltrated. Your web of trust network has been corrupted. You are both a victim and pawn.

I’ve not just described online advertising today - Reddit and Twitter - but also the hyper-targeted content that’s coming. We currently have no clear defence - technical, social, biological or legal.

Governments seem to think laws against ‘misinformation’ prevent this. They do not. They will fail - if only because definitions of misinformation vary by content and individual, and government moderation or censorship is doomed to be rejected.

How do I know this is possible? Because with patience (and a lack of morals), I could build this. If I can, someone else can.

What if all your posts were ingested by an ML engine that was asked to create 1,000 profiles, each with a history of posts targeting topics you or your existing contacts care about? They can post daily and rehash other content.

Then it was asked to connect with as many of your contacts as possible, using topics they are passionate about. Perhaps work, environment, politics, hobbies or activities. Perhaps recent news topics and events you care about - liked, reposted, reported, shared, wrote.

Now your network has been infiltrated with fake profiles that all appear to be like-minded individuals, each leading their own life and sharing information that informs you. They could make up 10-30% of your daily content.

Perhaps GPT-4 is largely marketing and hype. It’s nice and all - yet oversold. More oversold is the ‘speed’ or rate of innovation.

Again, AI’s (I still call it ML) greatest threat to people in the near term is hyper-targeting and malicious actors who can now mass-produce human-enough content cheaply - content that for all intents and purposes passes enough of a Turing test to appear human generated.

The effort our brains need to critically dissect content and identify it as ML/bot created has crossed a significant barrier - for anything we read now, it takes, say, twice as long to evaluate its origin/intent/source as to read it. Previously it was a fraction - easy to identify compared to the content itself.

Death by 1,000 cuts is the new reality. Not entirely new, as mass media already does this; however they largely failed at digital (poor revenue). Now it’s possible to make money other ways through similar manipulation digitally - again, with hyper-targeting.

The only way I know of adding a cost to targeted content is (burnt) proof of work. Seems wasteful - yet I don’t know another way to combat the catastrophic drop in the cost of generating ML content. Humans would pay this cost too - yet humans don’t mass-produce, so ideally it mostly penalises mass-generated content.
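As a minimal sketch of what that cost looks like - in the same spirit as hashcash or Nostr’s NIP-13, though this simplified version just hashes content plus a nonce and the difficulty value is illustrative:

```python
# Grind a nonce until sha256(content:nonce) has `difficulty` leading zero bits.
import hashlib

def leading_zero_bits(digest: bytes) -> int:
    """Count leading zero bits of a hash digest."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
        else:
            bits += 8 - byte.bit_length()
            break
    return bits

def mine(content: str, difficulty: int = 16) -> int:
    """Return a nonce whose hash with the content meets the difficulty target."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{content}:{nonce}".encode()).digest()
        if leading_zero_bits(digest) >= difficulty:
            return nonce
        nonce += 1

# ~2^16 (≈65k) hashes on average: negligible for one human post, ruinous for a
# bot farm posting daily from a million fake profiles. Raising the difficulty
# raises the cost exponentially.
print(mine("hello nostr", difficulty=16))
```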

https://archive.fo/FlBgV

Centralisation failure. Both government and communications. nostr:note1tcqd0qufc8mryfwh9uwac56vmendttq689ts2ygrakysaw00kwxsffse8w

And if you want just a single example, look at how useless central banks are at predicting and protecting (stabilising) against recessions/depressions - their sole stated goal.

Most of these systems of democracy exist to control what they obviously do not and can not. Why?

As Yuval Noah Harari puts it simply in Sapiens:

There are two main classifications of chaos.

First Order Chaos doesn’t respond to prediction. The example he gives is the weather. If you predict the weather to some level of accuracy, that prediction will hold, because the weather doesn’t adjust based on the prediction itself.

Second Order Chaos is infinitely less predictable because it does respond to prediction. Examples include things like stocks and politics.

Why so negative about democracy?

It’s time to be honest that it isn’t the end-game state, and that it simply doesn’t function as it’s sold. It’s a system, and over time all systems have their weaknesses exploited. Democracy fails alongside capitalism at country or ‘union’ scale. Democracy won by forceful colonisation - and it is less directly controlling of its members than the alternatives.

I have a hunch the right direction is decentralising large (and small) parts again. It’s a coordination and communication problem - not too dissimilar to decentralised computer science problems, game theory and peer coordination.

I don’t have the solution - however, more decentralisation, fewer (simpler) laws, more accountability, more transparency, better leaders, less corruption, public service that is actually public service - not public beatdown or public theft. A lot would be different. It needs to self-heal or better prevent democracy’s failures at scale.

The illusion of democratic government is the illusion that they can control or impact what they can’t. For the most part, they are just along for a cyclical ride they have little real impact on. Voting is irrelevant in a self-correcting system.

They “tackle” symptoms and are unable to identify or understand root causes (or they know and it’s too costly, or longer term investment for their needs - solely to retain power), while blaming the last government, global events… basically anything.

Laws, regulations, and spending money on (financing) broken initiatives aren’t effective. The cost-to-ROI isn’t understood (i.e. over-investment), the money is spent inefficiently (corruption and incompetence) and in the wrong places (symptoms, not root causes), and compounding side effects are never understood and are often counterproductive.

Nostr clients should likely support an https/wss-only mode - largely for media requests, but app-wide.

Some countries’ spying and other MITM attacks can be avoided.
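Something like this, as a sketch - the policy is the suggestion, the helper name is made up, and a real client would apply it to relay connections and media fetches alike:

```python
# Sketch of an "https/wss only" mode: reject any relay or media URL that
# isn't carried over TLS before the client ever connects or fetches.
from urllib.parse import urlparse

ALLOWED_SCHEMES = {"https", "wss"}

def tls_only(url: str) -> bool:
    """Return True only for encrypted transports (https/wss)."""
    return urlparse(url).scheme.lower() in ALLOWED_SCHEMES

print(tls_only("wss://relay.nostrgraph.net"))    # True
print(tls_only("ws://relay.example.com"))        # False: plaintext websocket
print(tls_only("http://example.com/image.jpg"))  # False: plaintext media fetch
```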

🤣 nostr:note1lm7p2wszaahfnaf32dld0t5uvpjmlcmv2wru42pdeldcrks5r3jshvk8jh

When “enhanced ad privacy” actually means “making more money off you, by default opting you into ad targeting metrics (including sharing your browsing history with sites) built directly into the browser”. 😳

In case anyone is misled when the IMF announces new funding for a country (like Ukraine) and makes it sound like a philanthropic gift: it’s actually debt with a repayment plan.

The IMF is no different from a loan shark collecting a royalty on all future financial recovery and growth - and it takes decades to repay. Plus they seek extra control… debt with hidden “we are the captain now” terms.

Plenty of books explain the IMF trap and global misdeeds if you’d like to learn more. https://en.m.wikipedia.org/wiki/Confessions_of_an_Economic_Hit_Man

Note: To be clear, I fully support assistance to rebuild countries after and during war. I don’t support the opportunistic parasitic ‘world funds’ that market themselves as saviours - yet are not close.

https://www.imf.org/external/np/fin/tad/extforth.aspx?memberkey1=993&category=forth&year=2011&trxtype=repchg&overforth=f&schedule=exp&extend=y