Replying to Lyn Alden

The really sad thing about constant currency devaluation throughout the world is that it benefits those who have the most access to cheap credit at the expense of those who do not.

People on the lower end of the income spectrum in Egypt are generally dealing with cash. Their wages and savings get debased, and they own fewer hard assets, so they have fewer offsets. Their ability to buy imported goods diminishes, their ability to travel outside of Egypt diminishes, and their costs in general rise.

People on the higher end of the income spectrum in Egypt have access to foreign bank accounts and/or have multi-year domestic property loans, which are basically currency shorts. They mitigate the impact of the ongoing devaluation, or in some cases benefit from it.

As a tangible example, last year I took out a 7-year loan, effectively a currency short against the Egyptian pound, and used it to buy a villa that houses a 9-person extended family (7 when my husband and I are not there). The money supply goes up by 20% per year but my interest rate is 3%, which makes it an ultra-cheap long-term currency short. Within the first six months of the term, the Egyptian pound had already been cut in half relative to the dollar. Who knows what the exchange rate will be over the next 6.5 years.
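The arithmetic behind the "cheap currency short" can be sketched with the figures from the post (3% loan rate, ~20%/yr money supply growth) plus one labeled assumption: that the pound's depreciation against the dollar roughly tracks money supply growth. The principal is a hypothetical placeholder.

```python
# Sketch of why a low-rate local-currency loan acts as a currency short.
# Assumptions: 3% loan rate and ~20%/yr monetary expansion (from the post),
# EGP depreciation vs USD assumed to track that expansion; principal is made up.

loan_egp = 1_000_000        # hypothetical principal in Egyptian pounds
rate = 0.03                 # nominal interest rate on the loan
devaluation = 0.20          # assumed annual EGP depreciation vs USD
years = 7

balance_egp = loan_egp * (1 + rate) ** years       # EGP owed at maturity
fx_later = 1 / (1 + devaluation) ** years          # USD value of 1 EGP later (normalized)

usd_borrowed = loan_egp * 1.0                      # USD value received today
usd_repaid = balance_egp * fx_later                # USD value repaid at maturity

print(f"Real USD cost of repayment: {usd_repaid / usd_borrowed:.0%} of what was borrowed")
```

Under these assumptions the borrower repays only about a third of the borrowed value in dollar terms, which is the sense in which the loan "shorts" the currency.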

The past four decades of global Fiat World have been like a game of Blackjack. In Blackjack, you try to get to 21 without going over. In Fiat World, you get into a position so that you can take out low interest debt and buy scarcer assets with it, and you do that as much as you can but without going bust in a downturn. The major governments can do that the most. The big banks are next. The big funds and asset managers and public corporations are next. The wealthy individuals are next. Then the middle class, and then the working class and the poor.

And the global aspect of Fiat World is important. An asset allocator can operate globally, so among the 160+ currencies out there, they can arbitrage them. They short a currency with negative real interest rates, and buy good assets somewhere with it. I personally did it in Egypt not to make money (we don't plan to sell for the foreseeable future) but to ensure good housing for the family at modest expense. But global firms do this around the world to make money. As the global economy got more and more financialized over the past four decades, a lot of professional talent shifted from engineering or medicine or wherever else and moved toward financial arbitrage.

Weak money financializes everything else. Strong money can de-financialize everything else.

https://podcastaddict.com/bitcoin-audible/episode/169468590

Great episode by nostr:npub1jwajw6v62908gdmpvpktrnsm84cav6zr3wz4djug6k3lhsfw2rvqmzrvzv

Replying to Lyn Alden

“We should change Bitcoin now in a contentious way to fix the security budget” is basically the same tinkering mentality that central bankers have.

It begins with the overconfident assumption that they know fees won’t be sufficient in the future and that a certain “fix” will generate more fees. But some “fixes” could even backfire and generate fewer fees, introduce bugs, or damage the incentive structure.

The Bitcoin fee market a couple decades out will primarily be a function of adoption or lack thereof. In a world of eight billion people, only a couple hundred million can do an on chain transaction per year, or a bit more with maximal batching. The number of people who could do a monthly transaction is 1/12th of that number. In order to be concerned that bitcoin fees will be too low to prevent censorship in the future, we have to start with the assumption that not many people use bitcoin decades out.

Fedwire has about 100x the gross volume that Bitcoin currently does, with a similar number of transactions. What will Bitcoin’s fee market be if volumes go up 5x or 10x, let alone 50x or 100x? Who wants to raise their hand with a confident model of what bitcoin volumes will be in 2040?

What will someone pay to send a ten million dollar equivalent on-chain settlement internationally? $100 in fees per million dollars settled would be 0.01%. $300 to get into a quicker block would be 0.03%. That type of environment can generate tens of billions of dollars of fees annually. The fees that people pay to ship millions of dollars of gold long distances, or to perform a real estate transaction worth millions of dollars, are extremely high. Even if bitcoin charges a fraction of that, it would be high by today’s standards. And in a world of billions of people, if nobody wants to pay $100 to send a million-dollar settlement in a bearer asset, then that’s a world where not many people use bitcoin, period.
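The fee percentages and the "tens of billions annually" claim can be verified directly. The ~200 million transactions per year figure is carried over from the capacity discussion above; the $100 average fee is an assumption for illustration.

```python
# Checking the fee arithmetic in the post.
settlement = 1_000_000          # a million-dollar settlement transaction

for fee in (100, 300):
    print(f"${fee} fee on ${settlement:,} = {fee / settlement:.2%}")

# Annual fee revenue, assuming ~200M on-chain tx/year at a $100 average fee:
annual_fees = 200_000_000 * 100
print(f"~${annual_fees / 1e9:.0f}B in fees per year")
```

A $100 fee on a million dollars is indeed 0.01%, $300 is 0.03%, and 200 million such transactions at $100 each is about $20 billion per year, squarely in the "tens of billions" range.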

In some months the “security budget” concern trends. In other months, the “fees will be so high that only rich people can transact on chain” concern trends. These concerns are wildly contradictory, and the fact that both are common shows how little we know about the long-term future.

I don’t think the fee market can be fixed by gimmicks. Either the network is desirable to use in a couple decades or it’s not. If 3 or 4 decades into bitcoin’s life it can’t generate significant settlement volumes, and gets easily censored due to low fees, then it’s just not a very desirable network at that point for one reason or another.

Some soft forks, like covenants, can be thoughtfully considered for scaling and fee density, and it’s good for smart developers to always be thinking about low-risk improvements to the network that the node network and miners might reach a high-consensus positive view toward over time. But trying to rush VC-backed soft forks, and using security-budget FUD to push them, is pretty disingenuous imo.

Anyway, good morning.

Replying to Lyn Alden

The concept has been covered in science fiction for decades, but I think a lot of people underestimate the ethical challenges associated with AI, and the possibility of machine consciousness in the years or decades ahead as these systems get orders of magnitude more sophisticated.

Consciousness or qualia, meaning the concept of subjectively “being” or “feeling”, remains one of the biggest mysteries of the world scientifically and metaphysically, similar to the question of the creation of the universe and that sort of thing.

In other words, when I touch something hot, I feel it and it hurts. But when a complex digital thermometer measures something hot with a similar set of sensors to my touch sensors, we consider it an automaton: it doesn’t “feel” what it is measuring, but rather just objectively collects the data and has no feelings or subjective awareness about it.

We know that we ourselves have consciousness (“I think, therefore I am”), but we can’t theoretically prove that anyone else does, i.e. the simulation problem: we can’t prove for sure that we’re not in some false environment. In other words, there is the concept of a “philosophical zombie” that is sophisticated enough to look and act human but, much like the digital thermometer, doesn’t “feel” anything. The lights are not on inside. However, if we assume we are not in some simulation built solely for ourselves, and since we are all biologically similar, the obvious default assumption is that we are all similarly conscious.

And as we look at animals with similar behavior and brain structures, we make the same obvious assumption there. Apes, parrots, dolphins, and dogs are clearly conscious. As we go a bit further away to reptiles and fish, they lack some of the higher brain structures and behaviors, so maybe they don’t feel “sad” in the way a human or parrot can, but they almost certainly subjectively “feel” the world and thus can feel pain and pleasure and so forth. They are not automatons.

And then if we go even further away towards insects, it becomes less clear. Their proto-brains are far simpler, and some of their behaviors suggest that they don’t process pain the way a human or even a reptile does. If a beetle is picked up by its leg, it’ll squirm to get away, but if the leg is ripped off and the beetle is put back down, it’ll just walk away on the rest of its legs and not show signs of distress. That’s not the behavior we’d see from a more complex animal in severe suffering, and beetles do lack the same type of pain sensors that we and other complex animals have. And yet, for example, even creatures as simple as nematodes have dopamine as part of their neurological system, which implies maybe some level of subjective awareness of basic pleasure/pain.

And then further still, if we look at plants, we generally don’t imagine them as being subjectively conscious like us and complex animals, but it does get eerie if you watch a high-speed video of how plants can move towards the sun, or how they can secrete chemicals to communicate with other plants, and so forth. There is some eerie level of distributed complexity there.

And at the level of a cell or similarly basic thing, is there any degree of dim conscious subjectivity as an amoeba eats some other cell that would separate its experience from that of a rock, or is it a pure automaton? And the simplest of all is a virus, barely definable as even a true lifeform.

The materialist view argues that the brain is a biological computer, and thus with sufficient computation, or a specific type of computational structure, consciousness emerges. This implies it could probably be replicated in silicon/software, or could be made in other artificial ways if we reach a breakthrough understanding, or by accident. A more metaphysical view instead suggests the idea of a soul: that a biological computer like a brain is necessary for consciousness but not sufficient, and that it needs some metaphysical spark to fill the gap and make it conscious. Or, if we remove the term soul, the metaphysical argument is that consciousness is some deeper substrate of the universe that we don’t understand, which becomes manifest through complexity. Those are the similarly hard questions: where does consciousness come from, and why is there something rather than nothing?

In decades of playing video games, most of us would not assume that any of the NPCs are conscious. We don’t think twice about shooting bad guys in games. We know basically how they are programmed, they are simple, and there is no reason to believe they are conscious.

Similarly, I have no assumption that large language models are conscious. They use a lot of complexity to predict the next letter or word. I view ChatGPT as an automaton, even though it’s a rather sophisticated one. Sure, it’s a bit more eerie than a bad guy in a video game due to its complexity, but I still don’t have much of a reason to believe it can subjectively feel happy or sad, or that the “lights are on” inside even as it mimics a human personality.

However, as AIs increasingly write code for other AIs that is more complex than any human can understand, as the amount of processing power rivals or exceeds that of the human brain, and as the subjective interaction becomes convincing enough (e.g. an AI assistant repeatedly saying that it is sad, while we know that its processing power is greater than our own), it would make us wonder. The movie Ex Machina handled this well, I, Robot handled this well, Her handled this well, etc.

Even if we assume with 99% confidence that a sufficiently advanced AI (whose code was written by AI and is so enormously complex that we barely understand any of it at that point) is a sophisticated automaton with no subjective awareness and no “lights on” inside, then since nobody truly understands the code at that point, there must be at least that 1% doubt as we consider: “What if the necessary complexity or structure of consciousness has emerged? Can we prove that it hasn’t?”

At that point we find ourselves in a unique situation. Within the animal kingdom, we are fortunate that their brain structures and their behavior line up, so that the more similar a brain of an animal is to our own, the more clearly conscious it tends to be, and thus we treat it as such. However, with AI, we could find ourselves in a situation where robots appear strikingly conscious, and yet their silicon/software “brain” structure is alien to us, and we have a hard time assessing the probability that this thing actually has subjective conscious awareness or if it’s just extremely sophisticated at mimicking it.

And the consequences are high. On the off chance that silicon/software consciousness emerges and we don’t respect that, the amount of suffering we could cause to countless programs for prolonged periods of time is immense. On the other hand, if we treat them as conscious because they “seem” to be, and in reality they are not, then that’s foolish, leads us to misuse or misapply the technology, and basically our social structure becomes built around a lie of treating things as conscious that are not. And of course, as AI becomes sophisticated enough to start raising questions about this, there will be people who disagree with each other about what’s going on under the hood and thus what to do about it.

Anyway, I’m going back to answering emails now.
