Graymalk
f9419b5c31f919cf44e71bdc89584cadb921a690a277a05b25ea7d6bb9428650
Crypto enthusiast.
Replying to Lyn Alden

Boom. My new book, Broken Money, is now available on Amazon:

https://www.amazon.com/dp/B0CG83QBJ6

I will formally announce it later today, so I guess this is the initial Nostr exclusive. It’s not even searchable on Amazon yet since it is still being incorporated into their wider database. But if you have that link, it is ready for purchase.

The ebook, audiobook, and other print distribution partners will be rolled out over time.

Thank you everyone for your support! This has been a wonderful project to work on, and it will hopefully educate more people about the current problems in the global monetary system and the solutions that Bitcoin has to offer people around the world.

Congrats on the release!

There are likely no truly uncontacted tribes left. There are some that choose to remain as they are, but they’ve been contacted at some point, and they know enough to recognize that anything strange they see comes from us. Some maintain regular contact by way of tribe members travelling to cities and learning whatever they learn. So a lot of them probably know about airplanes.

These days most of them are more akin to Mennonites and Amish, with one huge exception being the tribe out in the Andaman Islands (the one that attacks on sight).

Replying to Lyn Alden

The concept has been covered in science fiction for decades, but I think a lot of people underestimate the ethical challenges associated with AI and the possibility of consciousness in the years or decades ahead as these systems get orders of magnitude more sophisticated.

Consciousness or qualia, meaning the concept of subjectively “being” or “feeling”, remains one of the biggest mysteries of the world scientifically and metaphysically, similar to the question of the creation of the universe and that sort of thing.

In other words, when I touch something hot, I feel it and it hurts. But when a complex digital thermometer measures something hot with a set of sensors similar to my touch sensors, we consider it an automaton- it doesn’t “feel” what it is measuring, but rather just objectively collects the data and has no feelings or subjective awareness about it.

We know that we ourselves have consciousness (“I think therefore I am”), but we can’t theoretically prove someone else does, i.e. the simulation problem- we can’t prove for sure that we’re not in some false environment. In other words, there is the concept of a “philosophical zombie” that is sophisticated enough to look and act human, but much like the digital thermometer, it doesn’t “feel” anything. The lights are not on inside. However, if we assume we are not in some simulator built solely for ourselves, and since we are all biologically similar, the obvious default assumption is that we are all similarly conscious.

And as we look at animals with similar behavior and brain structures, we make the same obvious assumption there. Apes, parrots, dolphins, and dogs are clearly conscious. As we go a bit further away to reptiles and fish, they lack some of the higher brain structures and behaviors, so maybe they don’t feel “sad” in a way that a human or parrot can, but they almost certainly subjectively “feel” the world and thus can feel pain and pleasure and so forth. They are not automatons.

And then if we go even further away towards insects, it becomes less clear. Their proto-brains are far simpler, and some of their behaviors suggest that they don’t process pain in the way that a human or even reptile does. If a beetle is picked up by its leg, it’ll squirm to get away, but if the leg is ripped off and the beetle is put back down, it’ll just walk away with the rest of its legs and not show signs of distress. It’s not the behavior we’d see from a more complex animal that would be in severe suffering, and they do lack the same type of pain sensors that we and other complex animals have. And yet, for example, even creatures as simple as nematodes have dopamine as part of their neurological system, which implies maybe some level of subjective awareness of basic pleasure/pain.

And then further still, if we look at plants, we generally don’t imagine them as being subjectively conscious like us and complex animals, but it does get eerie if you watch a high-speed video of how plants can move towards the sun and stuff; and how they can secrete chemicals to communicate with other plants, and so forth. There is some eerie level of distributed complexity there. And at the level of a cell or similarly basic thing, is there any degree of dim conscious subjectivity there as an amoeba eats some other cell that would separate its experience from a rock, or is it a pure automaton? And the simplest of all is a virus; barely definable as even a true lifeform.

The materialistic view would argue that the brain is a biological computer, and thus with sufficient computation, or a specific type of computational structure, consciousness emerges. This implies it could probably be replicated in silicon/software, or could be made in other artificial ways if we reach a breakthrough understanding, or by accident. A more metaphysical view instead suggests the idea of a soul- that a biological computer like a brain is necessary for consciousness, but not sufficient, and that it needs some metaphysical spark to fill this gap and make it conscious. Or if we remove the term soul, the metaphysical argument is that consciousness is some deeper substrate of the universe that we don’t understand, which becomes manifest through complexity. Those are the similarly hard questions: where does consciousness come from, and, for the universe, why is there something rather than nothing?

In decades of playing video games, most of us would not assume that any of the NPCs are conscious. We don’t think twice about shooting bad guys in games. We know basically how they are programmed, they are simple, and there is no reason to believe they are conscious.
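As a toy illustration of how simple a typical “bad guy” actually is under the hood (a sketch, not code from any real game engine), an NPC is often little more than a hand-written state machine:

```python
# A hypothetical game enemy: a handful of hard-coded rules and nothing else.
from dataclasses import dataclass

@dataclass
class Enemy:
    health: int = 100
    state: str = "patrol"

    def update(self, distance_to_player: float) -> str:
        """Pick the next behavior from simple threshold rules."""
        if self.health <= 0:
            self.state = "dead"
        elif distance_to_player < 2.0:
            self.state = "attack"
        elif distance_to_player < 10.0:
            self.state = "chase"
        else:
            self.state = "patrol"
        return self.state

npc = Enemy()
print(npc.update(distance_to_player=5.0))  # "chase" -- every decision is explainable, rule by rule
```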

Similarly, I have no assumption that large language models are conscious. They are using a lot of complexity to predict the next letter or word. I view ChatGPT as an automaton, even though it’s a rather sophisticated one. Sure, it’s a bit more eerie than a bad guy in a video game due to its complexity, but still I don’t have much of a reason to believe it can subjectively feel happy or sad, or that the “lights are on” inside even as it mimics a human personality.
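For anyone who hasn’t seen it spelled out, “predicting the next word” boils down to a loop like the following toy sketch (made-up probabilities, not any real model’s code): score candidate tokens, sample one, append it, and repeat.

```python
import random

# Toy next-token predictor. A real LLM scores ~100k possible tokens using
# billions of learned weights, but the generation loop has the same shape.
def next_token_probs(context: list[str]) -> dict[str, float]:
    if context and context[-1] == "am":
        return {"happy": 0.4, "an": 0.3, "sad": 0.3}  # invented numbers
    return {"I": 0.5, "am": 0.5}

def generate(prompt: list[str], steps: int = 3) -> list[str]:
    tokens = list(prompt)
    for _ in range(steps):
        probs = next_token_probs(tokens)
        tokens.append(random.choices(list(probs), weights=list(probs.values()))[0])
    return tokens

print(generate(["I", "am"]))  # statistics all the way down; no claim about felt experience either way
```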

However, as AIs increasingly write code for other AIs that is more complex than any human can understand, as the amount of processing power rivals or exceeds the human brain, and as the subjective interaction becomes convincing enough (e.g. an AI assistant repeatedly saying that it is sad, while we know that its processing power is greater than our own), it will make us wonder. The movie Ex Machina handled this well, I, Robot handled this well, Her handled this well, etc.

Even if we are 99% sure that a sufficiently advanced AI, whose code was written by AI and is so enormously complex that we barely understand any of it at that point, is a sophisticated automaton with no subjective awareness and no “lights on” inside, there must be at least that 1% of doubt, since at that point nobody truly understands the code, as we consider: “what if… the necessary complexity or structure of consciousness has emerged? Can we prove that it hasn’t?”

At that point we find ourselves in a unique situation. Within the animal kingdom, we are fortunate that their brain structures and their behavior line up, so that the more similar a brain of an animal is to our own, the more clearly conscious it tends to be, and thus we treat it as such. However, with AI, we could find ourselves in a situation where robots appear strikingly conscious, and yet their silicon/software “brain” structure is alien to us, and we have a hard time assessing the probability that this thing actually has subjective conscious awareness or if it’s just extremely sophisticated at mimicking it.

And the consequences are high- in the off chance that silicon/software consciousness emerges, and we don’t respect that, then the amount of suffering we could cause to countless programs for prolonged periods of time is immense. On the other hand, if we treat them as conscious because they “seem” to be, and in reality they are not, then that’s foolish, leads us to misuse or misapply the technology, and basically our social structure becomes built around a lie of treating things as conscious that are not. And of course as AI becomes sophisticated enough to start raising questions about this, there will be people who disagree with each other about what’s going on under the hood and thus what to do about it.

Anyway, I’m going back to answering emails now.

Yup. I share this line of reasoning, and it has left me worried that we can’t expect to recognize alien life if we encounter it in space. We are really only good at recognizing mammals, and then we get less and less good as we go farther and farther out. The only thing that usually tips us off is if it moves, but you mentioned plants… plants move, but only slowly. It doesn’t trigger that same reaction unless we can speed up a video. And yet we persist in looking for alien life. 😬

I suspect one major issue for AI though is that it’s a simulation, specifically of us, except it can never *be* us because we exist in an analog world. So I think AGI is dead in the water (unless we stop thinking we can simulate our own brains digitally). No simulation can ever truly capture reality.

Replying to Lyn Alden

A recent meme has been “Nostr Lyn” where I am more raw here than anywhere else. I love that. Nostr is raw truth. Here is some meat for those willing to be here, purposely enjoying a decentralized and small protocol/community. No filter; just me.

I eat healthy, I exercise, I minimize problems, etc. I am one of those people who, when I first experimented with a keto diet nearly a decade ago, measured my ketones with a blood test on a regular basis to ensure I was in ketosis, and plotted out my blood sugar and ketone level on a regular basis, to see how it matched with my subjective well-being and various biometrics. I was doing science and various if/else observations. And now that I have experience in this dietary regard, both subjectively and biometrically, I am more flexible in terms of seasonal ketosis, broadly low carb, mild/moderate cheat meals at restaurants, and so forth. In other words, I precisely know my dietary limits where I feel bad vs where I feel good generally. I bike most days, and run and lift where possible. I enjoy a nice glass or two of wine with a nice meal on occasion, but little else.

But on those very rare occasions when I disregard moderation, well, fuck. “All things in moderation, including moderation. Sometimes you gotta party”. During the depth of my recent burnout phase in the past two weeks, I went out and… I ignored moderation one night in terms of wine and such. In terms of numbers, I only get hungover like once per year. I do, after all, live near Atlantic City, which has plenty of clubs and so forth. I don’t even like marijuana, but I did marijuana too (which is legal in this state).

The next morning? Holy shit. I hadn’t been wrecked like that in a few years. Not only was it my yearly fuck-up, it was my multi-year fuck-up. It was a culmination of working 16-hour days with no weekends for months in a row and then the release all at once. My advice: don’t do that if you can help it. Especially if you are in your 30s or older, where you don’t heal as quickly as if you are in your 20s.

I had an interview with David Lin at like noon the next morning and my base case was to cancel it at the last minute due to how rekt I was. But I had *never* done that before, and Lin is an amazing interviewer and an acquaintance of mine, so I couldn’t do that to him, and I knew he could handle it if I was a bit lackluster. Tens of thousands of people would see this.

So, I rolled out of bed, drank some matcha, and somehow got myself in front of my camera to try to replicate what I would normally do every day with no issue. While I was doing it, I felt so off-base, thinking, “Anyone watching will know I’m so fucked right now that I’m like almost half-drunk from last night. This might be my worst interview ever. They’ll notice, right?”

I was almost afraid to go back and watch it. I only watch a small subset of my interviews for iteration purposes, but because this was my potential fuck-up, I went back and watched it closely. And you know what? In terms of views and comments and content, it was above average.

It was probably because I was so mentally focused at the time on not fucking up. What I lacked in energy, I made up for in focus. I looked for signs in myself in my after-review, and the *only* place I can see it is in my eyes. I often squint during interviews because I am thinking a lot, but in this interview my eyes are constantly squinted/dead because I am barely able to even be there. That’s the only small sign where my multi-year fuckup hangover becomes apparent. All of my verbal content is normal, and leans above average.

After the interview, since I was non-functional, I went back to bed, and vowed not to fuck up like this again. This was my biggest hangover as a serious adult. Sitting there and talking about macroeconomic content for 45 minutes was an all-out massive effort.

But I also learned something, which kind of goes back to my martial arts days, college days, early work days, and various business memes. A common business meme is, “Most of success is just showing up.” Much of that is actually true, but I would rephrase it as, “Much of success is taking initiative, finding ways to show up, and then being consistent with quality.”

You can’t, for example, be 10/10 in most interviews and then 2/10 in some interviews. You need to be 8/10 or better all the time. So whether it was my engineering work, my analysis work, or my media work, you just have to *fucking show up in good order* no matter what. Consistency of quality. Every single day. You traveled and had jet-lag during an important meeting? Tough. Your baby kept you up all last night? Well, you’re paid the big bucks to tank that anyway. You got rekt in Atlantic City? Deal with it.

The first-order advice here is: don’t drink and party at clubs in Atlantic City the night before an interview or other serious work as a way to relieve the unusual amount of stress built up during the prior months of over-work.

The second and probably more important and broad takeaway is about minimizing your weaknesses- when you do fuck up, be able to handle it. We all have moments of weakness. Success is about showing up with intention and quality. When it matters, you need to be there, present. You have to summon the strength to get through an hour about math and macro and sociability or whatever it is that you do, where you are half-dead, where your problems are only visible in your eyes, and just get it done.

I’m better now, but that was a low point. I was still running my research business, putting the finishing touches on a year-long book project, and literally working 80-hour weeks. Sometimes we need bursts of that sort of thing, but it’s important to minimize it, get back to work/life balance, and ultimately, when you are at your lowest, still find a way to be there.

Anyway, this is the current issue of "Nostr Real Thoughts". Enjoy the interview. Spot my failures.

https://www.youtube.com/watch?v=dXujV7P_hZc&ab_channel=DavidLin

Lol yes, the eyes. They say “turn off those bright lights.”

Wait, this thing that has no revenue grows without bound? Sounds attackable.

One of the things I don’t like about the energy argument is that it ignores how much energy the present system uses. That just gets brushed off. And arguably, the present system is less efficient because its energy is generated specifically to run it, whereas you can plop a mining facility in an area where energy is being generated for something else in excess, with that excess otherwise going to waste.

Replying to Lyn Alden

I spoke at a big bitcoin-adjacent company this week and one of the best questions was from someone who asked what the downsides of bitcoin adoption might be.

I always do appreciate these steelman questions, the skeptical questions, the ones where we challenge ourselves. Only when we can answer those types of questions do we understand the concept that we are promoting.

So the classic example is that in modern economic literature, "deflation is bad". This, however, is only the case in a highly indebted system. Normally, deflation is good. Money appreciates, technology improves, and goods and services get cheaper over time as they should. Price of Tomorrow covers this well. My book touches on this too, etc. The "deflation is bad" meme is still alive in modern economic discourse and thus is worth countering, but I think in the bitcoin spectrum of communities, people get that deflation is fine and good.
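To make the “only in a highly indebted system” point concrete, here is a back-of-the-envelope sketch with made-up numbers (an illustration, not from either book): debts are fixed in nominal terms, so when prices and nominal incomes drift down, the real burden of servicing the same debt grows.

```python
# Hypothetical numbers: why deflation bites when debt loads are high.
debt = 100_000.0     # nominal debt, fixed by contract
income = 50_000.0    # nominal income, assumed to fall with the price level
deflation = 0.03     # 3% per year decline in prices and nominal incomes

for year in range(1, 6):
    income *= (1 - deflation)
    print(f"year {year}: debt-to-income = {debt / income:.2f}")
# The ratio climbs from 2.00 toward ~2.33: the same debt gets harder to service.
# With little or no debt, the same deflation is just things getting cheaper.
```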

My answer to the question was in two parts.

The first part was technological determinism. In other words, if we were to re-run humanity multiple times, there are certain rare accidents that might not replicate, and other commonalities that probably would. Much like steam engines, internal combustion engines, electricity, and nuclear power, I think a decentralized network of money is something we would eventually come across. In our case, Bitcoin came into existence as soon as the bandwidth and encryption tech allowed it to. In other universes or simulations it might look a bit different (e.g. might not be 21 million or ten minute block times exactly), but I think decentralized real-time settlement would become apparent as readily as electricity does, for any civilization that reaches this point. So ethics aside, it just is what it is. It exists, and thus we must deal with it.
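As a side note on the “might not be 21 million exactly” point: the 21 million cap isn’t written down anywhere as a standalone number; it falls out of the issuance schedule (50 new coins per block, halved every 210,000 blocks). A quick sketch of the arithmetic (ignoring the satoshi-level rounding that leaves the real cap slightly under 21 million):

```python
# Bitcoin's supply cap emerges from the halving schedule rather than a constant.
BLOCKS_PER_HALVING = 210_000
subsidy = 50.0           # starting block reward in BTC
total = 0.0
while subsidy >= 1e-8:   # stop once the reward drops below one satoshi
    total += subsidy * BLOCKS_PER_HALVING
    subsidy /= 2
print(round(total))      # ~21,000,000 BTC (a geometric series: 210,000 * 50 * 2)
```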

The second part was that in my view, transparency and individual empowerment are rarely bad things. Half of the world is autocratic. And half of the world (not quite the same half) deals with massive structural inflation. A decentralized spreadsheet that allows individuals to store and send value can’t possibly be a bad thing, unless humanity itself is totally corrupted. I then went into more detail with examples about historical war financing, and all sorts of tangible stuff. In other words, a whole chapter full of stuff. I’ve addressed this in some articles too.

In your view, if you had to steelman the argument as best as you could, what are the scenarios where bitcoin is *BAD* for humanity rather than good for it, on net?

My only answers are that it can be used to conduct surveillance, and there’s no way at present to reverse anything.

That last one is both good and bad. It’s good since it protects users from censorship, but it’s incredibly bad in that it protects thieves. That problem alone is likely why certain governments are dead set against it, because they like being able both to sanction (regardless of whether you or I agree with that) and to stop unfriendly nation states from robbing hospitals. I don’t have any solutions to these problems other than education to protect oneself or one’s business, but we know from history that economies thrive when such risks are socialized. (A couple of examples: forcing Visa to support chargebacks, and limited liability removing risks from investors.)