I definitely don't think Bitcoin will fail.
It has nothing to do with regulations. It has everything to do with incentive structures in organized societies. I have colleagues who openly believe in the thesis that all money except for bitcoin will die. Our status as a publicly traded company does not impose any restriction on them saying that. And if I believed that was the most likely outcome, I would feel no compunction about saying so, either.
How do I not believe in a bitcoin future? The scenario I lay out is basically bitcoin becoming the most important monetary asset in the world.
Yes, I'm betting they will adapt. Bitcoin will force them to.
Sure. It will literally be accepted almost universally, even when it's not the operative unit of account. But it may be seen as less economically efficient to use for exchange in many contexts.
All-or-nothing thinking is also a recipe for extreme disappointment.
I think Bitcoiners may be surprised that, no matter how successful Bitcoin becomes, state-issued currencies will continue to exist, and bitcoin will function more like a world reserve currency: one that central banks hold, that people use as an inflation hedge, and that people resort to as freedom money in hostile environments.
This is a very real possibility as to what Bitcoin's future looks like.
Probably one of the most stress-filled birthdays I can remember having in a long time. There's always next year, though!
We shouldn't underplay how disruptive LLMs are going to be for a LOT of industries. They do bring a lot of utility to the table, and engineers are going to find lots of insane and innovative ways to apply them.
That said, they are not on track toward general intelligence in any respect. Which isn't even an indictment of them. Narrow AI applications will, on their own, reinvent the human-computer interface. But replicating anything that even comes close to emulating human cognition interfaces with complexities we have barely scratched the surface of understanding, whether in neuroscience, philosophy of mind, or artificial intelligence research.
Keeping any complex system stable requires continuous adaptation to its environment. Even our bodies maintain their phase space of stability, which we describe as homeostasis, through a series of complex feedback mechanisms that cope with environmental changes: temperature, threats, injury, infectious pathogens, changes in the availability of energy inputs (food, etc.). If all of these feedback mechanisms were not constantly adjusting for these stresses, we could die very quickly.
This all adds up to your own metabolic processes being "metastable". They are stable given a certain set of parameters, but there is no set of parameters that is universally stable. There is no arrangement of these systems that could be so devised as to make us immortal in the face of an unbounded range of temperatures (the chemical processes that keep us alive are temperature-dependent, which is why the body must maintain a constant internal temperature), nor could our body isolate itself from entropy, or survive below a certain minimum energy input.
Society and economic systems are exactly like this. Believing that we can formulate a perfectly stable economic system and set of norms and values that tracks close to some normative ideal, and that this solution can be distilled down to a set of specific rules and abstractions (represented by some monetary theory, conception of property rights, or natural-law concept of how to achieve optimal cooperation), is really, in my opinion, a fool's errand. And I recognize I'm indicting a lot of people's closely held political and economic beliefs when I say that. But I just happen to believe it's true.
I also believe this kind of thinking can be outright dangerous, because it inflates people's confidence that "burning it all down" is a safe and/or desirable thing to do. They are so certain of their solutions, and so certain they know what the problems are, that they're not even open to the possibility that they could be wrong. So they'd be willing to take these giant risks for millions, if not billions, of people (believing they're emancipating them) without considering for a brief moment that maybe the equities of human interest, in aggregate, are not cognizable by any one political or economic theory.
This is why I never jump on any "maximalist" bandwagons in really anything in my life, to the great frustration of many in these conversations.
Cool. Then I think we're very much on the same page, because that's kind of the point I was getting at above. I've developed a knee-jerk reaction to people talking about AI doing physics and helping to advance scientific understanding, because what they're really saying is that LLMs are going to do this. And I think LLMs are absolutely not capable of driving advances in those arenas.
Would you agree that LLMs, and deep learning more generally, have taken a lot of the wind out of the sails of research in symbolic AI, which I think is actually on the right path toward AI helping us unlock scientific understanding?
In the short-to-medium term I think it's unlikely. In fact, I would say this is exactly the kind of thing that I expect large language models will be consistently incapable of being useful for, in any parameter space. Largely because nothing we've done in the world of deep learning and reinforcement-based learning systems is currently able to manifest any emergent understanding of model-dependent reasoning. They are essentially linear prediction models, which probably do not lead to any insights in that direction.
That said, I expect that we will eventually figure out how to build AI that can function in that way. But it might actually be further into the future than most people expect, even given the dramatic advances we're seeing in generative AI right now. It's possible we're quickly approaching a local maximum in that sense.
I'm also under no illusions that getting people to believe or care about this fact is a realistic goal. People want things to be cut and dried. Black and white. True or false.
Yeah. There is quite a bit of confusion in people's understanding of how mathematics relates to the real world. Without getting into Wigner, I think the thing I would say is that the best we can ever do, given what we actually know about the limits of our knowledge and our capacity to even come to know things through measurement, consistently demonstrates that objectivity, in the way the average person thinks about it, simply does not exist. Which is a discomforting thought for most people. But it is also overwhelmingly the conclusion that the empirical evidence, and the models we use to interpret that evidence and successfully make predictions with, point to.

