Mike Brock

It has nothing to do with regulations. It has everything to do with incentive structures in organized societies. I have colleagues who openly believe the thesis that all money except bitcoin will die. Our status as a publicly traded company imposes no restriction on them saying so. And if I believed that were the most likely outcome, I would feel no compunction about saying it, either.

I think Bitcoiners may be surprised to find that no matter how successful Bitcoin becomes, state-issued currencies will continue to exist, and bitcoin will be something more like a world reserve currency: one that central banks hold, that people use as an inflation hedge, and that people resort to as freedom money in hostile environments.

This is a very real possibility for what Bitcoin's future looks like.

Neither? šŸ¤·šŸ»ā€ā™‚ļø

Probably one of the most stress-filled birthdays I can remember having in a long time. There's always next year, though!

100%

LLMs are weak at the frontiers of human knowledge because there is no training data there. For LLMs, the abyss is abyssal: entirely void of anything. It doesn't exist in the training data, and so it doesn't exist.

For LLMs, hypothesising from the realm of training data out beyond the frontier of knowledge is usually fatal. LLMs are limited to human conceptual space, which is a relatively small space.

I do think LLMs will allow extremely high competence within the domain of human knowledge, at scale.

But I doubt they will be hypothesising new concepts or doing any real innovation sans prior patterns.

The system we built with the interferometry got to the point of requesting specific data (flow conditions) for conditions where it had stubbornly high error rates. Sometimes it would request adjacent data, or data where results already seemed good, but this was in service of reducing error across the whole solution space.
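Roughly, that loop is a form of active learning. Here is a minimal sketch of the error-driven data-request step, assuming a model with a `predict()` method and per-condition validation sets; all names here are illustrative stand-ins, not our actual system:

```python
import numpy as np

def request_next_conditions(model, validation_sets, top_k=3):
    """Rank held-out flow conditions by model error and return the worst
    performers as the next data-collection requests.

    `model` and `validation_sets` (a mapping of condition label to
    (inputs, targets) arrays) are hypothetical stand-ins.
    """
    errors = {}
    for condition, (x, y) in validation_sets.items():
        pred = model.predict(x)
        errors[condition] = float(np.mean((pred - y) ** 2))  # MSE per condition

    # Ask for more data where error stays stubbornly high.
    ranked = sorted(errors, key=errors.get, reverse=True)
    return ranked[:top_k], errors
```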

This is a kind of self-awareness that has escaped the media and the public. Our AI couldn't talk, but it knew its strengths and weaknesses and was curious about areas where it had shortfalls. It demanded specific data to achieve specific further learning objectives.

ChatGPT doesn't demand information to self-improve; it merely responds to prompts.

There are lots of nuances across all these products and technologies that are totally missed by the public.

Having been through the loop, I'd say the main thing to get right is the design of the data schema, and the design of the sensory surface that generates that schema. Any dataset that was not specifically designed for the task of training a specific model is usually a major headwind.
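To make that concrete, here is a hypothetical sketch of designing the schema and the sensory surface together: every field, unit, and valid range is decided before any data is collected, and bad readings are rejected at the boundary rather than at training time. All field names are invented for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FlowSample:
    """One record of a training schema designed up front for the model.

    Every field exists because the model needs it; units and valid ranges
    are fixed before collection. All fields are hypothetical.
    """
    timestamp_s: float       # acquisition time, seconds
    flow_rate_m3s: float     # volumetric flow, m^3/s
    temperature_k: float     # fluid temperature, kelvin
    fringe_phase_rad: float  # raw interferometer reading, radians
    sensor_id: str           # which physical sensor produced the reading

    def __post_init__(self):
        # Enforce the designed operating range at the sensory surface.
        if not (0.0 < self.temperature_k < 1000.0):
            raise ValueError("temperature outside designed operating range")
```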

Any future projects I do, I would almost always want to generate thoughtfully designed data, rather than do a project because some legacy dataset merely exists.

Maximising signal is always worth it.

We shouldn't underplay how disruptive LLMs are going to be for a LOT of industries. They do bring a lot of utility to the table, and engineers are going to find lots of insane and innovative ways to apply them.

That said, they are not on track toward being general intelligence in any respect. Which isn't even an indictment of them. Narrow AI applications will, on their own, reinvent the human-computer interface. But replicating anything that even comes close to emulating human cognition interfaces with complexities we have barely scratched the surface of understanding, in neuroscience, in philosophy of mind, and in artificial intelligence research.

Keeping any complex system stable requires continuous adaptation to its environment. Even our bodies rely on this: the phase space of stability we describe as homeostasis is a series of complex feedback mechanisms for coping with environmental change, in temperature, threats, injury, infectious pathogens, and the availability of energy inputs (food, etc.). If all of these feedback mechanisms were not constantly adjusting for these stresses, we could die very quickly.

This all adds up to your own metabolic processes being ā€œmetastableā€. They are stable given a certain set of parameters, but there is no set of parameters that is universally stable. No arrangement of these systems could be devised that would make us immortal across an unbounded range of temperatures (the chemical processes that keep us alive are temperature-dependent, which is why the body must maintain a constant internal temperature), nor could the body isolate itself from entropy, or survive below a certain minimum energy input.
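As a toy illustration of metastability, consider a feedback loop with a finite corrective budget: it holds its set point while the environment stays inside the designed range, and loses it outside that range. The numbers here are arbitrary:

```python
def regulate(internal_temp, ambient_temp, gain=0.3, capacity=5.0):
    """One homeostasis step: push internal temperature back toward a set
    point, with a bounded corrective capacity. Purely illustrative."""
    SET_POINT = 37.0
    drift = 0.1 * (ambient_temp - internal_temp)            # environment pulls us away
    correction = gain * (SET_POINT - internal_temp)         # feedback pushes us back
    correction = max(-capacity, min(capacity, correction))  # finite metabolic budget
    return internal_temp + drift + correction

t = 37.0
for _ in range(100):
    t = regulate(t, ambient_temp=20.0)   # settles near the set point: stable here

for _ in range(100):
    t = regulate(t, ambient_temp=500.0)  # bounded correction is overwhelmed;
                                         # temperature is dragged far past survivable
```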

Society and economic systems are exactly like this. Believing that we can formulate a perfectly stable economic system, and a set of norms and values that tracks close to some normative ideal, and that this solution can be distilled down to a set of specific rules and abstractions (represented by some monetary theory, conception of property rights, or natural-law account of how to achieve optimal cooperation) is really, in my opinion, a fool's errand. And I recognize I'm indicting a lot of people's closely held political and economic beliefs when I say that. But I just happen to believe it's true.

I also believe this kind of thinking can be outright dangerous, because it inflates people's confidence that ā€œburning it all downā€ is a safe and/or desirable thing to do. They are so certain of their solutions, and so certain they know what the problems are, that they're not even open to the possibility that they could be wrong. So they'd be willing to take these giant risks with millions, if not billions, of people, believing they're emancipating them, without considering for a brief moment that the equities of human interest, in aggregate, may not be cognizable by any one political or economic theory.

This is why I never jump on any ā€œmaximalistā€ bandwagons in really anything in my life, to the great frustration of many in these conversations.

Cool. Then I think we're very much on the same page, because that's kind of the point I was getting at above. I've developed a knee-jerk reaction to people talking about AI doing physics and helping to advance scientific understanding, because what they're really saying is that LLMs are going to do this. And I think LLMs are absolutely not capable of driving advances in those arenas.

In the short-to-medium term I think it's unlikely. In fact, I would say this is exactly the kind of thing I expect large language models to be consistently incapable of being useful for, in any parameter space. Largely because nothing we've done in the world of deep learning or reinforcement-based learning systems currently manifests any emergent understanding of model-dependent reasoning. They are essentially linear prediction models, which probably do not lead to any insights in that direction.

That said, I expect we will eventually figure out how to build AI that can function in that way. But it might actually be further into the future than most people expect, even given the dramatic advances we're seeing in generative AI right now. It's possible we're quickly approaching a local maximum in that sense.

Yeah. There is quite a bit of confusion in people's understanding of how mathematics relates to the real world. Without getting into Wigner, I think the thing I would say is this: the best we can ever do, given what we actually know about the limits of our knowledge and our capacity to come to know things through measurement, consistently demonstrates that objectivity, in the way the average person thinks about it, simply does not exist. Which is a discomforting thought for most people. But it is also overwhelmingly the conclusion that the empirical evidence, and the models we use to interpret that evidence and successfully make predictions with, point to.