💯
Discussion
Yeah. There is quite a bit of confusion in people’s understanding of how mathematics relates to the real world. Without getting into Wigner, I think the thing I would say is that the best we can ever do, given what we actually know about the limits of our knowledge and our capacity to come to know things through measurement, consistently demonstrates that objectivity, in the way the average person thinks about it, simply does not exist. Which is a discomforting thought for most people. But it is also overwhelmingly the conclusion that the empirical evidence, and the models we use to interpret that evidence and successfully make predictions with, point to.
I’m also under no illusions that getting people to believe or care about this is a realistic goal. People want things to be cut and dried. Black and white. True or false.
It will be interesting to see if Machine Learning is able to brute force any deeper / fuller model of physics than the Standard Model.
Some of the exascale machines coming online will be looking at hypothesising new models of physics over the coming decade.
If that did bear fruit, it would be quite a thing.
Not sure how I would feel about that.
In the short to medium term I think it’s unlikely. In fact, I would say that this is exactly the kind of thing that large language models are probably going to be consistently incapable of being useful for, in any parameter space. Largely because nothing we’ve done in the world of deep learning or reinforcement-based learning systems currently manifests any emergent understanding of model-dependent reasoning. They are essentially linear prediction models, and those probably do not lead to any insights in that direction.
That said, I expect that we will eventually figure out how to build AI that can function in that way. But it might actually be further into the future than most people expect. Even given the dramatic advances we’re seeing in generative AI right now. It’s possible we’re quickly approaching a local maximum in that sense.
Mike,
I personally built an AI system that used laser interferometry data from bouncing an IR laser off the external wall of a pipe… and that was able to accurately predict multiphase flow conditions inside the pipe.
It had zero parallels with LLMs.
It just brute forced models for a lot of very complex fluid mechanics from an enormous amount of data.
The stuff in the press this week is just consumer AI. The stuff we did in industry is a very different beast. It doesn’t talk. It just understands things.
I want to know more!
Sure, we started installing laser interferometers into the UK’s National Flow Measurement Facility in 2017.
We collected an enormous amount of very precise data at an extremely fast sample rate, essentially on the excitation of pipe walls as they transport complex multiphase fluids (water, gas, oil, solids). From that we could determine all kinds of flow regimes (stratified, mist, slugging, etc.) and the flow rates of all the phases.
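For flavour, the classification half of a pipeline like that can be sketched in a few lines. Everything below is hypothetical and hugely simplified (synthetic tones instead of real wall-vibration traces, a nearest-centroid classifier instead of a large trained model), just to show the shape of the idea: band-averaged power spectra as features, flow regime as the label.

```python
import numpy as np

rng = np.random.default_rng(0)

def synth_signal(dominant_hz, n=1024, fs=1024.0):
    """Toy stand-in for a wall-vibration trace: one dominant tone plus noise."""
    t = np.arange(n) / fs
    return np.sin(2 * np.pi * dominant_hz * t) + 0.3 * rng.standard_normal(n)

def spectral_features(x, n_bands=8):
    """Band-averaged power spectrum, normalised -- a crude vibration fingerprint."""
    p = np.abs(np.fft.rfft(x)) ** 2
    f = np.array([b.mean() for b in np.array_split(p, n_bands)])
    return f / f.sum()

# Two toy "flow regimes": slugging excites low bands, mist excites high bands.
X = np.array([spectral_features(synth_signal(hz))
              for hz in ([20] * 50 + [200] * 50)])
y = np.array([0] * 50 + [1] * 50)   # 0 = slugging, 1 = mist

# Nearest-centroid classifier: roughly the simplest thing that could work.
centroids = np.stack([X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)])

def predict(features):
    return int(np.argmin(np.linalg.norm(centroids - features, axis=1)))

test_slug = spectral_features(synth_signal(20))
test_mist = spectral_features(synth_signal(200))
print(predict(test_slug), predict(test_mist))  # -> 0 1
```

The real system would obviously replace the synthetic tones with measured interferometry data and the centroid rule with something far more capable, but the feature-extraction-then-classify shape is the same.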
It was extremely difficult and we were well ahead of our time.
I did a deal with the Flow Measurement lab to get access for collecting a huge amount of data, and also did deals with supermajors to get access to industrial sites to collect field data. I then did a deal with a UK supercomputer centre to do the training.
We ran interferometers in many global locations concurrently, with each one connected to a network of fibre optic cables.
Would you agree that LLMs and deep learning more generally, has taken a lot of the wind out of the sails of a lot research in symbolic AI, that I think is actually on the right path towards AI helping us unlock scientific understanding.
Weird, I wrote a long reply here and it’s gone?
Maybe I didn’t submit properly?
The answer is yes!
I also think LLMs might actually slow human innovation, for reasons I will maybe explain again later. Yes, I know this sounds stupid and crazy.
Cool. Then I think we’re very much on the same page, because that’s kind of the point I was getting at above. I’ve developed a knee-jerk reaction to people talking about AI doing physics and helping to advance scientific understanding, because what they’re really saying is that LLMs are going to do this, and I think LLMs are absolutely not capable of driving advances in those arenas.
100%
LLMs are weak at the frontiers of human knowledge because there is no training data there. For LLMs the abyss is abyssal: entirely void of anything. It doesn’t exist in the training data, so it doesn’t exist.
For LLMs, hypothesising out past the frontier of knowledge, beyond the realm of training data, is usually fatal. LLMs are limited to human conceptual space, which is a comparatively small space.
I do think LLMs will allow extremely high competence, at scale, within the domain of existing human knowledge.
But I doubt they will be hypothesising new concepts or doing any real innovation sans prior patterns.
The system we built with the interferometry got to the point of requesting specific data (flow conditions) for conditions where it had stubbornly high error rates. Sometimes it would request adjacent data, or data where results were seemingly already good, but this was to achieve lower error across the whole solution space.
This is a kind of self-awareness that has escaped the media and the public. Our AI couldn’t talk, but it knew its strengths and weaknesses and was curious about areas where it had shortfalls. It demanded specific data to achieve specific further learning objectives.
ChatGPT doesn’t demand information to self-improve; it merely responds to prompts.
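That "ask for the data you’re worst at" behaviour is essentially active learning. A toy sketch of the ranking step, with made-up region names and error numbers purely for illustration:

```python
import numpy as np

# Hypothetical validation-error map over regions of the flow-condition space.
# All numbers are invented for illustration.
regions = ["stratified", "mist", "slug", "annular", "bubbly"]
val_error = np.array([0.04, 0.03, 0.31, 0.12, 0.05])

def next_data_request(errors, k=2):
    """Rank regions by current model error and request more data for the
    k worst ones -- the core loop of an active-learning data campaign."""
    worst = np.argsort(-errors)[:k]
    return [regions[i] for i in worst]

print(next_data_request(val_error))  # -> ['slug', 'annular']
```

The "adjacent data" requests described above would correspond to adding an exploration term to this score rather than ranking on raw error alone.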
There are lots of nuances across all these products and technologies that are totally missed by the public.
Having been through the loop, the main thing to get right is the design of the data schema, and the design of the sensory surface that generates that schema. Any dataset that was not specifically designed for the task of training a specific model is usually a major headwind.
Any future projects I do, I would almost always want to generate thoughtfully designed data, rather than do a project because some legacy dataset merely exists.
Maximising signal is always worth it.
We shouldn't underplay how disruptive LLMs are going to be for a LOT of industries. They do bring a lot of utility to the table, and engineers are going to find lots of insane and innovative ways to apply them.
That said, they are not on track towards being general intelligence in any respect. Which isn't even an indictment of them. Narrow AI applications will, on their own, reinvent the human-computer interface. But replicating anything that even comes close to emulating human cognition touches on complexities we have barely scratched the surface of understanding, in neuroscience, philosophy of mind, and artificial intelligence research alike.
Yeah, LLMs will change everything, because the percentage of people doing innovation or anything beyond prior knowledge is probably <0.1% of people.
99.9% of people are going to be disrupted, which in reality means everyone will be disrupted.
I just think LLMs overall might slow down how fast the frontier advances in science and technology, and will instead usher in an age of knitting together much more tightly all the stuff humans collectively know already.
There’s probably a big one time economic boost from knitting together collective knowledge.
Shannon did indeed prove the interlacing of information and thermodynamics, but that requires a shitload of time to play out.
The zero entropy I'm referring to is within human time scales: a bitcoin will equal a bitcoin today, in a year, in a hundred years, or many thousands of years into the future, without any degradation.
Now, I always objected to how hasty the Austrians were to claim Satoshi as one of their own (we all know Satoshi never once mentioned his/her politics or economic leanings). The Austrian version of hard money is gold, which has a definable stock-to-flow, while Bitcoin's S2F goes to infinity.
At first it seems reasonable for the Austrians to assume bitcoin is a harder version of gold; after all, infinity is a bigger number than 62.
But infinite S2F gives birth to a new behaviour that cannot be found in the gold markets.
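That divergence is easy to see numerically. A back-of-envelope sketch (idealised: exact ~10-minute blocks, lost coins ignored):

```python
# Stock-to-flow = existing stock / annual new issuance. Bitcoin's block
# subsidy halves every 210,000 blocks (~4 years), so flow -> 0 and S2F
# grows without bound, whereas gold mining keeps gold's S2F near ~62.
blocks_per_year = 52_560        # ~10-minute blocks
blocks_per_era = 210_000
subsidy = 50.0                  # BTC per block in the first era

stock = 0.0
for era in range(8):            # eras 0..7 cover roughly 2009-2040
    annual_flow = subsidy * blocks_per_year
    s2f = stock / annual_flow
    print(f"era {era}: stock={stock:>12,.0f} BTC  S2F={s2f:7.1f}")
    stock += subsidy * blocks_per_era
    subsidy /= 2
```

By the fourth halving era the S2F already overtakes gold's ~62, and each subsequent halving roughly doubles it again, with no upper bound.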
When a sole owner of bitcoin pursues his selfish desires and adds value to his bitcoin, instantaneously each and every other bitcoin in existence is enhanced in value by the same measure. And the spooky thing is that this information transmission is faster than the speed of light.
Just as the Austrians assumed Bitcoin's time-impervious nature and absolute security of ownership is, in their own words, the ultimate validation of Von Mises,
can we also assume this instant transmission of value from individual to collective is the ultimate validation of Communism?
And that separation of money and state actually leads to the perfect alignment of individual and state?
Then if this marriage of Von Mises and Marx is indeed valid, no Western individualism can describe this new gestalt, nor can any collectivism.
IMO the only thought system in the world ready for this new unity is the little-known African philosophy of Ubuntu:
"I am therefore we are...."
Let me credit the wonderful Mischa Consela for these thoughts. She is a superb mathematician and philosopher from Madeira, Portugal. Sadly, she was thrown out of Bitcoin meetups and excluded from discussions by the self-appointed Bitcoin OG governors who are proselytising their rigid idea that Bitcoin is exclusively anarcho-capitalist (it appears that ancaps are very uncomfortable being told they are Pristinated Communists).