Sure, we started installing laser interferometers into the UK’s National Flow Measurement Facility in 2017.

We collected an enormous amount of very precise data at an extremely fast sample rate, essentially on the excitation of pipe walls as they transport complex multiphase fluids (water, gas, oil, solids). From that we could determine all kinds of flow regimes (stratified, mist, slugging, etc.) and the flow rates of all the phases.
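The general idea can be sketched in a few lines. This is purely illustrative: the regimes, characteristic frequencies, signal model, and nearest-centroid classifier below are all invented for the sketch, not the facility's actual method.

```python
import numpy as np

rng = np.random.default_rng(1)

def synth_signal(dominant_hz, n=4096, fs=10_000.0):
    """Toy pipe-wall vibration trace: one dominant tone plus noise."""
    t = np.arange(n) / fs
    return np.sin(2 * np.pi * dominant_hz * t) + 0.1 * rng.standard_normal(n)

def spectral_features(signal):
    """Normalised magnitude spectrum as a crude feature vector."""
    mag = np.abs(np.fft.rfft(signal))
    return mag / mag.sum()

# Pretend each flow regime excites the wall at a characteristic frequency.
regimes = {"stratified": 50.0, "slugging": 200.0, "mist": 800.0}
centroids = {name: spectral_features(synth_signal(hz))
             for name, hz in regimes.items()}

def classify(signal):
    """Assign the regime whose reference spectrum is nearest in L2."""
    feats = spectral_features(signal)
    return min(centroids, key=lambda r: np.linalg.norm(feats - centroids[r]))
```

In practice the feature design and the mapping from spectra to regimes and flow rates would be far richer, but the pipeline shape (sensor signal, spectral features, learned mapping) is the same.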

It was extremely difficult and we were well ahead of our time.

I did a deal with the Flow Measurement lab to get access for collecting a huge amount of data, and also did deals with supermajors for access to industrial sites to collect field data. I then did a deal with a UK supercomputer centre to do the training.

We ran interferometers in many global locations concurrently, with each one connected to a network of fibre optic cables.



Would you agree that LLMs, and deep learning more generally, have taken a lot of the wind out of the sails of research in symbolic AI, which I think is actually on the right path towards AI helping us unlock scientific understanding?

Weird, I wrote a long reply here and it’s gone?

Maybe I didn’t submit properly?

The answer is yes!

I also think LLMs might actually slow human innovation, for reasons I will maybe explain again later. Yes, I know this sounds stupid and crazy.

Cool. Then I think we’re very much on the same page, because that’s kind of the point I was getting at above. I’ve developed a knee-jerk reaction: when people talk about AI doing physics and helping to advance scientific understanding, what they’re really saying is that LLMs are going to do this. And I think LLMs are absolutely not capable of driving advances in those arenas.

100%

LLMs are weak at the frontiers of human knowledge because there is no training data there. For LLMs the abyss is truly abyssal, entirely void of anything: if it doesn’t exist in the training data, it doesn’t exist.

For LLMs, hypothesising from the realm of training data out beyond the frontier of knowledge is usually fatal. LLMs are limited to human conceptual space, which is a relatively small space.

I do think LLMs will enable extremely high competence, at scale, within the domain of existing human knowledge.

But I doubt they will hypothesise new concepts or do any real innovation sans prior patterns.

The system we built around the interferometry got to the point of requesting specific data (flow conditions) for conditions where it had stubbornly high error rates. Sometimes it would request adjacent data, or data where results were seemingly already good, in order to achieve less error across the solution space.

This is a kind of self-awareness that has escaped the media and the public. Our AI couldn’t talk, but it knew its strengths and weaknesses and was curious about areas where it had shortfalls. It demanded specific data to achieve specific further learning objectives.
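This behaviour is essentially error-driven active learning: pick the conditions where the current model performs worst and request fresh training data there. A minimal sketch, with all names, grids, and numbers invented for illustration (not the system described above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical grid of flow conditions, e.g. (gas rate, liquid rate) indices.
conditions = np.array([(g, l) for g in range(5) for l in range(5)], float)

# Per-condition validation error of the current model (simulated here;
# in a real loop this would come from held-out measurements).
errors = rng.uniform(0.01, 0.30, size=len(conditions))

def request_next_conditions(conditions, errors, k=3):
    """Return the k flow conditions with the highest validation error,
    i.e. where new training data is expected to help the most."""
    worst = np.argsort(errors)[::-1][:k]
    return conditions[worst]

requests = request_next_conditions(conditions, errors)
```

A fuller version would also sample adjacent or well-performing conditions, as the post notes, to keep error low across the whole solution space rather than only at the worst points.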

ChatGPT doesn’t demand information in order to self-improve; it merely responds to prompts.

There are lots of nuances across all these products and technologies that are totally missed by the public.

Having been through the loop, the main thing to get right is the design of the data schema, and the design of the sensory surface that generates data conforming to that schema. Any dataset that was not specifically designed for the task of training a specific model is usually a major headwind.

In any future project I do, I would almost always want to generate thoughtfully designed data, rather than do a project merely because some legacy dataset exists.

Maximising signal is always worth it.

We shouldn't underplay how disruptive LLMs are going to be for a LOT of industries. They do bring a lot of utility to the table, and engineers are going to find lots of insane and innovative ways to apply them.

That said, they are not on track towards general intelligence in any respect. Which isn't even an indictment of them. Narrow AI applications will, on their own, reinvent the human-computer interface. But replicating anything that even comes close to emulating human cognition involves complexities we have barely scratched the surface of understanding, whether in neuroscience, philosophy of mind, or artificial intelligence research.

Yeah, LLMs will change everything, because the percentage of people doing innovation, or anything beyond prior knowledge, is probably <0.1% of people.

99.9% of people are going to be disrupted, which in reality means everyone will be disrupted.

I just think LLMs might overall slow down how fast the frontier advances in science and technology, and will instead usher in an age of knitting together, much more tightly, all the stuff humans collectively already know.

There’s probably a big one-time economic boost from knitting together collective knowledge.