You cannot define an observer as a biological brain. That's absurd; it's like saying a complex structure made by physical computations is required to define what physics is.
Thoughts on "Observer Theory" [1]:
An ant doesn't have the capacity to fully observe, let alone comprehend the purpose of, the microscope pointed at it.
The concept of a microscope or an automobile is incomprehensible to something like a bacterium even when the bacterium is in physical contact with it.
There's a fundamental limit to what any creature is capable of observing (including humans and powerful #AI agents). There are very likely physical processes happening all around us that we, despite all of our technology, have no hope of understanding because we're computationally bounded just as the bacterium is.
Wolfram's writings on "Observer Theory" are fascinating for so many reasons. For example, he argues that the fundamental laws of #physics that we base our civilization on are dependent on us being observers like we are.
It's worth rephrasing: in his model of physics, we didn't evolve in a universe that happened to have these specific laws; rather, because we're computationally bounded observers, we perceive the universe as having these specific laws.
That's not to say the laws we've experienced aren't real. They are! It's just that if we were far more capable we might not perceive things like #quantum superposition as confusing.
Think about the ideal quantum computer: it experiences all branches of the wave function concurrently and can make sense of them well enough to perform useful calculations. It can navigate those branches naturally.
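To make the branch-navigation idea a bit more concrete, here's a minimal sketch in plain Python (a hypothetical two-gate, single-qubit circuit, not any particular hardware): a Hadamard gate splits the state into two branches, and a second Hadamard interferes those branches so they recombine into a single definite answer. The useful computation happens precisely because all branches are processed at once.

```python
import math

def hadamard(state):
    """Apply a Hadamard gate to a single-qubit statevector [a0, a1]."""
    a0, a1 = state
    s = 1 / math.sqrt(2)
    return [s * (a0 + a1), s * (a0 - a1)]

# Start in the definite state |0>.
state = [1.0, 0.0]

# One Hadamard splits it into an equal superposition of both branches.
state = hadamard(state)
print([abs(a) ** 2 for a in state])  # roughly [0.5, 0.5]: both branches at once

# A second Hadamard interferes the branches; they recombine into |0>.
state = hadamard(state)
probs = [abs(a) ** 2 for a in state]
print(probs)  # roughly [1.0, 0.0]: interference yields a definite outcome
```

A computationally bounded observer sampling the intermediate state sees only a coin flip; the machine that works with both amplitudes simultaneously gets a deterministic result.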
What happens when we augment our senses with new sensors and supercomputers (e.g. brain-computer interfaces or even just AR goggles connected to powerful AI chips)? What will we learn?
1. https://writings.stephenwolfram.com/2023/12/observer-theory/
Discussion
Agreed! That’s why I also mentioned AI agents as computationally bounded. Any process capable of “observing” or modeling another process has limits, and those limits shape the “rules of nature” that it can perceive.
You can see this in action if you train a small neural net, for example. If you take a small 1-hidden-layer neural network and try to fit it to the movement of the planets around the sun, you won’t get the loss below some fairly high threshold: it can’t accurately learn that behavior. However, you can take an LLM, teach it the equations of motion and how to use a calculator, and it can predict the movements just fine.
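The capacity gap is easy to demonstrate with an even simpler model class than a small neural net. The sketch below (plain Python; the "orbit" is a hypothetical toy, just one coordinate of circular motion, y = cos(t), over one period) fits the best possible straight line by closed-form least squares. No amount of training helps, because the irreducible error is a property of the model class, not the optimizer.

```python
import math

# Toy "orbit": one coordinate of uniform circular motion, y(t) = cos(t).
ts = [2 * math.pi * i / 1000 for i in range(1000)]
ys = [math.cos(t) for t in ts]

# Closed-form least-squares fit of y ~ a*t + b (the best this model can do).
n = len(ts)
mean_t = sum(ts) / n
mean_y = sum(ys) / n
a = sum((t - mean_t) * (y - mean_y) for t, y in zip(ts, ys)) / sum(
    (t - mean_t) ** 2 for t in ts
)
b = mean_y - a * mean_t

# The residual error is irreducible: a line simply cannot trace a cycle.
mse = sum((y - (a * t + b)) ** 2 for t, y in zip(ts, ys)) / n
var = sum((y - mean_y) ** 2 for y in ys) / n
print(f"MSE {mse:.3f} vs. variance {var:.3f}")  # the fit explains almost nothing
```

The residual MSE stays essentially equal to the variance of the data: the linear "observer" cannot perceive the cyclic law at all, while a richer model class sees it immediately.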