Computing uses logic gates at the base level, so in that sense I’d agree. But there are several layers of abstraction between that level and how neural networks work. That’s what I meant by the soup being boiled down so far that it’s just plasma now, and no longer really soup.
Why I think the distinction is important: before neural networks, we had just normal code. Code is structured in a way where you interact with Boolean logic directly (even if the logical operations it represents are actually happening multiple abstraction layers down).
So that code was much more predictable and MUCH more understandable than how AI works. For example, with normal code, if you write that something should happen when some variable is true, then that thing will always happen when the variable is true. With AI, you can tell it that if something happens, it should do that other thing, but it won’t always actually do that other thing, because other parameters and neurons get involved in processing your request, and even how you worded the request can cause it to output unexpected stuff.
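A toy sketch of that contrast (the “network” here is completely made up, with arbitrary weights — it just stands in for the idea that the output depends on how the request is encoded, not any real model):

```python
import math

# Normal code: the branch is exact. If the flag is true, this ALWAYS fires.
def normal_code(flag):
    if flag:
        return "do the thing"
    return "do nothing"

# Toy "neural" stand-in (arbitrary weights, not a real model): the output
# depends on how the request is encoded, so two wordings of the "same"
# request can land on different sides of the decision boundary.
def toy_network(request):
    # crude encoding: character codes squashed into features
    features = [ord(c) / 128.0 for c in request]
    weights = [math.sin(i) for i in range(len(features))]  # arbitrary "learned" weights
    score = sum(f * w for f, w in zip(features, weights))
    activation = 1 / (1 + math.exp(-score))  # sigmoid squashes score to (0, 1)
    return "do the thing" if activation > 0.5 else "do nothing"

print(normal_code(True))             # always "do the thing"
print(toy_network("please do it"))   # depends on the exact wording
print(toy_network("do it please"))   # may differ from the line above
```

The point isn’t the toy math — it’s that the `if` in `normal_code` is a guarantee, while the threshold in `toy_network` is just where a pile of learned numbers happened to land.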
So while yes, AI uses Boolean operations way way down at the base level, I’d say that talking about Boolean operations in the context of AI gives people the false impression that it will behave predictably or that its logic will make sense.
To add to the soup example, it’s similar imo to saying that people, at their core, are just physics. In one way, yes, that’s true, but maybe thinking of people as physics confuses more than it helps, cuz we do weird things like write very long responses to simple questions 😜🫂