Good point! I was trying to explain the basics of pattern recognition and representation
Fun story here: some researchers working on understanding how neural networks even work discovered that they could modify the parameters of a neural network such that it “forgot” where the Eiffel Tower was! A neural network’s knowledge and decision-making processes reside in the soup (pun intended 😏) of those parameters in a way that’s very difficult to inspect, but that kind of inspection may someday be doable more broadly than just this Eiffel Tower example. And we’d better hope it’s possible, because we need to somehow ensure these things are safe!
Cool story 👍
But by and large, would you agree that computing technology boils down to the simple idea that “something is true when some conditions are met electronically”? Those conditions could be parameters.
Computing uses logic gates at the base level, so in that sense I’d agree. But there are several layers of abstraction between that level and how neural networks work. That’s what I meant by the soup being boiled down so far that it’s just plasma now, and no longer really soup.
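To make the “base level” concrete, here’s a toy sketch (Python, purely illustrative, function names are my own): AND, OR, and XOR built out of nothing but NAND, roughly the way hardware builds everything from one universal gate.

```python
# Toy sketch: at the bottom, everything really is gates.
# NAND is universal: any Boolean function can be built from it alone.
def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def not_(a: int) -> int:
    return nand(a, a)

def and_(a: int, b: int) -> int:
    return not_(nand(a, b))

def or_(a: int, b: int) -> int:
    return nand(not_(a), not_(b))

def xor_(a: int, b: int) -> int:
    c = nand(a, b)
    return nand(nand(a, c), nand(b, c))

# Sanity check: the XOR truth table
for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_(a, b))  # prints 0 0 0 / 0 1 1 / 1 0 1 / 1 1 0
```

But nobody reasons about a neural network at that level, any more than you think about transistors while browsing the web.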
Why do I think the distinction is important: before neural networks, we had just normal code. Code is structured in a way where you interact with Boolean logic directly (even if the logical operations it represents actually happen multiple abstraction layers down).
So this code was much more predictable and MUCH more understandable than how AI works. For example, with normal code, if you write that something should happen whenever some variable is true, then that thing will always happen when the variable is true. With AI, you can tell it that when something happens it should do some other thing, but it won’t always actually do that other thing, because other parameters and neurons get involved in processing your request (even how you word it matters) and can cause it to output unexpected stuff.
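Here’s a toy contrast in Python (everything is made up for illustration; the random draw is just a stand-in for a model’s learned, probabilistic behavior, not how any real model works):

```python
import random

# Normal code: the same input takes the same branch, every single time.
def normal_code(flag: bool) -> str:
    if flag:                   # if the variable is true...
        return "do the thing"  # ...this ALWAYS happens
    return "do nothing"

assert normal_code(True) == "do the thing"  # holds on every run, forever

# AI-ish behavior (stand-in): the "rule" is smeared across learned weights
# and sampling, so the same request doesn't always give the same output.
def ai_ish(request: str) -> str:
    # 'request' is ignored in this toy; in a real model, even its
    # exact wording would shift the probabilities.
    if random.random() < 0.9:
        return "do the thing"         # usually does what you meant...
    return "do something unexpected"  # ...but not always
```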
So while yes, AI uses Boolean operations way, way down at the base level, I’d say that talking about Boolean operations in the context of AI gives people the false impression that it will behave predictably or that its logic will make sense.
To add to the soup example: imo it’s similar to saying that people, at their core, are just physics. In one sense that’s true, but thinking of people as physics maybe confuses more than it helps, cuz we do weird things like write very long responses to simple questions 😜🫂
Bahahaha! It’s okay, the long answer is very appreciated
Cheers 🍻
I think it’d be safe to say that, at a very broad level, AI and ML “train” and make models by recognizing (and remembering) patterns in the Boolean outcomes of specific conditions. They train by representing “cause and effect”, much like humans do. Neural networks are necessary for this representation, as simple Boolean logic won’t cut it.
And they use algorithms and statistics to recognize and represent the patterns.
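As a toy sketch of what that looks like (Python with numpy; the layer sizes, learning rate, and step count are arbitrary picks of mine): a tiny two-layer network learns the XOR truth table by repeatedly nudging random starting parameters toward less error. XOR is the classic case where a single Boolean/linear layer won’t cut it; the hidden layer is what makes it representable.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR truth table: the "Boolean outcomes" whose pattern gets learned
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Random starting parameters (the "soup")
W1 = rng.normal(size=(2, 4))
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0  # learning rate, arbitrary
for _ in range(5000):
    # Forward pass: the network's current guesses
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of squared error w.r.t. each parameter
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Nudge every parameter a little toward less error; the
    # "statistics" live here, not in hand-written if/else rules
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # usually lands near [[0], [1], [1], [0]]
```

Note the outputs only get statistically close to 0 and 1, never exactly Boolean, which is kind of the whole point of this thread.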
If AI is “they”, then I’d say the knowledge in the neural network is not crystallized nearly enough to call it algorithms or statistics. They just do what they feel like. That’s one of the reasons they’re only juuuust getting good enough to do simple math. Think how long we’ve had calculators. These things are so bad at algorithms and statistics that, even with probably more than 1M times as much compute, they still can’t compete with a calculator.