Computing uses logic gates at the base level, so in that sense I’d agree. But there are several layers of abstraction between that level and how neural networks work. That’s what I meant by the soup boiled so far down that it’s just plasma now, and no longer really soup.

Why do I think the distinction is important: before neural networks, we had just normal code. Code is structured in a way where you interact with Boolean logic directly (even if the actual logical operations it represents are happening multiple abstraction layers down).

So that code was much more predictable and MUCH more understandable than how AI works. For example, with normal code, if you write that something should happen when some variable is true, then that thing will always happen when the variable is true. With AI, you can tell it that when something happens it should do that other thing, but it won’t always actually do it, because other parameters and neurons get involved in processing your request (including how you worded it) and can cause it to output unexpected stuff.
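To make that contrast concrete, here’s a toy sketch (the “neural-ish” function is purely illustrative, a hypothetical stand-in for a model, not a real network): the classic branch fires every single time the condition holds, while the sampled version only *leans* toward the rule.

```python
import random

# Deterministic code: the branch fires every single time the condition holds.
def classic(flag: bool) -> str:
    if flag:
        return "do the thing"
    return "do nothing"

# Toy stand-in for a neural network: even with the rule "baked in",
# the output is sampled from a probability distribution, so the rule
# only holds most of the time. (Illustrative assumption, not a real model.)
def neural_ish(flag: bool, rng: random.Random) -> str:
    p_follow = 0.9 if flag else 0.1   # the net merely leans toward the rule
    return "do the thing" if rng.random() < p_follow else "do nothing"

rng = random.Random(0)
assert all(classic(True) == "do the thing" for _ in range(1000))
follows = sum(neural_ish(True, rng) == "do the thing" for _ in range(1000))
print(follows)  # most of the 1000 runs follow the rule, but not all
```

The point is just the shape of the behavior: same input, same “instruction,” but the second function occasionally does something else.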

So while yes, AI uses Boolean operations way, way down at the base level, I’d say that talking about Boolean operations in the context of AI gives people the false impression that it will behave predictably or that its logic will make sense.

To extend the soup example: it’s similar, imo, to saying people at their core are physics. In one way, yes, that’s true, but maybe thinking of people as physics confuses more than it helps, cuz we do weird things like write very long responses to simple questions 😜🫂


Discussion

Bahahaha! It’s okay, the long answer is very appreciated

Cheers 🍻

I think it’d be safe to say that, at a very broad level, AI and ML “train” and build models by recognizing (and remembering) patterns in the Boolean outcomes of specific conditions. They train by representing cause and effect, much like humans do. Neural networks are necessary for this representation, since simple Boolean logic won’t cut it.
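The classic illustration of “simple Boolean logic won’t cut it” is XOR: no single threshold unit can compute it, but one hidden layer can. Here’s a minimal sketch with hand-picked weights (chosen by hand for illustration, not trained):

```python
# A tiny hand-weighted network computing XOR with step activations.
def step(x: float) -> int:
    return 1 if x > 0 else 0

def xor_net(a: int, b: int) -> int:
    # Hidden layer: h1 acts like OR(a, b), h2 acts like NAND(a, b)
    h1 = step(a + b - 0.5)      # fires if at least one input is 1
    h2 = step(-a - b + 1.5)     # fires unless both inputs are 1
    # Output: AND(h1, h2), which equals XOR(a, b)
    return step(h1 + h2 - 1.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))  # 0 0 -> 0, 0 1 -> 1, 1 0 -> 1, 1 1 -> 0
```

A single `step` over a weighted sum of `a` and `b` can’t produce this table no matter what weights you pick; that’s exactly why the hidden layer exists.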

And they use algorithms and statistics to recognize and represent the patterns
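In its simplest form, the “statistics” part can be nothing more than conditional frequency over observed data (hypothetical toy data below, just to show the shape of it):

```python
from collections import Counter

# Hypothetical toy observations: (condition, outcome) pairs.
observations = (
    [(True, True)] * 90 + [(True, False)] * 10 +
    [(False, True)] * 20 + [(False, False)] * 80
)

counts = Counter(observations)

# "Statistics" here is just: how often does the outcome follow the condition?
p_outcome_given_cond = counts[(True, True)] / (
    counts[(True, True)] + counts[(True, False)]
)
print(p_outcome_given_cond)  # 0.9
```

Real training replaces this counting with gradient-based fitting, but the underlying idea is the same: represent the cause-and-effect pattern as a learned probability rather than a hard Boolean rule.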

If AI is “they,” then I’d say the knowledge in the neural network is not crystallized nearly enough to call it algorithms or statistics. They just do what they feel like. That’s one of the reasons they’re only juuuust getting good enough to do simple math. Think how long we’ve had calculators. These models are so bad at algorithms and statistics that, with probably more than a million times as much compute, they still can’t compete with a calculator.

“Feel” of course is anthropomorphizing, but it’s not far off in terms of results

I think neural networks just do “representation” and not actual computing

Thanks much! Have bugged you enough 😂

🫂🫂🫂