Dang. How is that bullish?
"Feel" of course is anthropomorphizing, but it's not far off in terms of results
If AI is "they" then I'd say that the knowledge in the neural network is not crystallized nearly enough to call them algorithms or statistics. They just do what they feel like. That's one of the reasons they are only juuuust getting good enough to do simple math. Think how long we've had calculators. These are so bad at algorithms and statistics that with probably more than 1M times as much compute they still can't compete with a calculator.
Computing uses logic gates at the base level so in that sense I'd agree. But there are several layers of abstraction between that level and how neural networks work. That's what I meant by the soup boiled so far down that it's just plasma now, and no longer really soup.
Why do I think the distinction is important: before neural networks, we had just normal code. Code is structured in a way where you interact with Boolean logic directly (even if the actual logical operations it represents are happening multiple abstraction layers down).
So this code was much more predictable and MUCH more understandable than how AI works. For example, with normal code, if you write that when some variable is true then something should happen, then that thing will always happen when the variable is true. With AI, you could tell it that if something happens, it should do that other thing, but it won't always actually do that other thing, because other parameters and neurons, plus how you worded the request, can get involved in processing it and cause it to output unexpected stuff.
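A minimal sketch of that contrast, where the "model" is just a toy stand-in that samples from a probability (not a real neural network):

```python
import random

# Normal code: the branch is deterministic.
def normal_code(flag: bool) -> str:
    if flag:  # if flag is true, this ALWAYS happens
        return "did the thing"
    return "did nothing"

# Toy stand-in for an AI model: the "instruction" only shifts
# probabilities, so the same input can yield different outputs.
def toy_model(flag: bool, rng: random.Random) -> str:
    p = 0.95 if flag else 0.05  # the instruction raises the odds, no guarantee
    return "did the thing" if rng.random() < p else "did nothing"

# normal_code(True) returns the same thing every single time;
# toy_model(True, ...) usually complies, but not always.
```

The point of the sketch: the if-statement is a guarantee, while the sampled version only makes compliance likely.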
So while yes, AI uses Boolean operations way way down at the base level, I'd say that talking about Boolean operations in the context of AI gives people the false impression that it will behave predictably or that its logic will make sense.
To add to the soup example, it's similar imo to saying people at their core are physics. In one way, yes that's true, but maybe thinking of people as physics confuses more than it helps cuz we do weird things like write very long responses to simple questions
Fun story here: some people working on understanding how neural networks even work discovered that they could modify the parameters of a neural network such that it "forgot" where the Eiffel Tower was! Neural network knowledge and decision making processes reside in the soup (pun intended) of those parameters in a way that's very difficult to inspect, but it may someday be doable more broadly than just this Eiffel Tower example. And we better hope it's possible. We need to somehow ensure these things are safe!
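A toy illustration of the idea, treating one weight matrix as a key-to-value associative memory ("Eiffel Tower" key retrieves a location value) and applying a rank-one edit to swap the stored fact. This is just a sketch of the concept, not the actual method from that interpretability work:

```python
import numpy as np

# Toy "fact storage": a linear layer W maps a key vector (a concept)
# to a value vector (an attribute), like Eiffel Tower -> its location.
rng = np.random.default_rng(0)
d = 16
key_eiffel = rng.normal(size=d)
key_eiffel /= np.linalg.norm(key_eiffel)  # unit-length key
val_paris = rng.normal(size=d)
val_rome = rng.normal(size=d)

# Store the original fact as a rank-one outer product.
W = np.outer(val_paris, key_eiffel)

# Rank-one edit: subtract the old association, add a new one.
W_edited = W + np.outer(val_rome - val_paris, key_eiffel)

# Before the edit the key retrieves val_paris; after, val_rome.
assert np.allclose(W @ key_eiffel, val_paris)
assert np.allclose(W_edited @ key_eiffel, val_rome)
```

The takeaway matches the "soup" point: the fact isn't stored as a readable rule anywhere, it's smeared into the numbers of W, yet a targeted parameter change can still rewrite it.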
I would argue that boiling it down to Boolean logic is too far, like it's a soup that has been heated to a plasma state. Is it even a soup anymore?
Or use lightning
nostr:note15r6zqj434ar8j8dnm5du0yu42et7g86e7hjazh5dkk8ugtm5kkjsggv69h
Unless it's because they refuse to be identified by the facial recognition scanners
This hits hard
Should have said "false nowadays" since I'd have argued it was true about life, let's say, three thousand years ago.
I think this is false. It is only by reducing their desire for more that plenty can accrue… while maybe the difference is in what we mean by "plenty", I'd argue that what people think of when they say it is usually the kind I'm talking about.