mark tyler
Bitcoin & šŸ«‚ Oh and dimly trying to think through interesting issues. I think that I don’t have a right to force you to do anything other than not harm me or others. Seems like most people I interact with in the real world disagree with this statement. To be fair.. the devil is in definition of ā€œharmā€.

If AI is ā€œtheyā€, then I’d say the knowledge in the neural network is not crystallized nearly enough to call them algorithms or statistics. They just do what they feel like. That’s one of the reasons they’re only juuuust getting good enough to do simple math. Think how long we’ve had calculators. These models are so bad at algorithms and statistics that, even with probably more than 1M times as much compute, they still can’t compete with a calculator.

Computing uses logic gates at the base level so in that sense I’d agree. But there are several layers of abstraction between that level and how neural networks work. That’s what I meant by the soup boiled so far down that it’s just plasma now, and no longer really soup.

Why do I think the distinction is important: before neural networks, we had just normal code. Code is structured in a way where you interact with Boolean logic directly (even if the actual logical operations it represents are happening multiple abstraction layers down).

So that code was much more predictable and MUCH more understandable than how AI works. For example, with normal code, if you write that when some variable is true a certain thing should happen, then that thing will always happen when the variable is true. With AI, you can tell it that if something happens it should do that other thing, but it won’t always actually do it, because other parameters and neurons get involved in processing your request, and even how you worded the request can cause it to output unexpected stuff.
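A tiny sketch of that difference, in Python. The `normal_code` function is real boolean-logic code; `toy_model` is a purely hypothetical stand-in for an AI (nothing like a real neural net), where hidden state can override the instruction you gave it:

```python
import random

# Deterministic "normal code": the same input always takes the same branch.
def normal_code(flag: bool) -> str:
    if flag:  # Boolean logic you interact with directly
        return "do the thing"
    return "do nothing"

# Hypothetical toy stand-in for an AI model: the instruction (flag) nudges
# the output, but other "parameters" (here, seeded randomness) can still
# push it somewhere unexpected.
def toy_model(flag: bool, seed: int) -> str:
    random.seed(seed)
    follows_instruction = random.random() < 0.9  # usually, but not always
    if flag and follows_instruction:
        return "do the thing"
    return "something unexpected"

# normal_code(True) is "do the thing" every single time you call it;
# what toy_model(True, seed) returns depends on hidden state you can't see.
```

Obviously a real model is vastly more complicated than a coin flip, but the shape of the problem is the same: the ā€œifā€ you wrote down isn’t the only thing deciding the output.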

So while yes, AI uses Boolean operations way way down at the base level, I’d say that talking about Boolean operations in the context of AI gives people the false impression that it will behave predictably or that its logic will make sense.

To add to the soup example, it’s similar imo to saying people at their core are physics. In one way, yes that’s true, but maybe thinking of people as physics confuses more than it helps cuz we do weird things like write very long responses to simple questions šŸ˜œšŸ«‚

Fun story here: some people working on understanding how neural networks even work discovered that they could modify the parameters of a neural network such that it ā€œforgotā€ where the Eiffel Tower was! Neural network knowledge and decision-making processes reside in the soup (pun intended šŸ˜) of those parameters in a way that’s very difficult to inspect, but it may someday be doable more broadly than just this Eiffel Tower example. And we better hope it’s possible. We need to somehow ensure these things are safe!

I would argue that boiling it down to Boolean logic is too far, like it’s a soup that has been heated to a plasma state. Is it even a soup anymore?

nostr:note15r6zqj434ar8j8dnm5du0yu42et7g86e7hjazh5dkk8ugtm5kkjsggv69h

Unless it’s because they refuse to be identified by the facial recognition scanners šŸ˜…

Oh wait you did get that one šŸ˜…

I think this is false. It is only by reducing their desire for more that plenty can accrue… Maybe the difference is in what we mean by ā€œplentyā€, but I’d argue that what people think of when they say it is usually the kind I’m talking about.