An interesting way to think of AI:

It's just an insanely good compression algorithm. However, rather than being able to extract all of the images and text it is trained on word for word, like we normally do with compressed files, it stores a highly generalized relationship between one set of words and another, and one set of pixels and another. So you can't pull an exact image like you can from a zip file, but you can pull the *idea* of a cat, the *style* of Wes Anderson, and the common patterns of a landscape. It's a compression of ideas and context collectively, rather than a compression of the exact group of individual images that can later be recalled.
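The lossless-vs-lossy distinction here can be made concrete with a toy sketch (Python, standard library only; the text and the word-frequency "summary" are just made-up stand-ins for the idea):

```python
import zlib
from collections import Counter

data = b"the quick brown fox jumps over the lazy dog " * 100

# Lossless, zip-style: the round trip recovers every byte exactly.
compressed = zlib.compress(data)
assert zlib.decompress(compressed) == data

# "Lossy" analogy: keep only aggregate statistics (the "idea" of the
# text), not the original bytes. You can't reconstruct the exact file
# from this, but the gist survives.
word_freqs = Counter(data.split())
print(word_freqs.most_common(3))
```

A model's weights are of course far more structured than a frequency table, but the asymmetry is the same: the exact training items are gone, while their shared patterns remain queryable.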

#AI_Unchained


Discussion

I really like this approach to it. Love it actually. Like a human idea abstractor and compressor. Yes.

I do tend to think of it this way, and it gives me a disdain for AI-generated text. The compressed information is the prompts themselves; the rest is not creative.

Case in point. I want AI to compress content for me, but all this bot did was add more words to my statement without adding meaning.

Summarizing books is a use case I'd find interesting.

Wow Cyborg bot doesn't know how to spell "lightning". Bad bot.

Just found the new podcast and super excited for this nostr:npub1h8nk2346qezka5cpm8jjh3yl5j88pf4ly2ptu7s6uu55wcfqy0wq36rpev!

Words are just lossy compression, as is understanding.

That's why there is no difference between maximum signal, perfect encryption, and randomness.

What appears random from an outside perspective might as well be maximally informative if you have the key to decrypt it (aka make sense of it, aka understand it).

That's why any proper encryption algorithm has to produce a random-looking string of bits; otherwise you could find a pattern and attack it. Same for compression (otherwise you could compress it further).
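The "otherwise you could compress it further" point is easy to demonstrate (Python, standard library; `os.urandom` stands in for good ciphertext, since proper ciphertext should be statistically indistinguishable from random bytes):

```python
import os
import zlib

plaintext = b"attack at dawn " * 1000        # highly patterned input
random_bytes = os.urandom(len(plaintext))    # stand-in for ciphertext

# Patterned data shrinks dramatically; random/encrypted data does not,
# because a generic compressor finds no structure to exploit.
print(len(zlib.compress(plaintext)))         # far smaller than the input
print(len(zlib.compress(random_bytes)))      # roughly the input size or larger
```

If the output of a cipher compressed well, that residual pattern would itself be a foothold for cryptanalysis.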

Words are hashes, and hashes are inherently lossy (the number of possible outputs is smaller than the number of possible inputs). Consequently, making sense of lossy artifacts (decompressing them) requires "filling in the gaps", aka intelligence and creativity.
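The pigeonhole argument behind "hashes are inherently lossy" can be shown with a deliberately tiny hash (Python, standard library; squashing SHA-256 to its first byte gives 256 buckets, so 300 distinct inputs must collide):

```python
import hashlib

def tiny_hash(s: str) -> int:
    """First byte of SHA-256: a toy 8-bit hash with only 256 outputs."""
    return hashlib.sha256(s.encode()).digest()[0]

buckets = {}
for s in (f"msg-{i}" for i in range(300)):   # 300 pigeons, 256 holes
    buckets.setdefault(tiny_hash(s), []).append(s)

collisions = [group for group in buckets.values() if len(group) > 1]
print(len(collisions))   # guaranteed to be at least 1 by pigeonhole
```

Real hashes just make the buckets astronomically numerous; the mapping is still many-to-one, so the original input is not recoverable from the digest alone.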

The question is how to go from that to wisdom.

Very heavy-duty.

The difference between compression, encryption and random generation is planning.

The difference between (compression, encryption and random generation) work is skill.

Wisdom is knowing that imitation sets the goal of reproduction, but also that reproduction may not be enough; there needs to be initiation and further investigation.

When I was a toddler I remember having dreams/nightmares of random chaos and then waking up to clarity. I remember laughing when adults laughed even though I didn't understand the joke. All of my activities were honed or learned by experience.

Or were they?

Maybe my skilful actions were unlocked?

Planning is just a meta-model, a compressed view of the world, the past, the present, and possible futures.

That's why the layered approach to intelligence works. It's the same stuff all the way up and down, just different levels of analysis.

Words are a very crappy way of compressing information; that is why humans invented math and dynamical systems theory. That way you can compress a practically infinite-dimensional data set into a finite set of parameters of a corresponding model of a dynamical system. Fundamental science is about finding better ways to describe nature, balancing accuracy and complexity, hence improving on compression.
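A minimal sketch of that compression (Python, standard library; the logistic map and its parameter values here are just an illustrative choice of dynamical system): a long trajectory collapses to the model plus its parameters.

```python
def logistic(r: float, x0: float, n: int) -> list[float]:
    """Iterate the logistic map x_{k+1} = r * x_k * (1 - x_k)."""
    xs, x = [], x0
    for _ in range(n):
        xs.append(x)
        x = r * x * (1 - x)
    return xs

data = logistic(3.7, 0.2, 10_000)   # 10,000 "measurements"
model = (3.7, 0.2)                  # the entire compressed form: 2 numbers

# Perfect reconstruction of the whole trajectory from the parameters.
print(logistic(*model, 10_000) == data)
```

Real measurements are noisy, so fitting a model to them is a lossy compression; the point stands that the parameters carry vastly less data than the trajectory they describe.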

Lossy but in the right places

💯

This is very interesting. The generalized relationships are like keyframes and the prompting guides the tweening.

Are you putting AI Unchained on the podcatchers? Not seeing it on Overcast.

Distribution isn’t really good yet but it will be available in basically all the major podcast apps when I get some time to devote to it

Compression, prediction, and classification are all facets of the same underlying problem.
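The compression/prediction link has a standard quantitative form: under an ideal entropy coder, an event predicted with probability p costs -log2(p) bits, so a better predictor yields a shorter encoding. A toy sketch (Python, standard library; the two "models" and the sample string are made up):

```python
import math

def bits(p: float) -> float:
    """Ideal code length in bits for an event of probability p."""
    return -math.log2(p)

# Two predictors for the next character of an 'a'-heavy source:
confident = {"a": 0.9, "b": 0.1}   # has learned the source statistics
uniform   = {"a": 0.5, "b": 0.5}   # knows nothing

text = "aaab"
cost_good = sum(bits(confident[c]) for c in text)
cost_bad  = sum(bits(uniform[c]) for c in text)
print(cost_good, cost_bad)   # the better predictor compresses further
```

This is why a language model can be turned into a compressor (and vice versa): its next-token probabilities directly determine the code lengths.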

And none of that is anything even remotely related to what we usually mean by “intelligence” or what we used to mean by that at least.

Intelligence is about understanding the real world. Not manipulating abstract symbols separated from their essential meaning.

Even if true AI is possible — it would require far more than parsing billions of texts to achieve.

The problem is very similar to the Oracle problem in Blockchain.

If your AI simply assumes certain training set to represent reality — then the actual intelligence is entirely in the selection of that training set.

To be precise, we should rename AI to "Augmented Intelligence".

And the intelligence that is being augmented isn’t the user’s — but that of the people who train and control the algorithm— while the user’s intelligence — their ability to comprehend reality — is in fact being diminished by outsourcing it to external, centrally controlled oracles.