When you really think about it, generative AI is literally nothing but a translator.

But in every form, it really only does some type of translation and compression. It's a pattern-compressing tool that translates between human speech and basically any other data medium that can be described with speech:

Text to pixel arrangements, English to Spanish, English to Python or JavaScript, text to audio waveforms, etc.

There is no intelligence, logic, or reason, and there never was; it is simply a linguistic converter that relates one piece of data to another through natural language.


Discussion

yes, it's a new user interface for computers

or maybe to computers

^

Ah yes, another great analogy I have actually used on the show before and had forgotten about! This is a really good one. It's our interface between computer language and human language, more akin to the mouse and keyboard than to an advanced application itself.

I feel the same way about AI in general. It’s just a giant computer that scours the internet for information and articulates it to the user.

Agree, and really well articulated… hence why it can’t replace creativity.

🎯 That was the key point I made in this Startup Day presentation: it's autocorrect on steroids, and it literally has its roots in attempts to distill language into mathematical representations to simplify the process of translating between languages.
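That "distill language into mathematical representation" idea can be sketched in a few lines. This is a toy illustration with hand-made vectors (the coordinates are made up for demonstration, not taken from any trained model): once words are points in space, relationships between them become simple arithmetic, which is what made machine translation tractable.

```python
# Toy word-embedding sketch: hypothetical 2-D coordinates, chosen by hand
# purely for illustration (a real model learns thousands of dimensions).
vectors = {
    "king":  [0.9, 0.8],
    "man":   [0.5, 0.8],
    "woman": [0.5, 0.2],
}

# The famous analogy: king - man + woman lands near where "queen" would be.
queen_ish = [
    k - m + w
    for k, m, w in zip(vectors["king"], vectors["man"], vectors["woman"])
]
print([round(v, 2) for v in queen_ish])  # [0.9, 0.2]
```

The point is only that language, once turned into numbers, can be manipulated mathematically, which is the lineage the comment above is referring to.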

https://m.youtube.com/watch?v=BJv2McQnX6I

Yes, AI is just lossy compression for human speech.

It’s neurons connected in some order, but still with some missing parts needed to solve problems itself from start to finish.

Makes me wonder how much intelligence, logic and reason humans actually have

It's a hint into how unspecial humans may be, while also hinting at a much more alive universe than we previously thought.

Still, the result is amazing. Intelligent or not

Wouldn't it be a decompressor? You describe a few words and it expands it into a larger output, making the judgement calls necessary to fill in the blanks.

No, but only because the pattern in the model is derived from HUGE amounts of training data.

The normal back and forth of Q&A with the model is inference, or running it through the compressed patterns to transform it. But that specifically has no relationship to either compression or decompression, imo.

The part that is “compression” is when you take a million images of a cat and train a model that can understand the “pattern” of a cat. Then later you can get it to recall it, but it can’t reproduce any EXACT cat picture, only the characteristics shared by many of them.

In other words, the model has stored an extremely lossy compression of the concept of a cat, and translated your words describing one into a pattern of pixels that *looks* like one.
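The "million cat pictures compressed into one pattern" idea can be shown with a toy sketch. Nothing here is a real model: the "images" are just short feature lists, and "training" is simply averaging them. The point is that thousands of examples collapse into a handful of stored numbers that capture the shared characteristics while making it impossible to reconstruct any individual example.

```python
import random

random.seed(0)

# Hypothetical "true" shared cat pattern (made up for illustration).
PROTOTYPE = [0.9, 0.1, 0.8, 0.2]

def make_cat():
    # Each training example = shared pattern + individual variation.
    return [f + random.uniform(-0.3, 0.3) for f in PROTOTYPE]

training_set = [make_cat() for _ in range(10_000)]

# "Training": compress the entire dataset into one stored pattern (the mean).
n = len(training_set)
stored = [
    sum(example[i] for example in training_set) / n
    for i in range(len(PROTOTYPE))
]

# The stored pattern recovers the shared characteristics of a "cat"...
print([round(v, 2) for v in stored])

# ...but 10,000 examples were reduced to just 4 numbers, so no individual
# training example can be reproduced from it. That is the lossy compression.
```

The stored average lands very close to the shared pattern, yet every detail that distinguished one "cat" from another is gone, which mirrors why a model can draw a cat-like cat but not reproduce any exact training image.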

@note1fcyz5gkpljyn5pudypt5p7tqtxsp9sqger3yve7axju79a99dk2se0wnyw