Wouldn't it be a decompressor? You give it a few words and it expands them into a much larger output, making the judgement calls necessary to fill in the blanks.
No, but only because the pattern in the model is derived from HUGE amounts of training data.
The normal back-and-forth of Q&A with the model is inference: running your input through the compressed patterns to transform it. But that specifically has no relationship to either compression or decompression, imo.
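To make that concrete, here's a minimal sketch of what inference looks like from the outside, assuming the Hugging Face transformers library and GPT-2 purely as a stand-in model: the same frozen weights are applied to whatever you type, transforming input into output. Nothing about your particular input is being compressed or decompressed.

```python
# Minimal inference sketch: fixed weights transforming an input into an output.
# The model name and prompt are illustrative placeholders.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The same frozen parameters run regardless of what the prompt says.
result = generator("A cat is", max_new_tokens=20)
print(result[0]["generated_text"])
```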
The part that is “compression” is when you take a million images of a cat and train a model that can understand the “pattern” of a cat. Then later you can get it to recall that pattern, but it can’t reproduce any EXACT cat picture, only the characteristics shared by many of them.
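A quick back-of-envelope calculation shows why exact reproduction isn't on the table. All the numbers below are made-up but plausible assumptions, just to show the scale mismatch between training data and model capacity:

```python
# Back-of-envelope arithmetic (all figures are illustrative assumptions):
# how much model capacity is available per training image?
num_images = 1_000_000              # "a million images of a cat"
avg_image_bytes = 500 * 1024        # assume ~500 KB per image
model_params = 1_000_000_000        # assume a ~1B-parameter image model
bytes_per_param = 2                 # assume 16-bit weights

training_bytes = num_images * avg_image_bytes
model_bytes = model_params * bytes_per_param

print(f"training data:      ~{training_bytes / 1e9:.0f} GB")
print(f"model weights:      ~{model_bytes / 1e9:.0f} GB")
print(f"capacity per image: ~{model_bytes / num_images:.0f} bytes")
```

With these numbers the weights work out to roughly 2 KB per training image, and that's generous, since the same weights also have to cover everything else the model knows. Only the shared structure of "cat" can survive that ratio, not any individual photo.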
In other words, the model has stored an extremely lossy compression of the concept of a cat, and it translates your words describing one into a pattern of pixels that *looks* like one.
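For the "words in, cat-shaped pixels out" step, here's a minimal sketch, assuming the Hugging Face diffusers library and a public Stable Diffusion checkpoint as an example. The prompt is turned into an image that matches the learned pattern of a cat, not a lookup of any stored photo:

```python
# Text-to-image sketch: the prompt is rendered into pixels that *look like*
# a cat, not retrieved from the training set. Checkpoint name is an example.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

image = pipe("a photo of a tabby cat sitting on a windowsill").images[0]
image.save("cat.png")
```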