That's really interesting; there is probably an entirely new field of compression possibilities that hasn't been tapped yet.

This (what you describe here) is an extreme case, but I'm sure there are use cases for the same principles at higher bitrates. Once there is some kind of ubiquitous hardware acceleration for crunching AI, it will probably be used in most things.

Discussion

It's not all that new: the encoder-decoder architecture has been around for a long time, so with neural networks you quite often get a compressed representation of the data as a byproduct.
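
To make that concrete, here's a rough sketch of a plain autoencoder (in PyTorch, with made-up layer sizes, not anything from the post above): training it to reconstruct its own input forces the small latent vector `z` to act as a lossy compressed code.

```python
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, in_dim=784, latent_dim=32):
        super().__init__()
        # Encoder squeezes the input down to a small latent vector.
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                     nn.Linear(128, latent_dim))
        # Decoder tries to reconstruct the original input from that vector.
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                     nn.Linear(128, in_dim))

    def forward(self, x):
        z = self.encoder(x)          # z is the "compressed" representation (784 -> 32 values)
        return self.decoder(z), z

model = AutoEncoder()
x = torch.rand(1, 784)                   # e.g. a flattened 28x28 image
recon, z = model(x)
loss = nn.functional.mse_loss(recon, x)  # training minimizes reconstruction error
```

The compression here is a side effect: nothing in the loss asks for a small code explicitly, it just falls out of forcing everything through the narrow latent layer.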