An autoencoder is the simplest example. You have two models: an encoder and a decoder.
You give the encoder an input, and it produces a vector (the embedding). You then give the decoder the embedding and expect to get the original input back.
You then train the two as a single model, as if the encoder and decoder were joined and the embedding were just another internal layer. The goal: make the output as close as possible to the input.
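Here is a minimal sketch of that setup, assuming PyTorch; the layer sizes, dimensions, and `data_loader` are illustrative placeholders, not part of the original. The key detail is that the reconstruction loss uses the input itself as the target:

```python
import torch
from torch import nn

# Hypothetical sizes: 784-dimensional inputs (e.g. flattened 28x28 images)
# squeezed down to a 32-dimensional embedding.
INPUT_DIM, EMBED_DIM = 784, 32

class Autoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: input -> embedding
        self.encoder = nn.Sequential(
            nn.Linear(INPUT_DIM, 128),
            nn.ReLU(),
            nn.Linear(128, EMBED_DIM),
        )
        # Decoder: embedding -> reconstructed input
        self.decoder = nn.Sequential(
            nn.Linear(EMBED_DIM, 128),
            nn.ReLU(),
            nn.Linear(128, INPUT_DIM),
        )

    def forward(self, x):
        # The embedding is just an internal layer of the joined model.
        return self.decoder(self.encoder(x))

model = Autoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()  # "as close as possible to the input"

for batch in data_loader:  # data_loader is assumed: batches of input vectors
    reconstruction = model(batch)
    loss = loss_fn(reconstruction, batch)  # the target IS the input
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```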
At the end, you are left with a model that can turn data into embeddings, and turn embeddings back into a lossy reconstruction of the data it thinks they correspond to.
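Continuing the sketch above, the two halves are then useful on their own (`some_input` is a hypothetical batch of data):

```python
with torch.no_grad():
    embedding = model.encoder(some_input)       # data -> embedding
    reconstruction = model.decoder(embedding)   # embedding -> lossy version of the data
```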