Replying to semisol

An autoencoder is the simplest example. You have two models: an encoder and a decoder.

You give the encoder an input, and it produces a vector (the embedding). You then give the decoder the embedding, and expect to get the input back.
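To make that concrete, here is a minimal PyTorch sketch. The layer sizes (784-dim input, 32-dim embedding) are made up purely for illustration:

```python
import torch
import torch.nn as nn

# Illustrative sizes: 784-dim inputs (e.g. a flattened 28x28 image)
# compressed to a 32-dim embedding.
INPUT_DIM, EMBED_DIM = 784, 32

# Encoder: input -> embedding vector.
encoder = nn.Sequential(
    nn.Linear(INPUT_DIM, 128),
    nn.ReLU(),
    nn.Linear(128, EMBED_DIM),
)

# Decoder: embedding -> reconstruction of the input.
decoder = nn.Sequential(
    nn.Linear(EMBED_DIM, 128),
    nn.ReLU(),
    nn.Linear(128, INPUT_DIM),
)
```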

You then train this as a single model, as if the encoder and decoder were joined and the embedding were just another internal layer. Goal: make the output as close as possible to the input.
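Continuing the sketch above, the joint training loop could look something like this. The random data, batch size, epoch count, and learning rate are all placeholders:

```python
import torch.optim as optim

# Toy data: random vectors stand in for a real dataset.
data = torch.rand(512, INPUT_DIM)

# Train encoder and decoder jointly: gradients flow from the
# reconstruction loss back through the decoder, the embedding,
# and the encoder, as if they were one model.
params = list(encoder.parameters()) + list(decoder.parameters())
optimizer = optim.Adam(params, lr=1e-3)
loss_fn = nn.MSELoss()  # "as close as possible to the input"

for epoch in range(10):
    for i in range(0, len(data), 64):
        batch = data[i:i + 64]
        embedding = encoder(batch)            # the "internal layer"
        reconstruction = decoder(embedding)
        loss = loss_fn(reconstruction, batch)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```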

At the end, you are left with an encoder that can turn data into embeddings, and a decoder that can turn an embedding into a lossy reconstruction of the input it thinks that embedding corresponds to.
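Again continuing the code above, a rough sketch of using the two halves separately after training:

```python
# The encoder maps data to embeddings; the decoder maps an
# embedding back to a lossy reconstruction.
with torch.no_grad():
    x = torch.rand(1, INPUT_DIM)  # stand-in for a real input
    z = encoder(x)                # the 32-dim embedding
    x_hat = decoder(z)            # lossy approximation of x
```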

semisol · 9mo ago

The best part is that, once training is done, you do not necessarily need the AI anymore. You can try to reverse engineer the model into rules by analyzing its inputs, outputs, and internal state.

