What kind of predictions are you training your model on?

Discussion

To start, it'll be word association. I prototyped a network with a set of mindmap-like files, but I want to start with nothing but a tokenizer and train it to associate appropriate words.

So it’s going to be predicting words within a particular context window.
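For concreteness, building those (context, target) training pairs might look something like this; the whitespace tokenizer and the window size are just placeholders, not anything you've committed to:

```python
# Minimal sketch: turn tokenized text into (context, target) pairs.
text = "the quick brown fox jumps over the lazy dog"
tokens = text.split()  # stand-in for a real tokenizer

# Map each distinct word to an integer id.
vocab = {word: i for i, word in enumerate(sorted(set(tokens)))}
ids = [vocab[w] for w in tokens]

WINDOW = 3  # how many preceding words the model sees (assumed)
pairs = [
    (ids[i - WINDOW:i], ids[i])  # (context ids, target id)
    for i in range(WINDOW, len(ids))
]
print(pairs[0])  # e.g. ([id_the, id_quick, id_brown], id_fox)
```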

You could measure how similar the embeddings for a particular prediction are to the embeddings of the ground truth word. Like an autoencoder.

You would use something like MSE for the loss function.
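For example (the vectors here are random stand-ins for the model's prediction and the ground-truth word's embedding):

```python
import numpy as np

# Hypothetical vectors: 'predicted' is the model's output,
# 'target' is the embedding of the ground-truth word.
rng = np.random.default_rng(0)
predicted = rng.normal(size=8)
target = rng.normal(size=8)

# MSE is the quantity you'd minimize during training.
mse = np.mean((predicted - target) ** 2)

# Cosine similarity is a handy way to monitor how close you're getting.
cosine = predicted @ target / (
    np.linalg.norm(predicted) * np.linalg.norm(target)
)
print(f"MSE: {mse:.4f}, cosine similarity: {cosine:.4f}")
```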

Allow me to rephrase to see if I understand. The embeddings refer to an internal state (like a vector representing activation levels of neurons) reached after feeding in the words in the context window.

So you're saying I could compare the state which creates a prediction to the state that is achieved by inputting only the predicted word?

Yes, you have an internal representation of the inputs in a continuous semantic space, but you can also have a final layer that outputs an approximation of the predicted word's embedding, and you can measure how similar that approximation is to the ground truth.
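A minimal Keras sketch of that setup, with toy sizes, and with a snapshot of the embedding table standing in for the ground-truth targets (both assumptions):

```python
import numpy as np
import tensorflow as tf

VOCAB_SIZE, EMBED_DIM, WINDOW = 50, 16, 3  # toy sizes, all assumptions

# Shared embedding table: also used to look up the ground-truth
# embedding that the final layer is trained to approximate.
embedding = tf.keras.layers.Embedding(VOCAB_SIZE, EMBED_DIM)

inputs = tf.keras.Input(shape=(WINDOW,), dtype="int32")
x = tf.keras.layers.Flatten()(embedding(inputs))
x = tf.keras.layers.Dense(64, activation="relu")(x)
outputs = tf.keras.layers.Dense(EMBED_DIM)(x)  # approximated embedding
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")

# Random toy data; targets are a snapshot of the current embeddings
# (in practice you'd freeze or periodically recompute them, since
# training keeps updating the embedding table).
contexts = np.random.randint(0, VOCAB_SIZE, size=(256, WINDOW)).astype("int32")
target_ids = np.random.randint(0, VOCAB_SIZE, size=(256,))
targets = embedding(tf.constant(target_ids)).numpy()

model.fit(contexts, targets, epochs=1, verbose=0)
```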

Alternatively, you can index all the possible outputs and represent them as integers, in which case you would be making discrete predictions. For this you could use sparse categorical cross entropy as your loss function.
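A sketch of the discrete version, again with toy sizes; note the targets are plain integer indices rather than vectors, which is exactly what the sparse variant of the loss expects:

```python
import numpy as np
import tensorflow as tf

VOCAB_SIZE, EMBED_DIM, WINDOW = 50, 16, 3  # toy sizes, all assumptions

model = tf.keras.Sequential([
    tf.keras.Input(shape=(WINDOW,), dtype="int32"),
    tf.keras.layers.Embedding(VOCAB_SIZE, EMBED_DIM),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(VOCAB_SIZE),  # one logit per word in the vocabulary
])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)

# Targets are integer word ids -- no one-hot encoding needed.
contexts = np.random.randint(0, VOCAB_SIZE, size=(256, WINDOW)).astype("int32")
target_ids = np.random.randint(0, VOCAB_SIZE, size=(256,))
model.fit(contexts, target_ids, epochs=1, verbose=0)
```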