Replying to liminal 🦠

> "surprise" factor is in someone's social media content. That is, how easy is it to predict the next thing that they will produce, based upon the past things that they have produced.

LLMs expose a perplexity metric for their inputs: higher perplexity means higher "surprise" of the input with respect to the corpus the model was trained on.

Train nostrGPT and look at the average perplexity score per person.

Feasible, but quite impractical without a hefty budget, I'd guess.
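For concreteness, a minimal sketch of the per-person score. It uses hypothetical per-token log-probabilities instead of a real model, since perplexity is just the exponentiated average negative log-probability a model assigns to each token:

```python
import math

def perplexity(token_logprobs):
    # Perplexity = exp(-mean(log p(token))) over the sequence.
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# Hypothetical per-token log-probs for two users' posts:
predictable = [math.log(0.5)] * 10   # model assigns p=0.5 to every token
surprising  = [math.log(0.05)] * 10  # model assigns p=0.05 to every token

print(perplexity(predictable))  # 2.0
print(perplexity(surprising))   # 20.0
```

In practice you'd average a user's per-post perplexities under the trained model; the user whose posts the model finds harder to predict gets the higher "surprise" score.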

Laeserin 1y ago

Adding a bit about perplexity metrics...

