> "surprise" factor is in someone's social media content. That is, how easy is it to predict the next thing that they will produce, based upon the past things that they have produced.
LLMs have a perplexity metric for their inputs. Higher perplexity means higher "surprise" of the input with respect to the model's training corpus.
Train nostrGPT and look at the average perplexity score per person.
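For concreteness, perplexity is just the exponential of the negative mean log-probability the model assigns to each token. A minimal sketch (the log-probabilities here are made-up values, not from a real trained model):

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp of the negative mean per-token log-probability."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# Higher perplexity = the model found the text more "surprising".
predictable = [math.log(0.9)] * 10  # model assigns each token high probability
surprising = [math.log(0.1)] * 10   # model assigns each token low probability

print(perplexity(predictable))  # ~1.11: easy to predict
print(perplexity(surprising))   # 10.0: hard to predict
```

Averaging that score over all of one person's posts would give the per-person predictability measure described above.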
Feasible, but quite impractical without a hefty budget, I'd guess.