https://wikifreedia.xyz/social-media-enthropy/

nostr:naddr1qvzqqqrcvgpzphtxf40yq9jr82xdd8cqtts5szqyx5tcndvaukhsvfmduetr85ceqyw8wumn8ghj7argv43kjarpv3jkctnwdaehgu339e3k7mf0qq2hxmmrd9skcttdv4jxjcfdv4h8g6rjdac8jghqq28


Discussion

Aargh! #typos 🤣

https://wikifreedia.xyz/social-media-entropy/

naddr1qvzqqqrcvgpzphtxf40yq9jr82xdd8cqtts5szqyx5tcndvaukhsvfmduetr85ceqyw8wumn8ghj7argv43kjarpv3jkctnwdaehgu339e3k7mf0qq28xmmrd9skcttdv4jxjcfdv4h8gun0wpus9a2xd6

> "surprise" factor is in someone's social media content. That is, how easy is it to predict the next thing that they will produce, based upon the past things that they have produced.

LLMs have a perplexity metric for their inputs. Higher perplexity means higher "surprise" of the input with respect to the model's training corpus.
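For reference, the standard definition: perplexity is the exponentiated average negative log-likelihood of the tokens under the model,

```latex
\mathrm{PPL}(x_1,\dots,x_N) \;=\; \exp\!\Bigl(-\tfrac{1}{N}\sum_{i=1}^{N}\log p_\theta\bigl(x_i \mid x_{<i}\bigr)\Bigr)
```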

Train nostrGPT and look at the average perplexity score per person.

Feasible, but quite impractical without a hefty budget, I'd guess.
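A rough sketch of the per-person averaging, sidestepping the training budget by scoring against an off-the-shelf GPT-2 (via Hugging Face transformers) instead of a purpose-trained nostrGPT. The author/post data below is made up; in practice you would pull each author's notes from relays by pubkey.

```python
# Sketch: average perplexity per author, using pretrained GPT-2 as a
# stand-in for a purpose-trained "nostrGPT". Post data is hypothetical.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under the model: exp(mean negative log-likelihood)."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        # Passing labels=input_ids makes the model return the mean cross-entropy loss
        loss = model(**enc, labels=enc["input_ids"]).loss
    return math.exp(loss.item())

def average_perplexity(posts: list[str]) -> float:
    """Average per-post perplexity for one author."""
    return sum(perplexity(p) for p in posts) / len(posts)

# Hypothetical per-author post lists (in practice: fetched from relays by pubkey).
authors = {
    "npub_alice": ["gm nostr", "gm nostr, have a great day", "gm nostr"],
    "npub_bob": ["Entropy coding meets social media in unexpected ways."],
}
for npub, posts in authors.items():
    print(npub, round(average_perplexity(posts), 1))
```

Low average perplexity would flag highly predictable posting; very long or very short posts would need extra handling before reading much into the numbers.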

Adding a bit about perplexity metrics...

Though, when you do, flush all the influencers without anything new to say with nostrGPT 🤩

Entertain them to death, or just mimic their posting patterns when they leave.

Play adversarial games with normal users. Continually train, lowering perplexity globally (in theory), and incentivize everyone to stop writing like bots 😜


Beautiful. One critique: 0% only works if time is constantly running, with the default being that no new content is being published right now.

Yes, I haven't thought it through all the way.