So I'd say they reduce entropy in your feed in similar ways to devs (or even more), because they aim for your money-spending brain.

Discussion

But their posts lack information density.

Yes, that's low entropy, right?

That's true. 🤔 Their posts completely lack surprise, depth, or complexity.

So, perhaps I had it backward. Devs often write about all sorts of obscure topics, so much so that you might not even realize that they're devs. Whereas influencers almost always focus on their One Big Topic.

Which means devs might raise the entropy in your feed, but influencers always lower it.

I guess the simplest measure of social media entropy is:

How easy would it be to automatically generate more of the same notes?
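One way to make that concrete (a rough sketch, my own framing rather than anything official): fit a trivial word-level Markov chain on someone's past notes and see how deterministic it is. A feed of copy-paste daily notes gives a branching factor of exactly 1.0, i.e. a bot could regenerate it verbatim.

```python
from collections import defaultdict

def branching_factor(notes: list[str]) -> float:
    """Average number of distinct next-words per word in a user's notes.

    1.0 means the notes are so formulaic that a word-level Markov chain
    regenerates them verbatim; higher values mean "more of the same"
    would actually need a real model to imitate.
    """
    next_words: dict[str, set[str]] = defaultdict(set)
    for note in notes:
        words = note.lower().split()
        for a, b in zip(words, words[1:]):
            next_words[a].add(b)
    if not next_words:
        return 0.0
    return sum(len(v) for v in next_words.values()) / len(next_words)

# Made-up feeds for illustration:
print(branching_factor(["GM stack sats #zapathon"] * 30))  # exactly 1.0
print(branching_factor([
    "wrote a parser for the naddr format today",
    "the relay kept dropping my subscription, fun debugging night",
    "a quick note on gossip strategies between relays",
]))  # drifts above 1.0, and climbs as the corpus grows
```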

I think we just invented a new subclass of information entropy, and this needs a wiki to be official. BRB.

This could be measured and displayed on someone's npub profile in a client. 🤣

I like the idea. How would you realize it? What's 100%? Coverage by GPT-4? Comparison against all tag clouds across all of nostr?

I'm thinking 5% entropy would be if they just wrote

"stack sats #zapathon"

or

"GM/GN *AI pic*"

every day, or something.

So, I'd set the baseline near the bottom.

What's a "similar" note? We only have measures that take the training corpus of LLMs into account. You probably can't measure breaking the bubble. Or at least I can't think of how.

Gzip all their notes and then compare the size ratio 😆
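Half a joke, but it would actually work as a first cut. A minimal sketch (the example feeds are made up for illustration):

```python
import gzip

def compression_ratio(notes: list[str]) -> float:
    """Compressed-size / raw-size for a user's concatenated notes.

    Very repetitive feeds compress almost to nothing (low ratio);
    varied, higher-entropy feeds compress poorly (higher ratio).
    """
    raw = "\n".join(notes).encode("utf-8")
    return len(gzip.compress(raw, compresslevel=9)) / len(raw)

# Made-up feeds for illustration:
influencer = ["GM #zapathon stack sats, AI pic of the day"] * 100
dev = [f"notes on obscure bug #{i}: turned out to be a cache issue at offset {i * 7}"
       for i in range(100)]

print(round(compression_ratio(influencer), 3))  # close to 0
print(round(compression_ratio(dev), 3))         # noticeably higher
```

To compare users fairly you'd want to normalize for feed length, but as a cheap proxy it lines up with the "how easy to regenerate" framing above.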

https://wikifreedia.xyz/social-media-enthropy/

nostr:naddr1qvzqqqrcvgpzphtxf40yq9jr82xdd8cqtts5szqyx5tcndvaukhsvfmduetr85ceqyw8wumn8ghj7argv43kjarpv3jkctnwdaehgu339e3k7mf0qq2hxmmrd9skcttdv4jxjcfdv4h8g6rjdac8jghqq28

Aargh! #typos 🤣

https://wikifreedia.xyz/social-media-entropy/

naddr1qvzqqqrcvgpzphtxf40yq9jr82xdd8cqtts5szqyx5tcndvaukhsvfmduetr85ceqyw8wumn8ghj7argv43kjarpv3jkctnwdaehgu339e3k7mf0qq28xmmrd9skcttdv4jxjcfdv4h8gun0wpus9a2xd6

> "surprise" factor is in someone's social media content. That is, how easy is it to predict the next thing that they will produce, based upon the past things that they have produced.

LLMs have a perplexity metric for their inputs. Higher perplexity equals higher "surprise" of the input with respect to the model's training corpus of text.

Train nostrGPT and look at the average perplexity score/person.

Feasible, but quite impractical without a hefty budget, I'd guess.
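A cheaper approximation than training nostrGPT: score everyone's notes with an off-the-shelf model and compare average perplexity per npub. A rough sketch using GPT-2 via Hugging Face transformers (the model choice and example notes are my own assumptions):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def note_perplexity(text: str) -> float:
    """Perplexity of one note under GPT-2 (lower = more predictable)."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels = input_ids makes the model return the mean
        # token-level cross-entropy loss; exp(loss) is perplexity.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

def user_score(notes: list[str]) -> float:
    """Average perplexity over a user's notes."""
    return sum(note_perplexity(n) for n in notes) / len(notes)

# Made-up feeds:
print(user_score(["GM stack sats, zap me"] * 5))
print(user_score(["spent the evening bisecting a weird relay reconnect bug"]))
# Caveat from above: GPT-2 never saw nostr jargon, so hashtag-heavy notes can
# still look "surprising" to it -- which is exactly why you'd want a nostrGPT.
```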

Adding a bit about perplexity metrics...

Though, when you do, flush all the influencers without anything new to same with nostrGPT 🤩

Entertain them to death, or just mimic their posting patterns when they leave.

Play adversarial games with normal users. Continually train, lowering perplexity globally (in theory), and incentivize everyone to stop writing like bots 😜

Without anything new to say*

Beautiful. One critique: 0% only works if time is constantly running, with the default being that no new content is being published right now.

Yes, I haven't thought it through all the way.