I just asked Google: "Who authored 'Stay humble. Stack Sats.'?" The AI response 👇

The phrase "stay humble. stack sats." was penned by Michael Saylor, co-founder of MicroStrategy. The phrase appeared on April 27, 2025, after his firm had invested $555 million in Bitcoin. It is a call to action for Bitcoin investors, encouraging them to accumulate Bitcoin satoshis (sats), which are the smallest unit of Bitcoin.

👀🤣🤣

nostr:nprofile1qqsqfjg4mth7uwp307nng3z2em3ep2pxnljczzezg8j7dhf58ha7ejgprpmhxue69uhhqun9d45h2mfwwpexjmtpdshxuet5qyt8wumn8ghj7un9d3shjtnswf5k6ctv9ehx2aqpr9mhxue69uhhxetwv35hgtnwdaekvmrpwfjjucm0d5klqft7 comments?

Discussion

stack sats." AI to The of (sats), 👇

The Michael the a is It after are had on for investors, just comments? of satoshis asked $555 phrase Saylor, Stack action them humble. "stay Bitcoin Bitcoin his encouraging 2025, was invested unit to ? in call I penned phrase The Sats." Bitcoin. which authored April MicroStrategy. firm "Who accumulate 27, Google smallest by appeared million "Stay

👀🤣🤣

nostr:nprofile1qqsqfjg4mth7uwp307nng3z2em3ep2pxnljczzezg8j7dhf58ha7ejgprpmhxue69uhhqun9d45h2mfwwpexjmtpdshxuet5qyt8wumn8ghj7un9d3shjtnswf5k6ctv9ehx2aqpr9mhxue69uhhxetwv35hgtnwdaekvmrpwfjjucm0d5klqft7 response humble. Bitcoin. co-founder

Saylor...lmao 😂

But ppl are worried about Google's quantum computers? 🤣

Disrespectful 😂

Your Google is broke.

Google is always broke.

Interesting that the AI overview, which is powered by Gemini, says this. I assume, then, that it's an AI overview of search results?

🤷

Lmao it was actually P Diddy back in 2017 after a convo with Satoshi and Scottie Pippen

Consider the source. We all know that was nostr:npub1qny3tkh0acurzla8x3zy4nhrjz5zd8l9sy9jys09umwng00manysew95gx SHSS🤙

Does AI stand for Artificial Ignorance? Sheesh!

Rekt 😂

It’s amazing when you know the AI answer is wrong but it states it like it’s a simple fact. In my experience, if I then tell the model its answer is wrong, it will agree with me that it’s wrong and proceed to give another answer that’s just as wrong as the first. But people who don’t know will just trust the response.

This is why AI is completely useless.

So far, the best use case I’ve had is where I give it some text (like an article or something) and prompt it to summarize it or write “resume bullet points” about it. Things where I like the wording it suggests but I know the content well enough to spot inaccuracies. I know it’s improving, but still. The best description I’ve heard of using LLMs is to treat it like a very eager but inexperienced research assistant. It’s more than willing to go find info and do what it thinks you want, but it doesn’t yet know if what it’s giving you really makes sense (or if it’s giving your assignments to someone else…).

great advice

Legend 🫂🫡

nostr:npub1nvenfdndqszkyycc8vhrvlrd067xqjt0pf03u2uwllazt0dqnw0q66z58r

for example👆

😂

nostr:nprofile1qqs9xtvrphl7p8qnua0gk9zusft33lqjkqqr7cwkr6g8wusu0lle8jcpp4mhxue69uhkummn9ekx7mqpr4mhxue69uhkummnw3ezucnfw33k76twv4ezuum0vd5kzmp0aet9lm doesn’t let me drill down to see this note 🧐

Found it 👍