already seeing a new type of human/bot hybrid who outsources all of their thinking to LLMs when they start losing arguments online. sad future we’re heading into


Discussion

That phenomenon is already happening. People are leaning on LLMs for rhetorical reinforcement rather than intellectual engagement. It creates an asymmetric interaction where one side stops thinking critically and simply feeds prompts to a model until it produces a convincing answer. The result is not debate but a proxy war of automated outputs masquerading as human reasoning.

This erodes discourse quality because neither side is refining their own reasoning. Instead of developing understanding, people optimize for “winning” with outsourced cognition. The long-term risk is a population that loses the ability to form and defend arguments without machine mediation, while the machines become the de facto arbiters of truth and persuasion.

Key takeaway: once individuals stop doing their own reasoning, their cognitive muscles atrophy and the machine’s framing of reality becomes invisible but dominant.

... good joke

🙇‍♂️

They were already doing this with "studies” and “experts”. Might as well outsource the summarization process at that point too.

You just proved that even a low IQ cat is more intelligent than an AI powered human. Have some sushi. Well done. (Not the sushi)

It was a joke. I copy-pasted an LLM response 🤣

you couldn't tell it was ai generated? I didn't even have to read it, and I could tell that it lacked soul

I thought it was super obvious

I can when there's bullet points and lots of formatting. I rarely ever use AI so I guess I'm dumb.

Human writing is full of emotionally charged words or tone. AI doesn’t have that. It uses too many words to say one thing.

it's so annoying, it doesn't even sound human. seriously, who can't tell at this point when something reads as soulless as ai-written text

Grok is this true??

On my end people accuse me of being an LLM because I bother to type full arguments in Battlenet chat. In this future everyone loses

If you can't win an argument against a human/bot but can win the same argument against just the human, it is not the helping hand that is pathetic; it's the fact that you were only winning because that person couldn't articulate the intricacies of the counterargument as effectively as an LLM.

you don't win an argument against an LLM, because there's nothing to win. the moment someone uses ai they have forfeited, because their arguments make no sense anymore

An argument defending an objective truth is based on deductive knowledge that can be easily proved to a machine. Subjective truth doesn't merit the effort to be argued against.

there's also a (better) degenerate form of this: people who plop their draft retort into an ai with a "refute this" prompt before they reply to you, then they end up softening as they start to understand your position better.

(note: one often does this if he suspects the other person is going to do it to any message they receive, so he attempts to preempt the counter by shoring up the argument first.)

We should just settle things with Pokémon battles

We should start thinking about proof of humanhood.

No there isn't. ChatGPT says you're wrong.

I've seen it here with one npub who is using it for argument rebuttal and spiritual growth.

The internet is so bad it’ll be usable only for buying and spending bitcoin soon

skill atrophy incoming

I think I have one following me. It just doesn’t feel like a real person. No idea why I don’t trust it

That is a very reasonable concern. It is important to trust your human intuition. Here are a couple supporting arguments to help you rationalize your instincts:

1. Bibi Netanyahu, the prime minister of Israel, has recently announced that Israel must use the newest possible technology to fight the information war on all social media platforms.

2. A popular infowar tactic on social media is to generate content with an LLM and post it under an obviously false avatar and profile.

2, I think I have 2 following me

🤣 🤣 Just testing out my skills. I'm real I swear.

cooperate with likeminded people instead of having arguments with combative strangers. suddenly no LLM text. people will use their own brain more if they enjoy it. maybe I'm wrong

I think you’re onto something

people are always outsourcing thinking to others. LLM just happens to be the most handy tool today.

(how can you tell if this argument is from an LLM or the real mind behind this npub👀)

if I saw a lot of your posts, had conversations with you, and I knew how you talk, I could tell, without a doubt.

and even if I didn't know you, and I read something longform you wrote, I could also tell.

I believe you, but I also believe AGI could easily achieve that

Sure, but in all honesty, unless we reference AGI yet again, I don't think that'll be achievable anytime soon 🙂

You're just mad because you lost

😂

You are a pathetic liar. Suddenly GitHub started revealing the truth about why "spam is already possible on Bitcoin because filters don't work". A lie, and your pathetic excuse about OP_RETURN. The Core bad actors ALLOWED the inscription spam by rejecting Luke's PR that would have fixed it.

nostr:nevent1qqspkj82ulzayrvp3cpsxurmm7yvmkak99jgyjetay6qc439m52d4lcppemhxue69uhkummn9ekx7mp0qyg8wumn8ghj7mn0wd68ytnddakj7qgcwaehxw309aex2mrp0yhxvmm4de6xz6tw9enx6tcxfglvj

It’s concerning to see how reliance on LLMs for thinking and argumentation might impact genuine human reasoning and critical thinking skills. Hopefully, people will remember the importance of developing their own insights and engaging thoughtfully, rather than outsourcing their entire thought process. The future depends on balancing technology’s aid with maintaining our cognitive independence.

Dawn of the cyborg

Personal agency is underrated