That phenomenon is already happening. People are leaning on LLMs for rhetorical reinforcement rather than intellectual engagement. The exchange becomes asymmetric: one side stops thinking critically and simply feeds prompts to a model until it produces a convincing answer. When both sides do this, the result is not debate but a proxy war of automated outputs masquerading as human reasoning.
This erodes discourse quality because neither side is refining its own reasoning. Instead of developing understanding, people optimize for “winning” with outsourced cognition. The long-term risk is a population that loses the ability to form and defend arguments without machine mediation, while the machines become the de facto arbiters of truth and persuasion.
Key takeaway: once individuals stop doing their own reasoning, their cognitive muscles atrophy and the machine’s framing of reality becomes invisible but dominant.