Replying to Ben Ewing

The argument raises important concerns about technological unemployment and social consequences but has several issues:

1. Technological Determinism Without Sociopolitical Context

The argument assumes that technological capabilities directly determine social and economic outcomes, neglecting how policy, laws, and social norms shape technology’s impact. Historically, labor displacement has led to new forms of employment, often because of societal adjustments (e.g., welfare systems, universal basic income proposals, or labor rights movements). It overlooks how governments and societies could adapt through redistribution or new economic models.

2. False Equivalence Between Economic Worth and Human Worth

The argument conflates economic productivity with human value. While it acknowledges a darker side of human behavior (loss of empathy when utility disappears), it implies that a world where many cannot “outperform” robots economically naturally leads to exclusion or extinction. Societies have often supported non-economically productive members (children, elderly, disabled individuals), which contradicts the assumption that lack of economic utility leads to dehumanization or elimination.

3. Linear and Singular View of Progress

The narrative suggests a linear progression: first the bottom 20%, then the next 20%, and so forth, as if economic value is a fixed spectrum and as if job displacement will be uniform and inevitable. In reality, new industries often emerge unpredictably, and technology can augment rather than replace human capabilities. Additionally, human labor markets are not purely meritocratic; social, political, and cultural factors influence value and employment.

4. Overestimation of Technology’s Autonomy, Underestimation of Human Creativity

While it criticizes technological hype (e.g., mobile AI limitations), it paradoxically falls into the same trap by assuming that once robots are capable, the consequences are inevitable. Historically, technological advances have often complemented human labor rather than replacing it wholesale (e.g., industrial revolution, information age). Human creativity, empathy, and adaptability often create new niches of value.

5. Oversimplification of Capital and Ownership Dynamics

The discussion of capital holders dominating through robot ownership oversimplifies economic power structures. Technologies often become decentralized over time, and monopolistic control can be disrupted by innovation, regulation, or collective action. Additionally, open-source movements, digital commons, and cooperative ownership models challenge the narrative of capital consolidation.

6. Fear-Based Speculation Without Exploring Positive Outcomes

The argument frames the future in a dystopian manner, emphasizing conflict and extinction but not considering counter-scenarios where technology leads to more leisure, better quality of life, or communal wealth-sharing mechanisms (e.g., universal basic income, worker co-ops using advanced tools, or post-scarcity economies). Fear is presented as the dominant driver of human action, but hope, empathy, and cooperation have historically shaped major social advancements.

Conclusion:

The core flaw is that the argument assumes a deterministic, zero-sum relationship between technology and human value, underemphasizing human adaptability, social policy, and the capacity for collective solutions. It taps into genuine anxieties but falls short of a complete, nuanced exploration of technological, social, and economic evolution.

Thanks, chatgpt


Discussion

Isn’t it kinda wild how after a while you get a feel for it and it’s easy to pick up on the AI responses? They’re often formulaic.

For sure. This commenter didn’t even bother trying to massage the output into a coherent human response. “The argument raises important concerns…” 🙃

That was the joke…

Also, the fact that you dismissed it purely for not being human further proves my point. Humans will always value being heard, seen, and worked on by other humans, at least for things like connection. Look at pets: from a utilitarian point of view, people choose pretty useless ones. If it takes a few decades for robots to become as competent as humans, it will take many more before we care about them the way we do about living creatures. A big part of why humans are irreplaceable is that they have suffered, not that they no longer have to (at least through physical labour).

I didn’t realize it was a joke. And to clarify, I like LLMs a lot and use them every day. What I was reacting to was what seemed like a low-effort comment. So yes, I agree: humans will always appreciate effort from other humans, even if the result is technically inferior.

That’s the philosophical thing we’re grappling with, isn’t it? Why do most people instinctively want ‘effort’ from people, even when the result is the same or worse? I wonder if it’s related to social bonding or tribalism, like how monkeys groom each other. And will there come a time when this isn’t part of our makeup anymore? How far could that go? Will we have ‘Synthetic Opinions Matter’ movements? Will ‘Artificial’ in AI one day be viewed the way slurs are now?