Replying to Lyn Alden

When it comes to AI, philosophically minded people often ask: "What will happen to people if they lack work? Will they struggle to find meaning in such a world of abundance?"

But there is a darker side to the question, which people intuit more than they say aloud.

In all prior technological history, new technologies changed the nature of human work but did not displace the need for human work. The fearful rightly ask: what happens if we make robots, utterly servile, that can outperform the majority of humans at most tasks at lower cost? Suppose they displace 70% or 80% of human labor so thoroughly that 70% or 80% of humans cannot find any other type of economic work in which they outcompete those bots.

Now, the way I see it, it's a lot harder to replace humans than most expect. Datacenter AI is not the same as mobile AI; it takes a couple more decades of Moore's law to fit a datacenter supercomputer into a low-energy local robot, which would otherwise rely on a sketchy, limited-bandwidth connection to a datacenter. It also takes extensive physical design and programming, which is harder than VC bros tend to suppose. And humans are self-repairing for the most part, which is a rather fantastic trait for a robot. A human cell outcompetes all current human technology in terms of complexity. People massively overestimate what robots will be capable of within a given timeframe, in my view. We're nowhere near human-level robots for all tasks, even as we're close to them for some tasks.
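
(A quick back-of-envelope sketch of that "couple more decades" claim. Every number here is an illustrative assumption, not a measurement: say a large inference cluster draws about 1 MW, a robot's onboard computer can budget about 100 W, and efficiency per watt doubles every two years at a Moore's-law cadence.)

```python
import math

# Back-of-envelope: how many efficiency doublings before a
# datacenter-scale AI fits in a robot's power budget?
# All figures are illustrative assumptions, not measurements.
datacenter_watts = 1_000_000  # assumed draw of a large inference cluster (~1 MW)
robot_watts = 100             # assumed power budget for an onboard computer
doubling_period_years = 2     # classic Moore's-law cadence (optimistic today)

gap = datacenter_watts / robot_watts       # 10,000x efficiency gap
doublings = math.log2(gap)                 # ~13.3 doublings needed
years = doublings * doubling_period_years  # ~27 years

print(f"{gap:,.0f}x gap -> {doublings:.1f} doublings -> ~{years:.0f} years")
```

Under those assumptions you land at roughly 27 years, which is where the "couple more decades" intuition comes from; slower-than-Moore scaling would stretch it further.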

But the concept is close enough to be on our radar. We can envision it within a lifetime rather than in fantasy or far-off science fiction.

So back to my prior point, the darker side of the question is to ask how humans will treat other humans if they don't need them for anything. All of our empathetic instincts were developed in a world where we needed each other; needed our tribe. And the difference between the 20% most capable and 20% least capable in a tribe wasn't that huge.

But imagine our technology makes the bottom 20% of economic contributors irrelevant. And then the next 20%. And then the next 20%, slowly moving up the spectrum.

What people fear, often subconsciously rather than being able to articulate the full idea, is that humanity will reach a point where robots can replace many people in every economic sense; those people can do nothing that economically outcompetes a bot, and so can earn no income other than through charity.

And specifically, they wonder what happens during the phase when this unfolds within their lifetimes, for those who own capital versus those who rely on their labor. Scarce capital remains valuable for a period of time, so long as it can be held legally or otherwise, while labor becomes demonetized within that period. And as time progresses, weak holders of capital who spend more than they earn are also diminished, with no labor income to fall back on, and many imperfect forms of capital depreciate. It might even be the case that those who own the robots are themselves economically superfluous, but at least they might own the codes that control them.

Thus, people ultimately fear extinction, or being collected into non-economic open-air prisons and given diminishing scraps, resulting in a slow extinction. And they fear it not from the robots themselves, but from the minority of humans who wield the robots.

The argument raises important concerns about technological unemployment and social consequences but has several issues:

1. Technological Determinism Without Sociopolitical Context

The argument assumes that technological capabilities directly determine social and economic outcomes, neglecting how policy, laws, and social norms shape technology's impact. Historically, labor displacement has led to new forms of employment, often because of societal adjustments (e.g., welfare systems, universal basic income proposals, or labor rights movements). It overlooks how governments and societies could adapt through redistribution or new economic models.

2. False Equivalence Between Economic Worth and Human Worth

The argument conflates economic productivity with human value. While it acknowledges a darker side of human behavior (loss of empathy when utility disappears), it implies that a world where many cannot "outperform" robots economically naturally leads to exclusion or extinction. Societies have often supported non-economically productive members (children, elderly, disabled individuals), which contradicts the assumption that lack of economic utility leads to dehumanization or elimination.

3. Linear and Singular View of Progress

The narrative suggests a linear progression: first the bottom 20%, then the next 20%, and so forth, as if economic value is a fixed spectrum and as if job displacement will be uniform and inevitable. In reality, new industries often emerge unpredictably, and technology can augment rather than replace human capabilities. Additionally, human labor markets are not purely meritocratic; social, political, and cultural factors influence value and employment.

4. Overestimation of Technology’s Autonomy, Underestimation of Human Creativity

While it criticizes technological hype (e.g., mobile AI limitations), it paradoxically falls into the same trap by assuming that once robots are capable, the consequences are inevitable. Historically, technological advances have often complemented human labor rather than replacing it wholesale (e.g., industrial revolution, information age). Human creativity, empathy, and adaptability often create new niches of value.

5. Oversimplification of Capital and Ownership Dynamics

The discussion of capital holders dominating through robot ownership oversimplifies economic power structures. Technologies often become decentralized over time, and monopolistic control can be disrupted by innovation, regulation, or collective action. Additionally, open-source movements, digital commons, and cooperative ownership models challenge the narrative of capital consolidation.

6. Fear-Based Speculation Without Exploring Positive Outcomes

The argument frames the future in a dystopian manner, emphasizing conflict and extinction but not considering counter-scenarios where technology leads to more leisure, better quality of life, or communal wealth-sharing mechanisms (e.g., universal basic income, worker co-ops using advanced tools, or post-scarcity economies). Fear is presented as the dominant driver of human action, but hope, empathy, and cooperation have historically shaped major social advancements.

Conclusion:

The core flaw is that the argument assumes a deterministic, zero-sum relationship between technology and human value, underemphasizing human adaptability, social policy, and the capacity for collective solutions. It taps into genuine anxieties but falls short of a complete, nuanced exploration of technological, social, and economic evolution.

Discussion

Ok so these are good points but this reads like an answer from ChatGPT lol

That was the joke (although yeah, there are good points too). So cool (and ironic) how she's describing these people so afraid of being replaced, and then when I use AI to create an answer, even when it's good, people don't like it. Hope someone appreciates that!

Thanks, chatgpt

Isn't it kinda wild how after a while you get a feel for it and it's easy to pick up on the AI responses? They're often formulaic.

For sure. This commenter didn't even bother trying to massage the output into a coherent human response. "The argument raises important concerns…" 🙃

That was the joke…

Also, the fact that you dismiss something purely for not being human makes my point further. Humans will always value being heard, seen, and worked on by other humans, at least for things like connection. In fact, look at pets: people choose pretty useless pets from a utilitarian point of view. If it's a few decades for robots to be as competent as humans, it'll be many more until we care about them like we do about living creatures. A big part of why humans are irreplaceable is because they have suffered, not because they no longer have to (at least through physical labour).

I didn't realize it was a joke. And to clarify, I like LLMs a lot and use them every day. What I was reacting to was what seemed like a low-effort comment. So yes, I agree, humans will always appreciate effort from other humans, even if it's technically inferior.

That's the philosophical thing that we're grappling with, isn't it? Why do most people instinctively want 'effort' from people, even when the result is the same or worse? I wonder if it's something related to social bonding or tribalism, like how monkeys groom each other? And will there come a time when this isn't part of our makeup anymore? And how far could that go? Will we have 'Synthetic Opinions Matter' movements? Will 'Artificial' in AI be viewed the way the n-word is now?