Mike Brock
b9003833fabff271d0782e030be61b7ec38ce7d45a1b9a869fbdb34b9e2d2000
Unfashionable.

More likely: massive popular revolt against AI once the job losses start rolling in. The unrest to come will far exceed the labor movements and protests of the Industrial Revolution. It's going to get pretty messy.

Yes. The problem is we are all incompetent relative to our collective capacity to predict the future -- specifically the second- and third-order effects of actions within the complex adaptive system we call society. Mistakes will be made, and consequences outside the gamut of current human imagination will continue to be realized.

This is one insight that gives me a lot of worry about the rate of advancement in AI.

That is certainly the functionalist viewpoint. But I think it's generally wrong, and it has nothing to do with whether we hold people accountable or not. I would also suggest that people's concept of accountability is often epistemically flawed. I don't think, in the vast majority of cases, a president of the US can be given credit for creating new jobs, or be blamed for jobs being lost. This is an example of where people tend to attribute accountability in a completely unjustified way.

Functionalist thinking, by contrast, tends not to care much about that nuance.

My point has nothing to do with holding people accountable or not. Of course we should hold powerful people accountable. This is about modeling what's true in the world. But it's also about pushing back against overwrought narratives about the root causes of things, narratives that are really just post hoc rationalizations in service of a particular worldview.

One of the biggest mistakes people make in trying to interpret economic, political, and world events is viewing things through a functionalist lens.

While a lot of the bad things that happen in the world are viewed through the lens of intentional design and agendas that serve specific individuals and institutions, the truth is that a lot of the bad shit that happens in the world is just the product of simple hubris and miscalculation.

Post hoc reasoning is usually deployed to fit things into a broader narrative along ideological lines: everything bad that happens is seen as a failure of the incumbent power structures and ideologies, everything good that happens is in spite of them, and most importantly, it all validates the normative claims of the critic's own ideology.

Most people do this. Including me. It's hard not to, because our brains really want to think about everything in functional terms. The problem is that it's often a bad model for explaining complex, emergent phenomena.

Unintended consequences are actually often a much better explanation for most bad things that happen at scale in the world, as opposed to intentional design and nefarious agendas.

The main thing I learned is that, as much as I convinced myself otherwise, I cared too much about what people think about me. Disengaging made it a lot easier not to care. Which was a liberating feeling.

I think my next experiment is going to be to go all of next Sunday without my phone in my possession at all. The fact that AI is about to mess everything up has me suddenly caring a lot about "human" things, and knowing and experiencing what that means.

As an experiment, I've been limiting my social media time to no more than 15 minutes total a day (Nostr included) using Apple's Screen Time for about two weeks now, and I feel like my brain is already working differently.

I've been too busy reading philosophy, fundamental physics and AI lately in my spare time to talk philosophy!

It's pretty disturbing to me how many people are rolling their eyes at, and outright laughing at, the notion.

Beauty only exists in the contrast between perfection and imperfection. Our flaws are also what make us human, and we wouldn't be human without them.

Would kind of suck if this is the resolution to Fermi's Paradox.

If copyright lawsuits over training data are a way we bend the exponential curve on AI to allow AI safety to catch up, then I'm okay with that.

AI safety and countermeasures are lagging far behind, and advancing much more slowly than the models themselves. Given that we are now in a global AI prisoner's dilemma, there seems to be little to no hope that we are going to slow down enough to catch up here.

I think we are going to see AI-assisted hacking, data exfiltration, and sabotage, which might result in a complete cybersecurity crisis, with wide-ranging implications for geopolitics and global stability.

I think this is a threat on the current horizon. Models like GPT-4, and in particular multimodal systems that demonstrate these models' capacity to use tools, tell me this threat is already here. It's only a matter of time until we witness the first major incident.