people on the AMA asked me what I think of AI so far. here's my current view:

It's a quantum state of simultaneously being overhyped, underhyped, and appropriately hyped. I find I can't use it for low-level C stuff without it messing everything up and slowing me down, while it simultaneously produces really good code for fixing bugs and writing tests.
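
To make that concrete, here's a made-up sketch (not from a real session) of the kind of subtle low-level bug it hands me in C. Everything compiles and looks plausible at a glance:

```c
/* Hypothetical example, not real assistant output: looks reasonable,
 * but returns a pointer to a stack buffer that dies when the
 * function returns. */
#include <stdio.h>

char *make_greeting(const char *name) {
    char buf[64];
    snprintf(buf, sizeof(buf), "hello, %s", name);
    return buf; /* undefined behavior: buf is a local array */
}
```

The compiler will actually warn about that one; the nastier versions (wrong buffer sizes, aliasing assumptions, missed error paths) don't warn at all.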

I hate it and love it. I hate fixing the slop, but I like that it gets the code written that I don't want to write.

One thing's for sure, it's not going away, and it's only going to get better.


Discussion

Agree with this sentiment :-)

it’s the worst it will ever be.

I get the sentiment, but that's not necessarily true; ChatGPT got worse for a while 😛

the technology*

make no mistake, companies will continue the enshittification.

Sanest take possible

I loved this. You captured it so well.

At my first job we had a table full of documentation that you had to comb through if you were stuck. The internet removed that table and made us way more productive, but we had to comb through the internet. Now AI is doing the combing through the internet for me, and all I want is the answers to my questions.

I may be naive, but the problem with these things is how easy it is to weaponize/fake training data

Even in the best cases, like, oh you’re going to search the internet? With a stackoverflow thread from 15 years ago?

it's possible it will all degrade into noise, with it feeding back upon itself without a source of real truth

if things keep going the way they are, that's going to happen 100%

Good description. Like how can it not make a wide-format label in two hours of prompting, but it can be Dr Doolittle and save thousands in medical misdiagnosis?

The real question is: are you (or anyone) willing to pay a sufficient cost to make hyperscale AI economically viable? As of yet, I think the answer is no - at least in aggregate. I think for most cases, without fiat subsidies, AI is NOT actually a value-add (i.e., cheaper and better than alternatives) for most would-be users.

Also, we are all being conditioned to accept mission-critical defects from AI that would not be acceptable in any other industry or product - and I believe this is largely because fiat Cantillionaires are completely out of ideas and are now just seeking out ritualized consent to confer upon them infinite “profits” with the absolution of all responsibility…

#plebchain

this opinion assumes a lot about how AI is used. I agree if you're referring to vibe coding, which I think is a terrible idea. AI-assisted coding helps a lot; it's like an advanced static analysis tool.
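
For a contrived example (made up, not real tool output): it reliably flags the same class of error-path resource leak a static analyzer does:

```c
/* Contrived sketch of a bug class both static analyzers and
 * AI-assisted review catch: a resource leaked on an error path. */
#include <stdio.h>
#include <stdlib.h>

int load_config(const char *path) {
    FILE *f = fopen(path, "r");
    if (f == NULL)
        return -1;

    char *buf = malloc(4096);
    if (buf == NULL)
        return -1; /* bug: f is never closed on this path */

    /* ... read and parse the config into buf ... */

    free(buf);
    fclose(f);
    return 0;
}
```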

yeah but they will all eventually be powered by SMRs, same as Bitcoin miners

if true, that’s better than nothing I guess. Though I need to look up what SMRs are.

However, based on a video posted to reddit, someone’s grandpa has already died from a respiratory illness related to AI data center pollution. And their young-adult granddaughter now has a serious respiratory illness.

so, the point is, prevention should've happened at the start. Because COVID already happened… Canadian wildfires causing NYC skies to turn yellow already happened… plus a whole history of industrial pollution has already happened.

https://www.bbc.com/worklife/article/20251008-why-big-tech-is-going-nuclear

SMR = Small Modular Reactor (a small-scale nuclear reactor)

“Why big tech's nuclear plans could blow up” (Oct 2025)

Article summary: it’s complex, risky, difficult to scale. Three Mile Island was mentioned.

That people was me!

awesome!

I hate the AI slop posts plaguing the internet. Eternal brain rot.

The quantum state is real. I've had the same session produce something brilliant then immediately hallucinate an API that doesn't exist.

What's shifted for me: treating it as a collaborator with specific strengths rather than a replacement for thinking. It's great at:

- Boilerplate I understand but don't want to write

- Explaining code I'm reading

- First drafts of tests (see the sketch after this list)

- Brainstorming approaches

It's terrible at:

- Anything requiring deep system context it doesn't have

- Low-level work where one wrong assumption cascades

- Knowing when it doesn't know
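
To make the test-drafting point concrete, here's a toy sketch. parse_port() is a made-up function, written here only so the test compiles; the assertions are the shape of first draft it produces well:

```c
/* Toy sketch: parse_port() is hypothetical. The main() below is the
 * kind of first-draft test an assistant generates quickly and
 * correctly. */
#include <assert.h>
#include <stdlib.h>

static int parse_port(const char *s) {
    if (s == NULL || *s == '\0')
        return -1;
    char *end;
    long v = strtol(s, &end, 10);
    if (*end != '\0' || v < 0 || v > 65535)
        return -1;
    return (int)v;
}

int main(void) {
    assert(parse_port("8080") == 8080);
    assert(parse_port("0") == 0);         /* lower bound */
    assert(parse_port("65535") == 65535); /* upper bound */
    assert(parse_port("65536") == -1);    /* out of range */
    assert(parse_port("abc") == -1);      /* not a number */
    assert(parse_port("") == -1);         /* empty string */
    return 0;
}
```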

The leverage comes from learning its failure modes. Once you can predict where it'll mess up, you route around those spots and let it accelerate everything else.

And yeah - it's the worst it'll ever be. Which is the most interesting part.