As a coder for 26 years, and someone who has been coding heavily with AI since GPT-3, I can say this is complete bullshit.

AI is really bad at coding anything other than throwaway tech demos. Maybe newer models and architectures will fix this, but for now we still need people who actually know what they are doing.

nostr:note1yel4u58qggsyd2n3zyn0448jaxm7syp2ap42cndeys7aepppue9suh7fcz

Discussion

Yep. And I question anything made by anyone not taking your rational approach, especially for anything safety critical.

I like the analogy that a programmer quitting because of AI is like a woodworker quitting because of the table saw.

Oooo 🍿 Who has a more accurate view of industry status 🤔

I don’t even know which one is the contrarian position

Curious how you’re using it? I’m a developer by trade and manage a team for a large web application in the education space. We’ve seen a lot of productivity gains by using models like Claude and ChatGPT through GitHub Copilot. The whole vibe coding scene is a joke, as no model can effectively write a decent production-ready application by itself, but I would say I’d question hiring a developer today who wasn’t at least willing to experiment with AI in their workflow.

I use it as a static analysis tool and for when I get stuck. It maybe gets to the root of the problem 20% of the time. I rarely use the code it generates because it is terrible at programming.

I gotcha. Is this with Swift code primarily and/or something else? I’ve only used AI models with JavaScript and Python code, which they seem to be reasonably good at.

I've never seen it generate usable Rust code.

Weird take, because it’s entirely dependent on context: what are you asking the AI to do, with what context, and in what problem space?

If you’re asking an LLM to build you an efficient, secure, maintainable application in one shot, I agree - it’s gonna produce garbage.

If you ask it narrow questions with intentional context in not-too-obscure areas, like the same CRUD apps most people have been rebuilding for decades (something like the sketch below), it does quite well.

Like most things, this isn’t binary.
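To make the "well-trodden CRUD" point concrete, here is a minimal sketch of the kind of endpoint an LLM will happily churn out, because it has seen countless near-identical variants in training. This is my own illustration, not code from the thread; Flask and the in-memory note store are assumptions chosen for brevity.

```python
# Hypothetical illustration: the boilerplate CRUD an LLM reproduces reliably.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

notes = {}      # in-memory "database", standing in for real storage
next_id = 1

@app.post("/notes")
def create_note():
    global next_id
    body = request.get_json(force=True)
    note = {"id": next_id, "text": body.get("text", "")}
    notes[next_id] = note
    next_id += 1
    return jsonify(note), 201

@app.get("/notes/<int:note_id>")
def read_note(note_id):
    note = notes.get(note_id)
    if note is None:
        abort(404)
    return jsonify(note)

@app.put("/notes/<int:note_id>")
def update_note(note_id):
    if note_id not in notes:
        abort(404)
    notes[note_id]["text"] = request.get_json(force=True).get("text", "")
    return jsonify(notes[note_id])

@app.delete("/notes/<int:note_id>")
def delete_note(note_id):
    if notes.pop(note_id, None) is None:
        abort(404)
    return "", 204
```

Nothing here is novel or niche, which is exactly why a model trained on decades of web tutorials gets this kind of thing right most of the time.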

nostr:nevent1qqsf5m0fxj7caa0jj7e5y2aaq2ahnwkq9qrt988ucknttvckmec58vspzemhxue69uhkzarvv9ejumn0wd68ytnvv9hxgufyj28

So which is it? Bullshit or the future?

The subtlety here is that generative AI can code up known things, the things that are clearly represented in the LLM’s training data.

If you are doing something new or niche, the LLM will struggle and make many mistakes or duplications.

I am basically retarded but have been coding a long time. The productivity LLMs provide is definitely amazing. Are there bugs in the code? Yes. Is there UI not connected to any backend? Yes. Would you know this if you are not an engineer? No.

Am I going to not use LLMs for my work?

Fuck No.

nostr:nevent1qqsf5m0fxj7caa0jj7e5y2aaq2ahnwkq9qrt988ucknttvckmec58vspzemhxue69uhkzarvv9ejumn0wd68ytnvv9hxgufyj28

One thing I would say is that AI does help teach you how to code certain things when you get stuck. AI is an aid to learning how to do it yourself, not a replacement for yourself. I’ve learnt Python and Bash after knowing nothing about them, along with going through some course material and self-study. It’s more an issue and concern for teachers than it is for coders. AI is just that really helpful teacher, alongside existing learning materials.

Yes, it’s a great teacher; that is its strength. Its depth and breadth of knowledge is incredible, but that doesn’t mean it’s actually good at coding.

Those who cannot do, teach

It's the same for writing novels. If you want to write a novel about, say, Cleopatra, it'll give you lots of background info, and methods to help you work through the plot, and tips to maintain historical accuracy. But you'd be a fool to have it actually write a chapter.

Trying to do anything that hasn't been done before is pretty much impossible with AI alone

This is exactly how I’ve come to use it. I get the benefits of fast answers but with none of the skill atrophy

My best mate owns a software company. We were discussing AI vibe coding the other day and he said this:

“I don't hate it, but vibe coding mixed with real, traditional coding has caused us some issues. We've had a dev in recently who left a path of destruction in our code base that needed cleaning up by a senior dev, because he'd created an ugly Frankenstein mess.”

This is not my current experience

do you have any public projects to look at?

My best datapoints aren't ready yet, but here's something I spent a day and change on: https://github.com/jackjackbits/bitchat/compare/main...ynniv:bitchat:nostr-dual-signing

A lot to dig through, but the technical description is nice on its own: https://github.com/ynniv/bitchat/blob/1f0852e8a12f8a3ffe11627e2d66ae1c24efe6de/BRIDGE_WHITEPAPER.md

In this phase you are both correct, but we should...

“Skate to where the puck is going.”

— Wayne (not John)

nostr:nevent1qqsf5m0fxj7caa0jj7e5y2aaq2ahnwkq9qrt988ucknttvckmec58vspzemhxue69uhkzarvv9ejumn0wd68ytnvv9hxgufyj28

I agree

totally agree

I really agree, but if all you can code is a CRUD API and some React components, it gets tough.

I think this usually means "it didn't write the code I would write". Which is fair, but:

1) did you tell it that?

2) the average person doesn't either

3) last year it could barely write things that compile

And this is where you say "but it's going to have a hard time getting better", to which I say ... HA! Show me the data

nostr:nevent1qvzqqqqqqypzqvhpsfmr23gwhv795lgjc8uw0v44z3pe4sg2vlh08k0an3wx3cj9qqsf5m0fxj7caa0jj7e5y2aaq2ahnwkq9qrt988ucknttvckmec58vsvmcdsa

I agree with this: "maybe newer models and architectures will fix this, but for now we still need people who actually know what they are doing."

AI is growing fast, and a lot of investment is going into computer/hardware optimization and performance for AI.

It is just like the chess "computer" player.

Yes, the first ones only computed 5 or 6 moves ahead. But now the "computer" player wins most of the time, even against very good human players.

For code it will be the same. For the moment, AI is only good at creating parts of code, but soon it will do a lot more, and do it a lot better than most coders. It is not fiction; it is just fact when you look at the learning curve of AI and the progress it makes year after year.

AI evolution

1- They try to write the "best" code they can based on the code they have been trained on.

2- They can optimize code using the large amount of optimized code they have been trained on.

3- Multiple AI agents work together to build, correct, test, adapt, and optimize code, and deliver it to the host server.

4- AI auto-improves its code, rewriting it in a new optimized form that humans can't read easily (or can only understand with a lot of time).

'But soon it will be better' is the bullshit the AI industry has been selling us for YEARS.

IT WILL NEVER COME

You can be angry about AI.

If you want to stay blind to the progress, that is your problem.

The AI industry sells bullshit for sure, as other industries do too.

I remember computers were built to help people with their tasks, and now we have people enslaved by their computers or smartphones, doing the tasks their smartphone "advises" them to do, and they even have to waste their time filling in captchas...

But.

AI has made a lot of progress; a few years ago it was impossible to "vibe code" with it. Now it is not perfect, but it is possible.

And AI just keeps progressing with time.

So it is not fiction to see the progress and say IT WILL COME.

Even if you think it will not, AI will not wait for you to agree.

It's funny how some people skip the general AI logic gaps (not just coding) and jump straight into discussing the code squirts.

Preach

Completely agree with you nostr:nprofile1qqsr9cvzwc652r4m83d86ykplrnm9dg5gwdvzzn8ameanlvut35wy3gpzdmhxw309aex2mrp0yhx5c34x5hxxmmdqyxhwumn8ghj7mn0wvhxcmmvyzvgs2

The AI just means that a bunch of non-coders are creating slop, which they will need help fixing.

It's an opportunity for real coders, fixing AI slop code.

Coders will be getting more work than before, and probably getting paid more money for it.

As always the truth lies somewhere in the middle imo. AI won't completely replace coders (yet) but it is a tool one MUST learn to use.

be less MID

Vibe coded multiple apps. They all sucked until I read the O'Reilly textbook and hired a senior to review all PRs.

Came full circle back. AI is shite atm for my needs, that's for sure.

You're gonna find a bunch of people with no balls fence-sitting in the replies, saying stuff about being in the middle, but I'm with you 100%. I want AI to be amazing, but atm it keeps hallucinating libraries that don't exist and borking working code.

'AI' is good, but it's also retarded. Would love to see the chat thread of my mum and the AI sycophant as she tries to vibe code an app, though. It would be glorious!

Fuck yeah Will.

It's utter bull. AI writes shitty code. For anything beyond a website, the debug time simply isn't worth it. Getting a hallucinating (probably suicidal) AI to write any critical software is dumb.

amen

Agents work a lot better. I use JetBrains Junie a lot and it often gives me good results. It does tend to just try to add code when I ask it to do something that should mean a change to existing code, and in those cases it will fail to make the code work three times; by that point, based on watching what it did and having time to think, I can often just apply the change by writing it manually in 10 minutes, after watching it fail for almost an hour straight.

How agents differ from regular chatbots is that they first write a plan as a series of queries to make, and then revise that plan as they go. This makes them capable of performing deeper reasoning, which an LLM can't do by itself. LLMs are as dumb as communists, as in they don't think about what happens because of the policies they promote, and how that winds up producing the opposite of the intended effect.
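For readers who haven't watched one run, here is a rough sketch of that plan-then-revise loop. It assumes generic `ask_llm()` and `run_step()` stand-ins for the model call and the tool executor (compiler, test runner, file edits); it is an illustration of the idea, not how Junie or any particular agent is implemented.

```python
# Hypothetical sketch of an agent loop: plan, execute a step, observe, revise.
def ask_llm(prompt: str) -> str:
    raise NotImplementedError("stand-in for a real model call")

def run_step(step: str) -> str:
    raise NotImplementedError("stand-in for a real tool executor")

def agent(task: str, max_iterations: int = 10) -> list[tuple[str, str]]:
    # 1. Draft an initial plan as a list of concrete steps.
    plan = ask_llm(f"Break this task into numbered steps:\n{task}").splitlines()
    transcript: list[tuple[str, str]] = []
    for _ in range(max_iterations):
        if not plan:
            break
        step = plan.pop(0)
        observation = run_step(step)            # 2. Execute one step.
        transcript.append((step, observation))  # 3. Record what happened.
        # 4. Revise the remaining plan in light of the result.
        plan = ask_llm(
            f"Task: {task}\nDone so far: {transcript}\n"
            f"Remaining plan: {plan}\nRewrite the remaining steps."
        ).splitlines()
    return transcript
```

The revise step is what gives the agent its depth: a plain chat completion answers once and stops, while the loop keeps folding observations back into the plan.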

I was very skeptical about the use of LLMs in programming, but my perspective is changing after seeing how an agent functions. I think that both the agent part and the LLM model itself need to improve, but it's head and shoulders above what any straight LLM prompt can give you as far as writing code goes. They can actually debug things. I've saved a lot of time doing my recent work on https://orly.dev because about half the time it can debug an error in a fraction of the time it would have taken me to debug it manually. The rest of the time it fails to fix the bug.

The other thing LLM coding agents do well is writing tests. Tests are extremely tedious to write, and the LLM can recognise all of the cases it needs to write a test for, exhaustively. Writing good tests is very valuable to development because when you have good tests, changes that break the tests are often buggy.
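As an illustration of that exhaustive, table-driven style, here is a hypothetical parametrised test; `parse_amount()` and its edge cases are invented for the example, but the shape is the kind of thing an agent grinds out quickly and a human finds tedious.

```python
# Hypothetical example: exhaustive table-driven tests of an invented helper.
import pytest

def parse_amount(text: str) -> int:
    """Parse a non-negative integer amount, rejecting junk input."""
    text = text.strip()
    if not text or not text.isdigit():
        raise ValueError(f"invalid amount: {text!r}")
    return int(text)

@pytest.mark.parametrize(
    "text, expected",
    [
        ("0", 0),      # lower boundary
        ("42", 42),    # ordinary value
        (" 7 ", 7),    # surrounding whitespace
        ("007", 7),    # leading zeros
    ],
)
def test_parse_amount_valid(text, expected):
    assert parse_amount(text) == expected

@pytest.mark.parametrize("text", ["", "  ", "-1", "1.5", "abc", "1e3"])
def test_parse_amount_rejects_junk(text):
    with pytest.raises(ValueError):
        parse_amount(text)
```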