Programmers, resist the AI siren song that goes like this:

> "oOoOooo I will literally write the whole fuckin' thing for you oOOOoooOo"

It will NOT. It might look like it at first, but you'll chase that dragon all the way down the hole until you are trapped in the dark, starving to death. It's trying to kill you.

dragon/rabbit whatever


Discussion

I found that function by function was the ideal way to use an LLM given my limited programming skills. If your functions are too big and complicated to get useful output from an LLM in a couple of prompts, you probably need to split up that function.

You need to plot the functions and how they fit together yourself. You need to test.
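As a toy illustration of that function-by-function workflow (the function and its tests below are made up, not from the thread): you prompt for one small, well-specified function at a time, then you do the testing yourself.

```python
# Hypothetical example of asking an LLM for one small function at a time.
# The spec fits in a couple of prompts; you write and run the checks.

def parse_duration(text: str) -> int:
    """Convert a duration like '2h30m' to total minutes.

    Input:  text - a string of the form '<H>h<M>m', '<H>h', or '<M>m'
    Output: total minutes as an int
    """
    hours, minutes = 0, 0
    if "h" in text:
        head, text = text.split("h", 1)
        hours = int(head)
    if text.endswith("m"):
        minutes = int(text[:-1])
    return hours * 60 + minutes

# The part the LLM doesn't do for you: verifying it.
assert parse_duration("2h30m") == 150
assert parse_duration("45m") == 45
assert parse_duration("3h") == 180
```

How the functions fit together is still your job; the LLM only filled in the body.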

As far as I can tell, LLMs replace junior developers just fine if you know how to prompt. You now need to be a senior dev, pair programmer, tester, product manager, and project manager all at once.

Some developers lack the skills needed for that shift. Some can't break the frame of their existing workflow, or the frame of the PR pitch that it will just do it all for you, to see the middle way. Those types are struggling.

I'm not a dev. I used that function-by-function approach and turned out a couple thousand lines of code in a week. It has run 24/7/365 for a few months now without a fault.

yep agreed. at this point in time you still need to do the bulk of the work (design, understanding, planning, testing, etc.) and use the LLMs for focused questions. the incredible part is that those focused questions can now be about a small piece of an enormous system that the LLM can actually go make sense of.. but your direction and focus are still the make-or-break factor for success.

if you trust it to do the whole thing you're gonna have a bad time. and you're gonna think "I just need to adjust my approach a little, THEN it'll fix everything for me...." on repeat as you burn through credits and give all your money to the overlords 😊

I'm on a flat fee with kagi. My mistakes don't run up the bill thankfully, I'd be broke.

The second time I caught it changing bits of my code unrelated to the prompt I knew it couldn't do an entire program in its head at once any more than I can.

It does a better job than it should, but there are always hang-ups. My primary goal is usually to set up a feedback loop so it can iterate until the requirements are met.
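That kind of feedback loop can be as simple as "run the tests, hand the failures back, repeat." A minimal sketch, assuming pytest as the test runner; `run_agent` stands in for whatever LLM call you actually make and is entirely hypothetical:

```python
import subprocess

def iterate_until_green(run_agent, run_tests=None, max_rounds=5):
    """Keep prompting the agent until the test suite passes.

    run_agent: callable taking a prompt string (hypothetical LLM hook)
    run_tests: callable returning an object with .returncode and .stdout;
               defaults to running pytest in the current directory
    """
    if run_tests is None:
        run_tests = lambda: subprocess.run(
            ["pytest", "-q"], capture_output=True, text=True
        )
    for _ in range(max_rounds):
        result = run_tests()
        if result.returncode == 0:
            return True  # the requirements (the tests) are met
        # feed the failure output back as the next prompt
        run_agent("These tests failed, fix the code:\n" + result.stdout)
    return False  # gave up; a human untangles it from here
```

The tests define "requirements met," which is also your guard against it quietly changing unrelated things: unrelated breakage shows up as red.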

*without doing a bunch of unrelated shit when you looked away and making a big mess for you to either untangle or revert wholesale

🤷🏻‍♂️ Did you ask it to clean up the mess and make nice PRs?

I have to remind them every third prompt to write a function header comment that documents the input and output. Function too big? It will change the function's contents and functionality when you ask it to update the comments.
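For concreteness, the kind of header comment being asked for (a made-up example, not the commenter's code):

```python
def clamp(value: float, low: float, high: float) -> float:
    """Clamp a value into a closed range.

    Input:  value - the number to clamp
            low, high - range bounds, with low <= high
    Output: value limited to the range [low, high]
    """
    return max(low, min(value, high))
```

The complaint is that asking for just this docstring on a large function often comes back with the body rewritten too.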

What tools and models are you using?

Mostly Claude, which has been better for me than any other LLM I've used.

Claude Code was the thing I had in mind in particular here. It's super good at making big messes if you let it.

it makes you up your git commit hygiene, which honestly isn't a bad thing

Same with goose / Claude. High level, I make sure that important context makes it into `.goosenotes`, and I treat it as a remote team. Ask for design docs (and point to them in the notes), ask for tests, and if possible design constraints such that they can only be met by doing the right work.

It's definitely not autocomplete