I decided to see if AI could help me with a Rust programming issue. I started by looking at GitHub Copilot, and that led me to a few IDEs; after reviewing them for Rust support, I went with JetBrains RustRover.

Well, I sicced the AI on the problem with "Fix with AI". After 20 seconds or so it changed my code. It applied a naive solution I had already tried. I compiled and got the next error. I went to that error and selected "Fix with AI". Again, another naive solution that didn't work. I did this about 5 or 6 times, and it never got my code to compile.

I guess I was hoping for too much.

Discussion

Really depends on the model you're using.

AI was trained on code written by developers like you.

Come back when they reach AGI. Then, maybe the student can teach the master.

Yes, and I think we can train it even harder.

As soon as I put away the AI, I came up with the answer and fixed my code.

I think the real problem with AI is that when you try to use it, you disengage your brain. It's like the Google effect, where you start forgetting everything because you know you can look it up on Google.

True. I agree on all points. Though I would suggest trying a few other models and seeing if you experience the same results. Some of the latest versions have gotten a lot better, with fewer mistakes. In the earlier days, and even still to some degree today, they might ghost-edit a block of code you didn't even prompt them to touch. So you must continually audit every line. But this issue seems to have gotten better with Claude Opus 4, Grok 3, and GPT-4.1.

I had upgraded to a fabulous 14-year-old Chevy Suburban whose liftgate closes itself. After driving that for 6 months and then switching vehicles, I am constantly stopping myself from reaching for the close button instead of just swiftly closing it by hand. Same thing here, just probably exponentially worse for your brain than liftgates.

What AI? If it's not Claude... it's garbage for real stuff. Try Cursor or Windsurf, I'm telling ya.

The results may be the same, but at least you will have tried an engine that has a chance 😅

The key to success is using an agent loop like goose and specifying a hard test. People love to write tiny "coverage" unit tests, but those are garbage. You write one test that says "do the thing the way it will be done in production," and then you burn tokens until it passes. It might take a while, and you might need to suggest some ideas, but it will probably only cost a few dollars and not much of your mental energy. Then you review the code, find what else is wrong with it, add another hard test, and repeat.
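For example, in a Rust project that "hard test" might be a single integration test that drives the public API the way production will, instead of poking at tiny helpers. A minimal sketch, assuming a hypothetical crate with `parse_config` and `run_pipeline` entry points (all names and fields here are placeholders, not a real API):

```rust
// tests/end_to_end.rs
// One "hard," production-shaped test rather than many tiny coverage tests.
// NOTE: `mycrate`, `parse_config`, `run_pipeline`, and the report fields are
// hypothetical placeholders for your crate's real entry points.

use mycrate::{parse_config, run_pipeline};

#[test]
fn processes_a_realistic_job_end_to_end() {
    // Feed in input shaped like what production will actually see.
    let raw = r#"
        [job]
        name = "nightly-export"
        retries = 3
    "#;

    let config = parse_config(raw).expect("production-shaped config should parse");
    let report = run_pipeline(&config).expect("pipeline should run to completion");

    // Assert on externally visible behaviour, not internal details.
    assert_eq!(report.failed_items, 0);
    assert!(report.processed_items > 0);
}
```

The agent loop then becomes: run `cargo test`, read the failure, change the code, and repeat until this one test passes.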

I've encountered this sort of thing even on JavaScript, for which the AI almost certainly has the most training data.

I find my workflow looks a lot like:

Code by hand... Ask LLM a question... Code by hand some more... Prompt for a small task... Code some more... Hit a bug... Ask LLM... Search the web... Fix bug by hand... Code more...

The AI assistants are part of the workflow, but they're more like research assistants than anything else.

Yeah, learning how to make it work for you... what it plus you can do, and crafting... I can already tell, this is not fake-news VR-type singularity, this is the real singularity.

I'm not convinced it's the singularity, but it's going to be more impactful than VR, at least in some fields.

I think it's billed as a cure-all, sometimes, when it's not.

Yes, only naive proposals that you can ignore.

According to a friend of mine, the solution is to:

1) Create a README file for the AI agent telling it to run the compiler and all automated tests, to try another solution if the compiler or tests fail, and not to claim to have solved the problem before this happens (a rough sketch of such a file is below).

This works well for him.

He uses Claude Code.
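For what it's worth, here is a minimal sketch of what such a file might contain for a Rust project, assuming a standard cargo setup (`cargo build` and `cargo test` are real commands, but the wording is illustrative, not his actual README):

```markdown
# Instructions for the AI agent

- After every change, run `cargo build` and `cargo test`.
- If the build or any test fails, try a different approach rather than
  re-submitting the same fix.
- Do not claim the problem is solved until `cargo build` and `cargo test`
  both pass with no errors.
```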