I don’t trust these guys (LLMs). I use them for coding at work. I’ll ask one to review my Git diffs or full repos and it will give me advice. But sometimes I’ll ask it about specific design decisions and it will give me dubious responses; when I push back with more things for it to consider, it does a 180 and finally gives me the “correct” suggestion.

But what if I hadn’t known to prompt it further? Or what if there’s more I could prompt it with to get an even better decision?

We’re still far from it giving decent advice right off the bat.

Test generation can also be useful, although the generated tests are almost always buggy.

Still, I feel my productivity is increasing, especially with line completion.


Discussion

It usually takes at least two prompts to get the best solution, and you have to ask it to “reflect” and “think creatively”.

Those two seem to give the best results.
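
If you want to script that two-prompt pattern, here’s a minimal sketch using the OpenAI Python SDK. The model name and the design question are just placeholders, not anything from the thread; swap in whatever model and question you actually use:

```python
# Minimal sketch of the two-prompt "reflect" pattern.
# Assumes the openai Python SDK (>= 1.0) is installed and
# OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # placeholder; use whatever model you have access to

# Prompt 1: the actual design question.
messages = [
    {
        "role": "user",
        "content": "Should these two services talk over a message queue "
                   "or direct RPC? Here's the context: ...",
    }
]
first = client.chat.completions.create(model=MODEL, messages=messages)
messages.append(
    {"role": "assistant", "content": first.choices[0].message.content}
)

# Prompt 2: ask it to reflect and think creatively about what it missed.
messages.append({
    "role": "user",
    "content": "Reflect on your answer and think creatively: "
               "what trade-offs or alternatives did you overlook?",
})
second = client.chat.completions.create(model=MODEL, messages=messages)
print(second.choices[0].message.content)
```

Keeping the first answer in the message history is what makes the “reflect” turn work: the model critiques its own earlier response instead of starting from scratch.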

I’ll try that with Claude! Haha. I usually say “what if you consider XYZ”, haha.