LLMs can't think. It's pretty much useless to try.


Discussion

I didn’t say it could think, but the right prompt will change its coding behavior to something more reliably in line with how LLMs code. I just hate having to constantly reprompt it as it drifts back to producing ridiculous amounts of code.

I was hoping there might be a model trained more carefully to engineer code in an LLM aligned way, rather than a GitHub probability vomit machine. 😆

Yeah that's what I mean, the damn thing has no way of comprehending the prompt you give it lol! It's a BS machine and it's not even good at that. I gave a recommendation in another reply above. Oh also Goose is supposed to be really good.

I’m aware it doesn’t comprehend, it’s just a probability matrix. But the prompt absolutely does make the difference in the output. I’m not quite understanding your point. I’ve literally made numerous single purpose apps with LLMs and it’s not because they can think, it’s because they can predict successfully what small blocks of code will produce what results in connection with a thorough and specific prompt.

The biggest problem is you can’t be vague with your prompt, so you kind of have to understand the complexity behind what you’re asking for and know how the app should be structured to get it to actually build properly. You can’t let it do any of the “design” really or it’ll turn into a shitshow real fast.

Yeah I understand that. I guess I'm also saying this will be a recurring problem until we have a completely different form of AI. Each LLM is trained a certain way, and the magic formula for the right prompt, even for a simple logical task, is going to have to be crafted from scratch a lot of the time.

Like with every new problem you're highly likely to run into this again.

That said, I really like mistral.ai for pretty much all my LLM stuff. I have one that runs locally. It's no magic bullet but it's a competent little LLM and more efficient than ChatGPT, and you're not supporting "OpenAI" with your data, so that's good.

Yeah I only went back to ChatGPT because someone said that it recently got much better at coding… but I’m suspicious that the person who said it doesn’t know much about coding, especially not anything of a decent size 😅

I like how people always have opinions on stuff they don't understand and stick with products that are inferior lol!

AIs cannot do anything close to software engineering, yet. LLMs always have to be prompted by someone who knows what they're doing. So when people have no clue, it looks impressive lol