Yeah that's what I mean, the damn thing has no way of comprehending the prompt you give it lol! It's a BS machine and it's not even good at that. I gave a recommendation in another reply above. Oh also Goose is supposed to be really good.
I’m aware it doesn’t comprehend; it’s just a probability matrix. But the prompt absolutely does make a difference in the output, so I’m not quite following your point. I’ve literally built numerous single-purpose apps with LLMs, and it’s not because they can think — sorry, it’s because, given a thorough and specific prompt, they can successfully predict which small blocks of code will produce which results.
The biggest problem is that you can’t be vague with your prompt, so you kind of have to understand the complexity behind what you’re asking for and work out how the app should be structured to get it built properly. You can’t let it do any of the “design,” really, or it’ll turn into a shitshow real fast.
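To make that concrete (a toy example of my own, not from any actual project): the gap between a vague prompt and one that pins down the structure looks something like this:

```
Vague (the LLM does the "design" — expect a shitshow):
  "Build me a to-do app."

Specific (the structure is decided for it up front):
  "Write a single-file Python CLI to-do app. Store tasks as JSON in
   todo.json. Support exactly three subcommands: add <text>, list,
   done <index>. No external dependencies; use argparse and the json
   module only. Print errors to stderr and exit nonzero on bad input."
```

The second version works precisely because every design decision — file layout, storage format, command surface, error handling — was made by the human before the model ever saw it.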
Yeah, I understand that. I guess I'm also saying that this will be a recurring problem until we have a completely different form of AI. Each LLM is trained a certain way, and the magic formula for the right prompt, even for something simple and logical, is going to have to be crafted from scratch a lot of the time.
Like with every new problem, you're highly likely to run into this again.