That might be true. It's also true that the training data currently isn't great, and AI assistants operate at roughly a junior level. But they'll probably eventually reach a level where they can replicate the experience of seasoned developers, which is itself grounded in rational, statistical patterns. For example: how to best structure an architecture, how to write and prepare tests, how to apply "real life" knowledge, which tech stacks to use for specific tasks, and how... all experience that AI doesn't have yet. But more advanced versions could incorporate it.


Discussion

I find that the agentic approach, as seen in Cursor or VS Code Copilot, is a step in this direction.

I like to compare LLM «intelligence» with how we humans approach knowledge and experience: through general knowledge and general capabilities.

One example I use is: school taught us that the earth is round, along with an introduction to the math and physics behind it. We know this for a fact, but if I were asked to produce a proof of this statement today, I don't have the knowledge to do so. I do, however, have the capability and understanding to argue for it, given the correct context.

The same applies to LLMs, I believe. As long as they are capable of retrieving correct data for a given task and rejecting the wrong data, LLMs can accomplish most tasks, not on their own, but by using tools. The same way humans use calculators and other tools, LLMs can too.
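The calculator analogy can be sketched in code. Below is a minimal, hypothetical illustration of the tool-use pattern: the model doesn't compute the answer itself, it emits a structured tool request, and the surrounding harness dispatches it to the right tool and returns the result. The request format, the `TOOLS` registry, and the `handle_model_output` helper are all made up for illustration; they don't correspond to any specific vendor's API.

```python
def calculator(expression: str) -> str:
    """A deliberately tiny 'calculator' tool: arithmetic characters only."""
    allowed = set("0123456789+-*/(). ")
    if not set(expression) <= allowed:
        raise ValueError("unsupported expression")
    # eval is acceptable here because the character set is restricted above
    return str(eval(expression))

# Hypothetical tool registry: name -> callable
TOOLS = {"calculator": calculator}

def handle_model_output(output: dict) -> str:
    """Dispatch a (mock) model tool request to the matching tool."""
    tool = TOOLS[output["tool"]]
    return tool(output["input"])

# Instead of guessing "what is 37 * 49?" from its weights, the model
# would emit a request like this, and the harness runs the tool:
mock_request = {"tool": "calculator", "input": "37 * 49"}
print(handle_model_output(mock_request))  # 1813
```

The key point of the analogy: the model only needs the general capability to pick the right tool and phrase the request correctly, just as a person only needs to know when to reach for a calculator.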