You're very optimistic or pessimistic, depending on how you look at it. Since the release of ChatGPT in 2022, I've seen some improvements in new iterations, but I think it's all going much slower than the wild claims I have heard and keep hearing. I have yet to see an LLM write something longer than, say, 100 lines of code before it breaks or forgets/skips important details.

I'm no expert, and I think they're great tools, but I have no problem programming without them at this point. I would say they answer a question much quicker than searching for it myself, or they give me good input to make my search more efficient, because I can describe a problem I don't know the solution to instead of guessing at what the solution direction could or should be.

To me, this is all going way slower than a lot of people claimed, and I have yet to see the first AI that can actually make a decent version of Tetris from a single prompt.


Discussion

I don't have any hot takes. I am just thinking about what the next 10 years might look like as a programmer, after watching agentic AI basically do its own thing semi-autonomously on my computer for the past couple of weeks. I feel like the struggles it's having come down to poor tooling.

Yeah, using `codebuff` a bunch has been... pretty earth-shattering.

I think the key component we're still missing is a way to train these models without ever-increasing amounts of energy (and specialized hardware). Right now training seems to scale horribly, which is probably why I see only small incremental gains over previous models.

I don't know if there's a difference between training on text and training on imagery, but it's still the "neural" approach as I understand it, so that probably makes them about equal.

Anyway, this is more about a speculative future than a prediction of when (or if) it will take place.