Totally agree that lots of people will never question what it says, even though it’s right in front of their faces that changing up their prompt can get it to answer in contradictory ways.

People aren’t faultless either though, right?

One thing you said caught my attention. Maybe I’m making something of nothing, but when you say “sourcing scripts,” I want to emphasize that on a nearly daily basis I have it write scripts that I’m quite sure don’t exist anywhere on the internet.

It *understands* in some non-conscious way, because understanding is a better way to score well during training than memorizing.

I’m a heavy LLM user, and my perception is that today’s state-of-the-art models make mistakes less frequently than nearly all the humans I interact with, both professionally and socially. Even better, it’s always available and responds instantly. That cuts communication cycles that previously would have been between me and a human by 10–1000x, for pennies. So much better.

On the mistakes topic, not sure if you’ve tried this before, but you can ask it to put on a new persona and critique what the previous LLM persona said. It can form a council of unique individuals critiquing the ideas and working toward the best answer with whatever method you like.
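The council idea can be sketched in a few lines. This is a toy illustration, not any particular product’s API: `ask_llm` is a stand-in for whatever chat-completion call you actually use, and the persona descriptions are made up.

```python
# Minimal sketch of a "council of personas": each persona critiques the
# current answer, then one final call synthesizes a revision.
# `ask_llm` is a placeholder for a real chat-completion function.

PERSONAS = [
    "a skeptical security engineer who looks for unsafe assumptions",
    "a pragmatic ops person who cares about maintainability",
    "a careful technical writer who flags unclear explanations",
]

def critique_prompt(persona: str, question: str, answer: str) -> str:
    """Build the prompt asking one council member to critique the answer."""
    return (
        f"You are {persona}.\n"
        f"Question: {question}\n"
        f"Proposed answer: {answer}\n"
        "List your strongest objections, then suggest one concrete improvement."
    )

def run_council(ask_llm, question: str, answer: str) -> str:
    """Collect a critique from every persona, then ask for a revised answer."""
    critiques = [ask_llm(critique_prompt(p, question, answer)) for p in PERSONAS]
    synthesis = (
        f"Question: {question}\n"
        f"Current answer: {answer}\n"
        "Critiques:\n" + "\n".join(f"- {c}" for c in critiques) +
        "\nRewrite the answer so it addresses every critique."
    )
    return ask_llm(synthesis)
```

You can swap in any voting or debate scheme by changing how `run_council` combines the critiques.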


Discussion

Oh of course we are at fault. LLMs are only a tool; you can't really blame their misuse on the tool alone. It's up to the user to get as much or as little value out of it as possible.

Normally it's just simple scripts from GitHub or Stack Overflow that I would have copy-pasted anyway, but the LLM made it easier. Nothing too radical, of course, but I can see how in the future an LLM could connect prompts and scripts over time to get you a better result. For sure, though, the editing, refining, and more complex stuff remain in the hands of humans.

I can totally see that. I'm sure power users are getting way more out of it than I do, and as learning improves, the product will be able to provide better results on niche queries. There are a lot of inefficiencies we can iron out for sure, with info so scattered on the internet and access to experts not always available.

No I haven't, but thanks for the tip. I will try it, since many times I get frustrated with the results and then just give up and go back to search, reading forums and trying things out myself, which makes it take waaay longer before I figure it out.

I totally get that. For one, not having access to the very best models will produce that frustration. Also, experience leads to better prompts (and helps you avoid past frustrations).

For now, there are some specific limitations, like being able to reason about a large code base all at once. But there are paths to overcoming that, some of which are live (if difficult to use) at the moment, like vector embedding databases. Another path would be automating the creation of these councils/swarms of AI agents, letting you spin up an entire corporation of LLM individuals in seconds.
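The retrieval idea behind those vector embedding databases can be shown with a toy: embed each code chunk, embed the question, and pull the nearest chunks into the prompt. Real systems use learned dense embeddings from a model; the bag-of-words vector below is purely illustrative, to show the shape of the pipeline.

```python
# Toy retrieval pipeline: "embed" chunks and query, rank by cosine similarity,
# keep the top k. A real setup would replace embed() with an embedding model.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Stand-in embedding: word counts (a real model returns dense vectors)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(chunks: list[str], query: str, k: int = 2) -> list[str]:
    """Return the k chunks most similar to the query."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(embed(c), q), reverse=True)[:k]
```

The point is that the model never needs the whole code base in context; only the few retrieved chunks go into the prompt.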

But yeah basically every day I write at least one prompt which gets me exactly the kind of “previously had to be in the hands of humans” result. I don’t know if we really need another 100x in capability or just more mature infrastructure to use what we have. Blows my mind. And also convinces me that humans are on our way out of the economy unless we stop playing with this new kind of fire. 😬