Did AI get nerfed? LLMs seem to be getting worse than they were a couple of months ago.


Discussion

Yes!

nostr:note17prc6xz33kd62hww2zfjherz8w6343v95ngpc5sl0wrwass3y2yqg6ajgk

lol nostr:note10tgqvqufe29yqqfc7fqr9xsq5x9kt6xgcj9997aqkje5fh3t2xmqsegxhw

I loosely concluded that they rolled out some crappy A/B test on me today, to see if more refined prompting iterations would please me. Even just to combine two images into a meme, 20 follow-up questions...

I assume they just try to use the absolute minimum amount of compute power so that you’re not angry enough to never use their trash again

That was my other theory: some type of soft throttling since I'd used it more than normal recently. Jumping ship soon. Gonna look into Kagi (recommended by bill cypher earlier, looks decent)

Have you tried nostr:npub10hpcheepez0fl5uz6yj4taz659l0ag7gn6gnpjquxg84kn6yqeksxkdxkr from the Mutiny Wallet guys?

Basically no. Just for a second when maybe Jack posted about it?

I trust the recommender enough to spare myself too exhaustive a search

Been thinking of switching to MapleAI, but I'll check out Kagi too.

Lemme know what you find

I get the same feeling.

Thanks, I put my vote in

I also noticed this. Switched to GLM 4.6 and much happier.

Will check it out

Like I was just telling nostr:nprofile1qqsd9pqnwyshrse7z975h5ynptq9ktz3kv8txqs7lr20zgelqtys52cpz4mhxue69uhhyetvv9ujuerpd46hxtnfduhsz9nhwden5te0dehhxarjv4kxjar9wvhx7un89uq3zamnwvaz7tmwdaehgu3wwa5kuef0yqwdjn I really like Kagi because I can switch models whenever I want.

Maybe extracting more value on purpose? πŸ€·β€β™‚οΈ

Too much demand so they are getting throttled until energy infrastructure catches up

That makes a lot of sense

The amount of fiat being tossed at this is crazy.

IREN is gonna kill it

IREN and my other mining stocks are up so much, it's crazy. I never thought they would be outperforming by this much. I guess it makes sense in hindsight, with the energy bottleneck, when they have all the contracts.

I do feel like there is an incentive to stretch out the final 80% of the conversation. Get a few more dollars per interaction. One-shot perfect responses may be less valuable than making you ask 5 more questions.

Interesting, I would have thought the opposite, that giving a strong answer without needing follow-ups would be better for their margins.

Perhaps I am running into large context windows where the lack of memory leads you back to explaining and framing the basics you started with. Three or four "solutions" result in errors, but with a little logical input from me the solution appears. Maybe it's just limitations.

But like the algo feeds, why wouldn't AI string you along for a few more spends if it is cheap for them to do?

From what I understand, tokens cost energy and compute. The more tokens people use, the more costs the company has to pay.

With social media algorithms, they try to hook you longer because they get ad revenue. As far as I know, there is no ad revenue on products like ChatGPT yet.

Yeah, but there must be margin in it, and not all compute is the same. It isn't just about cost; the cheapest business to run is a closed one. If I am using Claude I am not using other models. Perhaps what I am saying is I don't put it past the model providers to build them to keep you using it vs not.