Replying to alphakamp

I feel like these models are hyped to be PhD level, but in reality they perform like elementary school students. Obviously I'm not using the large parameter versions, but damn, the small versions are not in any way usable for anything real. Thus the closed cloud models are still king unless you can afford to run the full 100B+ parameter models at home. I need more powa!

alphakamp 11mo ago

For example, DeepSeek 8B took 15 mins on my hardware and got it wrong. And ChatGPT (free tier, o3-mini) got it right in less than 5 seconds😂


Discussion

alphakamp 11mo ago

Bwhahahaha but ChatGPT can't fucking count
