Note that to run local models you need a computer with a very powerful graphics card, and you still won't get close to the performance of models like Claude or even GLM. You'd need something like $1M worth of graphics cards, not even counting electricity, to get anything remotely close to GLM.


Discussion

I thought that was just to train them. That's crazy that it still takes that much to run an LLM.

I thought that Shakespeare was its own SLM built for coding nostr clients.

That figure is for training them. To max out LLM performance for a single user running today's existing models, you need more like a few thousand dollars of graphics cards. And we're starting to get pretty useful stuff in the 4-32GB range (typical consumer devices).

Meant 4-32GB of memory, forgot to specify
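A rough back-of-the-envelope sketch of why that 4-32GB range works: memory to load a model is roughly parameter count times bits per weight, plus some runtime overhead for the KV cache and buffers. The 1.2x overhead factor below is my own assumption, not a fixed rule.

```python
def model_memory_gb(params_billion, bits_per_weight, overhead=1.2):
    """Estimate GB of memory needed just to load and run a model.

    params_billion  -- model size in billions of parameters
    bits_per_weight -- quantization level (16 = fp16, 4 = Q4, etc.)
    overhead        -- assumed multiplier for KV cache / runtime buffers
    """
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# A 7B model quantized to 4 bits fits in ~4 GB:
print(round(model_memory_gb(7, 4), 1))   # ~4.2

# A 32B model at 4 bits needs ~19 GB, near the top of consumer hardware:
print(round(model_memory_gb(32, 4), 1))  # ~19.2
```

So 4-bit quantized models from roughly 7B up to ~30B parameters land right in that consumer 4-32GB window, while frontier-scale models do not.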

Yeah that's more what I assumed it would be like by now.