Costs more than a decent used car to run a mediocre LLM right now, and the rig maxes out a standard household circuit.

so I guess we will need more circuits


Discussion

And then there are the people who cry about needing H100s to run good LLMs locally. Then you look at what they're doing, and see fp32 model files.
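The fp32 complaint is easy to put numbers on. A back-of-envelope sketch (the 70B parameter count is just an illustrative size, not any specific model) of how much memory the raw weights alone need at different precisions:

```python
# Rough memory needed just to hold the weights of a 70B-parameter model
# at different precisions. Ignores KV cache, activations, and overhead.
BYTES_PER_PARAM = {"fp32": 4.0, "fp16": 2.0, "int8": 1.0, "int4": 0.5}

def weight_gb(n_params: float, dtype: str) -> float:
    """Gigabytes for the raw weights alone."""
    return n_params * BYTES_PER_PARAM[dtype] / 1e9

for dtype in BYTES_PER_PARAM:
    print(f"70B @ {dtype}: {weight_gb(70e9, dtype):.0f} GB")
# fp32 needs 280 GB; int4 quantization cuts that to 35 GB,
# which is the difference between a multi-H100 rig and a single consumer box.
```

Same weights, 8x less memory just by dropping precision, which is why running fp32 files locally and then complaining about hardware requirements misses the point.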

what's the cost of a used car anyway?

my take is that the free models get better and the gap closes within the next 12 months, so it becomes doable

I was thinking about 7k per 'thread' for a 32 GB RAM setup. But if you look at tinygrad/tinybox, you may decide we're talking 10k+, with the power to support it.

the software tricks are gonna bring down the HW requirements, like real fast

I'm ignorant. I haven't heard this.

I don't pay more than 5k.

GPU and RAM can easily get eaten up with that.