There's this thing called "running your own open-source AI models on your own hardware"
Discussion
Right now it costs more than a decent used car to run an LLM that's still useless, on a rig that maxes out a standard household circuit.
So I guess we'll need more circuits.
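Rough math on the circuit claim, assuming a US 15 A / 120 V circuit and ballpark board-power numbers (the build and wattages below are assumptions, not measurements):

```python
# Rough power budget for a local LLM rig vs. a standard US household circuit.
# All wattages are ballpark assumptions, not measured figures.

CIRCUIT_WATTS = 15 * 120  # 15 A breaker at 120 V = 1800 W (ignoring the 80% continuous-load rule)

rig = {
    "4x RTX 3090": 4 * 350,                      # ~350 W board power each
    "CPU + board + fans": 250,
    "PSU losses (~10%)": 0.1 * (4 * 350 + 250),
}

total = sum(rig.values())
print(f"Estimated draw: {total:.0f} W of {CIRCUIT_WATTS} W available")
for part, watts in rig.items():
    print(f"  {part}: {watts:.0f} W")
# A 4x GPU rig lands around 1815 W -- right at the limit of one standard circuit.
```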
And then there are the people who cry about needing H100s to run good LLMs locally. Then you look at what they're doing and see fp32 model files.
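The fp32 complaint is mostly arithmetic: each weight is 4 bytes at fp32 and roughly half a byte at 4-bit quantization. A quick sketch (the 70B parameter count is just an example, and this counts weights only):

```python
# Rough VRAM needed just for the weights of an N-parameter model
# at different precisions. Overheads (KV cache, activations) are ignored.

BYTES_PER_PARAM = {"fp32": 4.0, "fp16": 2.0, "int8": 1.0, "q4": 0.5}

def weight_gb(params_billions: float, precision: str) -> float:
    return params_billions * 1e9 * BYTES_PER_PARAM[precision] / 2**30

for precision in BYTES_PER_PARAM:
    print(f"70B @ {precision}: {weight_gb(70, precision):.0f} GB")
# fp32 needs ~261 GB (hence the H100 complaints); q4 needs ~33 GB,
# which fits on a pair of consumer 24 GB cards.
```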
What's the cost of a used car anyway?
My take is that the free models get better and the gap closes within the next 12 months, so that it's doable.
I wouldn't pay more than $5k.
That can easily get eaten up by the GPU and RAM alone.
Tried that for a bit before.
Utterly terrible on anything less than top-tier hardware, which I have neither the budget nor the desire to acquire.
I'm not going to be locked into a system of perpetual FOMO.
I'm not missing out. Besides, I'm dead weight when it comes to this sort of stuff. Useful for ballast and that's about it.
Things that might change my opinion: open-source models that I can train on my own dataset (which would be mostly books I've read that aren't trash), and models effective enough for simple, useful local things, like voice recognition so I can be lazy about home automation.
Basically, no black-box stuff, and no crazy hardware needed to carry out tasks at better than a 1960s-robot level (which is actually difficult).
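For what it's worth, the voice-recognition wish is arguably already doable on modest hardware. A minimal local sketch using the open-source openai-whisper package (the "base" model size and the audio file name are placeholder assumptions about your setup):

```python
# Minimal local speech-to-text with OpenAI's open-source Whisper
# (pip install openai-whisper; also needs ffmpeg on the PATH).
# "base" runs fine on CPU; "command.wav" is a placeholder file name.

import whisper

model = whisper.load_model("base")        # ~74M params, downloads once, runs offline after that
result = model.transcribe("command.wav")  # e.g. a short "turn off the kitchen lights" clip
print(result["text"])                     # hand this string to your home-automation logic
```

From there, matching the transcript against a handful of fixed phrases is plain string handling, so nothing beyond the model itself is a black box.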