How good of a video card do you need to run actually smart LLMs locally? #asknostr #ai #llm
Discussion
Idk. Not even the stuff running in massive data centers is "actually smart," but a good APU can do useful work even with its integrated GPU, and adding more RAM probably helps more than adding a bigger GPU
I run them on my 3090 with KoboldCpp. Modern 30Bs are pretty smart. But you don't need a good GPU to have fun with an LLM: 6GB of VRAM is enough, and otherwise there are free options such as Colab that can run KoboldCpp too.
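To see why a 30B fits on a 3090 and smaller models fit in ~6GB, here's a rough back-of-envelope sketch. The function name, the flat overhead allowance, and the exact figures are my own assumptions, not measurements of any specific model or of KoboldCpp itself:

```python
# Rough VRAM estimate for a quantized LLM (assumption: weights dominate,
# plus a flat allowance for KV cache and runtime buffers).

def vram_gb(params_billion, bits_per_weight, overhead_gb=2.0):
    """Weight size in GB plus a fixed overhead allowance (hypothetical figure)."""
    weights_gb = params_billion * 1e9 * (bits_per_weight / 8) / 1e9
    return weights_gb + overhead_gb

# A 30B model quantized to 4 bits per weight:
print(vram_gb(30, 4))  # 17.0 -> fits in a 3090's 24 GB
# A 7B model quantized to 4 bits per weight:
print(vram_gb(7, 4))   # 5.5 -> fits in ~6 GB of VRAM
```

The takeaway is that quantization, not raw GPU horsepower, is what makes local models practical: 4-bit weights cut memory to roughly a quarter of full fp16.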