How good of a video card do you need to run actually smart LLMs locally? #asknostr #ai #llm

nostr:nevent1qqsxdmryjj5eh3mcphnyw6u3sn6rjfn4pj7n5xj3gecc9kl79q7dg9gpz4mhxue69uhkummnw3ezummcw3ezuer9wchsyg8vqq74aegszqvlrwuvtphfv49dh2gnalqk7qs9rsuhtp557u97e5psgqqqqqqsctjqgd


Discussion

Idk. Not even the stuff running in massive data centers is "actually smart," but a good APU can do a lot even with its integrated GPU, and past that, adding more RAM probably helps more than adding more GPU.

I run them on my 3090 with KoboldCpp. Modern 30Bs are pretty smart. But you don't need a good GPU to have fun with an LLM: 6 GB of VRAM is enough, and otherwise there are free options such as Colab that can also run KoboldCpp.
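For a rough sense of why a 30B model fits on a 3090, here's a back-of-envelope VRAM estimate. This is a sketch, not an exact formula: it assumes 4-bit quantization (common for local inference) and a hypothetical flat overhead figure for KV cache and activations, which in practice varies with context size.

```python
def est_vram_gb(params_billions, bits=4, overhead_gb=2.0):
    """Rough VRAM estimate for a quantized LLM.

    Weights take params * (bits / 8) bytes; overhead_gb is a crude
    allowance for KV cache and activations (assumption, varies a lot).
    """
    return params_billions * bits / 8 + overhead_gb

print(est_vram_gb(30))  # ~17 GB: fits a 24 GB 3090
print(est_vram_gb(7))   # ~5.5 GB: near the 6 GB "enough to have fun" mark
```

By this estimate a 4-bit 30B needs roughly 17 GB, comfortably inside a 3090's 24 GB, while a 4-bit 7B lands around the 6 GB figure mentioned above.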