I want to run my own LLMs at home. For a budget of ~$1000, what hardware should I get? #asknostr


Discussion

Same question but for LAMs

The 1TB Redmagic 10 costs $1000, and once it gets Winlator drivers it'll be like a smaller, more powerful Steam Deck that also makes phone calls

What’s a Redmagic?

A phone with a headphone jack, no camera bump, and enough power to play GTA V or Fallout 4 with mods at higher-than-minimum graphics settings once emulators catch up to the new CPU (the previous gen can already do it)

What happens when you ask AI?

IDK, I want to run my own at home. It’s a chicken and egg problem.

To run LLMs locally on a $1000 budget, prioritize a powerful GPU with sufficient VRAM, a decent CPU, and enough RAM. Here's a suggested setup:

- **GPU**: A used NVIDIA RTX 3090 (24GB VRAM) or a used Tesla P40 (24GB VRAM) for cost efficiency.[1][2]

- **CPU**: AMD Ryzen 5 5600X or a 12th-gen Intel Core i5 to handle data preprocessing and I/O.[1][6]

- **RAM**: At least 32GB for smooth operation during inference.[1][6]

- **Storage**: 1TB NVMe SSD for fast model loading and an optional HDD for additional storage.[6]

This configuration balances performance and budget, letting you run quantized models up to roughly 13B parameters efficiently (see the sizing sketch below). Use Linux (e.g., Ubuntu) for better support with AI tooling.[1][2]
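As a rough sanity check on that 13B figure, here's a back-of-the-envelope sketch of how weight memory scales with parameter count. The quantization widths are standard, but the 20% runtime overhead factor is an assumption, not a measured value:

```python
# Rough VRAM estimate for local LLM inference: weights dominate,
# plus some headroom for the KV cache and runtime buffers.
# The 1.2x overhead factor is an assumption, not a measured value.

def vram_needed_gb(params_billion: float, bytes_per_param: float,
                   overhead: float = 1.2) -> float:
    """Approximate VRAM in GB: parameter count x quantization width x overhead."""
    return params_billion * bytes_per_param * overhead

for name, params, bpp in [
    ("13B @ FP16", 13, 2.0),   # full half precision
    ("13B @ Q4",   13, 0.5),   # 4-bit quantization
    ("70B @ Q4",   70, 0.5),
]:
    print(f"{name}: ~{vram_needed_gb(params, bpp):.1f} GB")

# 13B @ FP16: ~31.2 GB  -> exceeds a 24GB RTX 3090
# 13B @ Q4:   ~7.8 GB   -> fits comfortably, with room to spare
# 70B @ Q4:   ~42.0 GB  -> needs CPU offload or a second GPU
```

The takeaway: quantization is what makes 13B (and even 30B-class) models practical on a single 24GB card; full-precision weights alone would already overflow it.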

Sources

[1] Recommended Hardware for Running LLMs Locally - GeeksforGeeks https://www.geeksforgeeks.org/recommended-hardware-for-running-llms-locally/

[2] Advice on Building a GPU PC for LLM with a $1500 Budget - Reddit https://www.reddit.com/r/LocalLLaMA/comments/1drnbq7/advice_on_building_a_gpu_pc_for_llm_with_a_1500/

[3] Tutorial: Build a Low-Cost Local LLM Server to Run 70B Models https://www.comet.com/site/blog/build-local-llm-server/

[4] Build an Ai Server for less than $1k and Run LLM's Locally FREE https://www.youtube.com/watch?v=HoMtncxN4eA

[5] Tech Primer: What hardware do you need to run a local LLM? https://www.pugetsystems.com/labs/articles/tech-primer-what-hardware-do-you-need-to-run-a-local-llm/

[6] Hardware for LLMs - by Benjamin Marie https://newsletter.kaitchup.com/p/hardware-for-llms

[7] How to run your own free, Open-Source LLM AI on a budget-friendly ... https://www.linkedin.com/pulse/how-run-your-own-free-open-source-llm-ai-windows-home-santamaria-tpvgf

Not sure about the exact details, but have you seen the new stuff that Home Assistant is putting out?

No, what’s that?

They're just now launching their "voice edition", with devices that do the processing locally: https://www.home-assistant.io/voice-pe/

Neat! I’ll check it out 🙏

You can run "Voice Assistant" on top of "Home Assistant" and do the voice processing on your own hardware, but I haven't seen any specs.

Just buy more sats, be happy, and don't feed the evil...

nostr:npub17hsu5gd24stmzuuezwvavgeuwwac233nfzg59dfyhf4fvel8n2sqw3d0k9 in 3 years

Oddball answer just to add to the mix: a Redmagic 10, to maximize what you can run locally on your phone even when you're away from your rig

Why $1000? If you're happy with small models and your phone can handle them, you can run some today, offline, on your phone

I’d like to run some of the bigger, newer open-source ones. Bigger context windows, etc.
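On the context-window point, a quick sketch of why longer contexts get expensive: every token in the context keeps a key and a value vector per layer in the KV cache. The numbers below assume a hypothetical 13B-class model (40 layers, 5120 hidden dim, FP16 cache, no grouped-query attention; real models often shrink this cache considerably):

```python
# KV-cache memory grows linearly with context length: each token
# stores one key and one value vector per transformer layer.
# The model shape below is a hypothetical 13B-class configuration.

def kv_cache_gb(context_len: int, n_layers: int = 40,
                hidden_dim: int = 5120, bytes_per_elem: int = 2) -> float:
    """Approximate KV-cache size in GB for a given context length."""
    per_token = 2 * n_layers * hidden_dim * bytes_per_elem  # key + value
    return context_len * per_token / 1e9

for ctx in (4_096, 32_768, 131_072):
    print(f"{ctx:>7} tokens: ~{kv_cache_gb(ctx):.1f} GB of KV cache")

#    4096 tokens: ~3.4 GB
#   32768 tokens: ~26.8 GB
#  131072 tokens: ~107.4 GB  -> long contexts can dwarf the weights
```

So for the bigger open models with long context windows, cache memory, not just weight memory, is what eats the hardware budget.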