#asknostr

Dear nostr:nprofile1qqs9df4h2deu3aae83fmet5xmrlm4w5l9gdnsy3q2n7dklem7ezmwfcpz4mhxue69uhhyetvv9ujuerpd46hxtnfduhszythwden5te0dehhxarj9emkjmn99uah95g0 I'm a huge fan of your home server, and it was not hard to set up even as a salesperson, not an engineer.

Since I'm not a technical wizard, I need some advice/thoughts. If I wanted to build a self-sovereign LLM to run and host from my house, what is the best hardware to purchase with cash or non-KYC corn? Once I have the appropriate hardware, what models can I download and run locally?

I want to teach my kids how to leverage this tech, but I don't want them to end up in the machine's blob.

Is it best to add to my Start9 system, purchase Apple machines, or other PCs?


Discussion


Love my Start9 unit as well, very easy. Also a great question. Following.

The easiest way to start running models locally is to download LM Studio.

https://lmstudio.ai/

It doesn't run in the terminal, so it's not intimidating to get started. Almost any computer is powerful enough to run the most basic AI models, which you can download from within the app. You can start with a 1B or 3B parameter model and then expand from there depending on what hardware you have.
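If you ever want to go a step beyond the chat window, LM Studio can also expose a local, OpenAI-compatible server you can script against. A minimal sketch in Python, assuming the server is enabled on its default port (1234) and a model is already loaded in the app:

```python
# Minimal sketch: chat with a model loaded in LM Studio from Python.
# Assumes LM Studio's local server is enabled (OpenAI-compatible,
# default http://localhost:1234/v1) and a small model is loaded.
# Requires: pip install openai
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # LM Studio's local endpoint
    api_key="lm-studio",                  # placeholder; no real key needed locally
)

response = client.chat.completions.create(
    model="local-model",  # LM Studio serves whichever model you've loaded
    messages=[
        {"role": "system", "content": "You are a helpful, private assistant."},
        {"role": "user", "content": "Explain what a parameter is in an LLM, simply."},
    ],
    temperature=0.7,
)

print(response.choices[0].message.content)
```

Everything stays on your own machine; nothing in that request leaves your house.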

As for buying hardware, the sky is the limit. If you have a gaming PC you can run it on that, and you should be able to get a model of up to around 13B parameters going.

As for custom hardware, this isn't released yet, but I will be keeping my eye on it when it comes out:

"Nvidia announces $3,000 personal AI supercomputer called DigitsThe desktop-sized system can handle AI models with up to 200 billion parameters."

https://www.theverge.com/2025/1/6/24337530/nvidia-ces-digits-super-computer-ai

Ask an LLM. You just really need a lot of RAM. For now, it's better to do it outside of s9. But if you're running s9 inside Proxmox, that's the easiest.
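To get a feel for how much RAM "a lot" means, here's a back-of-the-envelope sketch (a rough rule of thumb I use, not an official spec): memory is roughly parameter count times bytes per parameter for the chosen quantization, plus some overhead for context and runtime buffers.

```python
# Back-of-the-envelope RAM estimate for running a local LLM.
# Rough rule of thumb only: memory ≈ parameters × bytes per parameter,
# plus ~25% overhead for context and runtime buffers.

def estimate_ram_gb(params_billion: float, bytes_per_param: float = 0.5,
                    overhead: float = 1.25) -> float:
    """bytes_per_param ≈ 0.5 for 4-bit quantization, 2.0 for fp16."""
    return params_billion * 1e9 * bytes_per_param * overhead / 1e9

for size in (3, 7, 13, 70):
    print(f"{size:>3}B model, 4-bit quant: ~{estimate_ram_gb(size):.1f} GB RAM")

# A 13B model at 4-bit quantization lands around 8 GB,
# which is why it fits on a typical gaming PC.
```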

Great idea!!!

You can use FreeGPT-2 on StartOS to find and run open source LLMs locally and interact with them through a ChatGPT-like user interface. It works great for models up to 13B parameters. The drawback is that it's much slower than something like ChatGPT. The solution would be to attach a high-powered, expensive GPU to accelerate inference, but that is not yet supported by StartOS. It will be though, and hopefully by then GPUs will be coming down in price as well. But for now, have fun with it, try it out, it really does work well!