Local AIs are very helpful, especially for asking stuff you wouldn't want to ask the big tech hive-mind AIs.

I've been giving it medical reports from me and my family, and it's been as accurate as the doctor in every case. It even helps explain the cases further, which doctors never have time for, or which you don't think to ask about when you have face-to-face time.

Discussion

What do you use for your local AI? How do I go about installing one or getting it set up?

Ollama as the runner, multiple models like gemma3, qwen2.5, and mistral for different tasks, and Open WebUI for a ChatGPT-like UI. There's a lot more you can do, but this covers like 95% of the job.
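If you want to poke at it from code instead of the UI, Ollama also exposes a REST API on localhost (the same API Open WebUI talks to). A minimal sketch in Python, assuming the service is running on the default port (11434) and you've already pulled the model with "ollama pull gemma3":

```python
# Minimal sketch: query a local Ollama server over its REST API.
# Assumes Ollama is running on the default port and the model is pulled.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "gemma3",    # swap in qwen2.5, mistral, etc.
        "prompt": "Explain this lab result in plain language: ...",
        "stream": False,      # return one JSON object instead of a stream
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["response"])
```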

Is it simply a download and install situation? I've never actually tried before.

Depends; if you have a little experience with Docker, it's smooth sailing.

Never actually used Docker before, but... I know about it. I'm sure I can figure it out. Thanks!

What do you think of Opencode X Ollama?

Is there AI-specific hardware that I can buy and put on my local network?

I don't mean something like an NVIDIA DGX, but something better than a regular computer.

I have an RTX 4080, and it works about as well as consumer hardware can without bankrupting yourself. I'm guessing anything over a 3070 would work well.

What model size do you feel you can run comfortably?

8-18b models run very easily alongside other stuff.

18-30b models run, but I have to close other graphics- and RAM-heavy stuff while using them.

Above 30b is hit and miss and often gives my PC a seizure.
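For a rough sense of where those cutoffs land on a 16 GB card like the 4080: a quantized model's weights take roughly params × bits / 8 bytes, plus some overhead for the KV cache and runtime. A back-of-the-envelope sketch (rule of thumb only; real usage varies with quantization and context length):

```python
# Rough VRAM estimate: weights take (params * bits / 8) bytes,
# plus ~1-2 GB for KV cache and runtime overhead. Rule of thumb only.
def vram_gb(params_billions: float, bits: int = 4, overhead_gb: float = 1.5) -> float:
    weights_gb = params_billions * bits / 8  # e.g. 18B at 4-bit ~= 9 GB of weights
    return weights_gb + overhead_gb

for size in (8, 18, 30, 70):
    print(f"{size}B @ 4-bit: ~{vram_gb(size):.1f} GB")
```

A 30b model at 4-bit lands right around 16 GB, which is why it's hit and miss on a 4080.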

Thanks. Problem is that I don't have a desktop computer, so I was hoping maybe I can get hardware that would just sit on the network with a VPN.

I’ll check and see if I can get my hands on some used hardware.

You can run some smaller LLMs even with an integrated GPU or purely on CPU. Something like llama3.2:3b is still useful, and you'll get a feel for things before you can run the larger models.
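There's also an official Python client if you'd rather script it than chat (pip install ollama). A minimal sketch, assuming the Ollama service is running and you've pulled the model with "ollama pull llama3.2:3b":

```python
# Tiny sketch using the official ollama Python client (pip install ollama).
# A 3B model like this runs tolerably even on CPU alone.
import ollama

reply = ollama.chat(
    model="llama3.2:3b",
    messages=[{"role": "user", "content": "What does a CBC blood test measure?"}],
)
print(reply["message"]["content"])
```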

That is what I do on my laptop!