I want to run a local LLM.
I have a good-ish computer with a new GPU.
What do you suggest is my best option here?
#asknostr
Install ollama, run Gemma3
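If you want to hit it from code later rather than just the CLI, something like this works against Ollama's local HTTP API. Just a sketch, assuming the default port 11434 and that you've already pulled gemma3:

```python
# Minimal sketch: query a local Ollama server over its HTTP API.
# Assumes Ollama is running on the default port (11434) and the
# model has already been pulled, e.g. `ollama pull gemma3`.
import json
import urllib.request

OLLAMA_URL = "http://127.0.0.1:11434/api/generate"

payload = json.dumps({
    "model": "gemma3",   # model name as Ollama knows it
    "prompt": "Explain what a local LLM is in one sentence.",
    "stream": False,     # single JSON response instead of a stream
}).encode("utf-8")

req = urllib.request.Request(
    OLLAMA_URL,
    data=payload,
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    result = json.loads(resp.read())

print(result["response"])
```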
ollama dolphin
And do these need internet while they run? I’m assuming no.
I wonder if they can search the internet for the latest news, or read a page if I gave it a link?
You can install Open WebUI, which can do web searches with AI. You can also install Goose for agentic behaviour, and it works with Ollama, although I'm not sure if you can do searches with it yet, or whether there's a plugin for that.
Although I'm pretty sure you can easily vibe code something that does this 😅
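Something like this would be the lazy version: fetch the page yourself and hand the text to the model. Just a rough sketch, assuming the default Ollama port and gemma3 pulled; the URL is only a placeholder:

```python
# Rough sketch: fetch a web page and ask a local model about it.
# Assumes Ollama on 127.0.0.1:11434 with gemma3 pulled;
# the URL below is just a placeholder example.
import json
import urllib.request

PAGE_URL = "https://example.com"   # placeholder link to summarise
OLLAMA_URL = "http://127.0.0.1:11434/api/generate"

# Grab the raw page (a real version would strip the HTML properly).
with urllib.request.urlopen(PAGE_URL) as page:
    page_text = page.read().decode("utf-8", errors="ignore")[:8000]

payload = json.dumps({
    "model": "gemma3",
    "prompt": "Summarise this page in three bullet points:\n\n" + page_text,
    "stream": False,
}).encode("utf-8")

req = urllib.request.Request(
    OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```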
OK, so I want to connect stacks/MKstacks to it.
I see that it's running on 127.0.0.1:11434.
Do you know if there's anything specific I need to do to get it running?
I've not really used MKstacks yet. Do you use it from a website? If so, there's no real solution yet for connecting websites to your Ollama instance.
nostr:nprofile1qqsfkvy0m2gwzj5mswn0hxhyqlm3j7fv0h4pwaqjt4a28xuukmrnzrgpr4mhxue69uhkummnw3ezucnfw33k76twv4ezuum0vd5kzmp0yxlhy8 and I are working on one solution as we speak, and it's almost complete; we're just testing things out now.
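In the meantime, a quick way to check that your 127.0.0.1:11434 endpoint is reachable from a script is to list the local models via /api/tags. A minimal sketch, assuming the default address:

```python
# Sanity-check sketch: confirm the local Ollama endpoint is reachable
# and see which models it has, before pointing another app at it.
# Assumes the default address 127.0.0.1:11434.
import json
import urllib.request

with urllib.request.urlopen("http://127.0.0.1:11434/api/tags") as resp:
    tags = json.loads(resp.read())

for model in tags.get("models", []):
    print(model["name"])
```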
Both apps are on device. I’ll ask Alex.
Yup.
Great question, thanks. Does anyone have experience setting up LLMs on a small cluster of computers? I have a couple of old Dells and was wondering if I could use them to run a local LLM.
#asknostr
Well, I'm using Ollama on an Apple Mac and Open WebUI on my HP EliteDesk. It's doing well and is fast too, leveraging RAM for the browser interface.