Ollama running ✅
Open WebUI running ✅
Anyone running DeepSeek? If so, what size model and hardware?
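If anyone wants to sanity-check their Ollama setup against a DeepSeek model, here's a minimal Python sketch. It assumes Ollama's default endpoint (localhost:11434) and that you've already pulled a DeepSeek tag such as deepseek-r1:14b; swap in whatever tag and size you're actually running.

```python
# Minimal sketch: query a DeepSeek model served by a local Ollama instance.
# Assumes the default Ollama endpoint and that a DeepSeek tag has been pulled,
# e.g. deepseek-r1:14b -- adjust the model name to match your setup.
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default API endpoint

response = requests.post(
    OLLAMA_URL,
    json={
        "model": "deepseek-r1:14b",   # swap for whichever size you pulled
        "prompt": "Explain mixture-of-experts in one paragraph.",
        "stream": False,              # return a single JSON object instead of a stream
    },
    timeout=300,
)
response.raise_for_status()
print(response.json()["response"])
```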
Hi mate. I’m running 14B on a MacBook Pro M3 Max. It’s pretty fast, but I should probably run the 32B.
I’m also running Llama 70B; it works, but it’s a bit slow.
Yeah, I think my setup should cope with 32B, maybe 70B. Will give them a try.
Running it via LM Studio with the 32B model. It seems much more performant than the Ollama setup I tried before.
Nice. Not tried LM Studio for a while.
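For anyone curious: LM Studio can expose an OpenAI-compatible server (by default at http://localhost:1234/v1) when you start its local server, so you can hit it from any OpenAI-style client. Rough sketch below; the model identifier is an assumption, so use whatever name LM Studio shows for the DeepSeek 32B model you loaded.

```python
# Sketch: talk to LM Studio's local OpenAI-compatible server.
# Assumes the server is started on the default port; the API key is ignored locally.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

completion = client.chat.completions.create(
    model="deepseek-r1-distill-qwen-32b",  # hypothetical identifier; check LM Studio's model list
    messages=[{"role": "user", "content": "Summarize the CAP theorem in two sentences."}],
)
print(completion.choices[0].message.content)
```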
I’m running it on my Windows machine, but I’d love to have my Mac connect over my local network and use the DeepSeek instance running on Windows. Does anyone know a GUI on the Mac that could do that?
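Not a GUI recommendation, but as a rough illustration of what that connection looks like: if the Windows box runs Ollama bound to the LAN (setting the OLLAMA_HOST environment variable to 0.0.0.0 on Windows so it listens beyond localhost), the Mac can talk to it directly. The IP address below is a placeholder; substitute your Windows machine's LAN address. A GUI like Open WebUI on the Mac should be able to point at the same base URL.

```python
# Sketch of the Mac side, assuming Ollama on the Windows PC is reachable over the LAN.
# 192.168.1.50 is a placeholder -- use your Windows machine's actual IP address.
import requests

WINDOWS_HOST = "http://192.168.1.50:11434"  # placeholder LAN address of the Windows PC

resp = requests.post(
    f"{WINDOWS_HOST}/api/chat",
    json={
        "model": "deepseek-r1:14b",  # whichever DeepSeek tag is pulled on the Windows side
        "messages": [{"role": "user", "content": "Hello from the Mac!"}],
        "stream": False,
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```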