Ollama running ✅

OpenWeb UI running ✅

Anyone running Deepseek? If so, what size model and hardware?
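For anyone who wants to try it, pulling one of the distilled DeepSeek-R1 models with Ollama looks roughly like this (the tag below is an example size; check the Ollama model library for what's currently available):

```shell
# Pull a distilled DeepSeek-R1 model — pick a size your RAM/VRAM can hold
ollama pull deepseek-r1:14b

# Chat with it interactively in the terminal
ollama run deepseek-r1:14b
```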


Discussion

Hi mate. I’m running 14B on a MacBook Pro M3 Max. It’s pretty fast, but I should probably run the 32B.

I’m also running Llama 70B, it works but it’s a bit slow.

Yeah, I think my setup should cope with 32B, maybe 70B. Will give them a try.

Running it via LM Studio, on the 32B model. It seems much more performant than Ollama, which I tried before.

Nice. Not tried LM Studio for a while.

I am running it on Windows, but I would love to have my Mac connect over my local network and use the DeepSeek instance running on the Windows machine. Does anyone know a GUI for my Mac that could do that?

I’m running OpenWeb UI in a Docker container on Linux (Ubuntu, KDE) that is accessible to other computers on the network.
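A minimal sketch of that kind of setup (the host port and volume name here are examples; adjust for your install):

```shell
# Run Open WebUI in Docker so other machines on the LAN can reach it.
# 3000 is an example host port; 8080 is the container's default port.
docker run -d -p 3000:8080 \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```

Other computers on the network then browse to `http://<server-ip>:3000`. If Ollama is running on a different box (e.g. the Windows machine above), point `OLLAMA_BASE_URL` at that machine's address and set `OLLAMA_HOST=0.0.0.0` on it so Ollama listens on the network rather than only on localhost.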

Amazing! Thanks so much Ben

Check out Venice.ai - the pro subscription allows you to choose from multiple models including DeepSeek. Try the free account first. Built with privacy in mind with data stored locally in your browser. 😎

Can it create bots like poe.com? I'm all in on that platform.