I decided to try running LLMs locally last night, but ran into an issue with agent mode:

I set up LM Studio and tested Qwen2.5-Coder 14B: it didn't work in agent mode using Continue. Switched to DeepSeek Coder V2 16B, still no luck. Thought, "Huh, that's odd."

Then I installed Ollama and pulled Qwen from there, and agent mode worked right away. But with DeepSeek, it still said agent mode was unsupported.

I couldn't figure out what made the difference, so I was left kinda confused.

What are you using locally, and how’s it working for you?

#LLM #LocalLLM #Qwen #DeepSeek #Ollama #AI #AgentMode #OpenSourceAI #MachineLearning #LMStudio #continue #vscode #programming


Discussion

Update: Eser Özvataf confirmed this on X. Agent mode depends on the model's "tool use" (function calling) support: qwen2.5-coder supports it, but the DeepSeek models I downloaded did not.
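To make the "tool use" point concrete, here's a minimal sketch (Python) of the kind of request an agent-mode client sends to Ollama's `/api/chat` endpoint. The `read_file` tool and model tags are illustrative assumptions, not the exact payload Continue sends; the key point is the `tools` array, which models without tool-use support can't handle (Ollama rejects such requests for those models).

```python
import json


def build_chat_request(model: str, prompt: str) -> dict:
    """Build an Ollama /api/chat payload that declares a tool.

    Agent mode only works if the model was trained for tool use;
    for models without that capability, Ollama rejects payloads
    that include a "tools" array.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "tools": [
            {
                "type": "function",
                "function": {
                    # Hypothetical tool, for illustration only
                    "name": "read_file",
                    "description": "Read a file from the workspace",
                    "parameters": {
                        "type": "object",
                        "properties": {
                            "path": {
                                "type": "string",
                                "description": "Path of the file to read",
                            }
                        },
                        "required": ["path"],
                    },
                },
            }
        ],
        "stream": False,
    }


# This payload would be POSTed to http://localhost:11434/api/chat
payload = build_chat_request("qwen2.5-coder:14b", "List the files you need.")
print(json.dumps(payload, indent=2))
```

With a tool-capable model like qwen2.5-coder the response can contain `tool_calls`; with the DeepSeek builds above, the request fails instead of falling back to plain chat, which matches the "agent mode unsupported" message.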

https://openrouter.ai/docs/features/tool-calling

https://x.com/eserozvataf/status/1915687566294077878