I decided to try running LLMs locally last night, but ran into an issue with agent mode:
I set up LM Studio and tested Qwen2.5 Coder 14B, but agent mode didn't work in Continue. Switched to DeepSeek Coder V2 16B, still no luck. Thought, "Huh, that's odd."
Then I installed Ollama and pulled Qwen from there, and agent mode worked right away. But with DeepSeek, Continue still said agent mode was unsupported.
I couldn't figure out what actually makes the difference between the two backends, so I'm kinda confused.
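In case it helps anyone poking at the same thing: my understanding is that Continue turns agent mode on or off based on whether the model and the backend serving it advertise tool calling, and for Ollama that comes from the model's chat template. A rough, hedged way to check is to ask Ollama's /api/show endpoint whether the model reports a tools capability or has a tools section in its template; the model names below are just examples.

```python
import requests

# Hedged sketch: ask a local Ollama server whether a model looks tool-capable,
# which is roughly what agent mode needs. Assumes Ollama's default port;
# the model names below are only examples.
OLLAMA_URL = "http://localhost:11434/api/show"

def supports_tools(model: str) -> bool:
    # Newer Ollama versions accept "model"; older ones use "name",
    # so send both keys to be safe.
    resp = requests.post(OLLAMA_URL, json={"model": model, "name": model}, timeout=10)
    resp.raise_for_status()
    info = resp.json()
    # Recent releases report a "capabilities" list; otherwise fall back to
    # scanning the chat template for a tools block.
    caps = info.get("capabilities") or []
    template = info.get("template", "")
    return "tools" in caps or ".Tools" in template

for m in ["qwen2.5-coder:14b", "deepseek-coder-v2:16b"]:
    print(m, "->", "tool calling likely supported" if supports_tools(m) else "no tool support detected")
```

If the DeepSeek template really has no tools section, that would fit the "agent mode unsupported" message, but that's my guess, not a confirmed explanation.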
What are you using locally, and how’s it working for you?
#LLM #LocalLLM #Qwen #DeepSeek #Ollama #AI #AgentMode #OpenSourceAI #MachineLearning #LMStudio #Continue #vscode #programming