I tested a few LLMs locally on an M4 Mac Mini, asking both coding and general questions. I stuck to 14B models (or the closest available parameter size), since that's what fits comfortably in my 24GB of RAM:
DeepSeek R1-14B ended up being my favorite model overall.
Beyond that, I didn't have many alternatives with tool support anyway: mostly Llama 3.1-8B and Qwen 2.5-14B. Both are decent, but Llama 3.1 feels slightly better to me at the moment.
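For what it's worth, here's roughly how I sanity-check tool support on a local model. This is just a sketch: it assumes the models are served through Ollama with the `ollama` Python client installed (I haven't said which runner I use above), and `get_weather` is a made-up tool whose only purpose is to see whether the model emits a structured tool call.

```python
# Minimal tool-calling smoke test, assuming the models run through Ollama
# (pip install ollama; pull the models first, e.g. `ollama pull llama3.1:8b`).
import ollama

# get_weather is a hypothetical example tool; the point is only to check
# whether the model answers with a structured tool call instead of plain text.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
]

for model in ("llama3.1:8b", "qwen2.5:14b"):
    response = ollama.chat(
        model=model,
        messages=[{"role": "user", "content": "What's the weather in Berlin right now?"}],
        tools=tools,
    )
    # Models with tool support should include a tool_calls entry in the message.
    print(model, response["message"])
```

In my experience, models without tool support either come back with an error or just answer in plain text, so the difference is easy to spot.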
If you have any suggestions, I'd love to hear them!
#llm #deepseek #qwen #llama