gpt-oss:20b / nemotron-nano-3 / devstral-small-2. Also, I love the Qwen reasoning models (like qwen3:4b), and the Ministral models are great in their various sizes.
Oh, that's really interesting. How has opencode been performing with such small models? I assume it would give you a really difficult time. At least I had a bad time with Zed running Qwen3 30B-A3B, if I'm not mistaken.
Also, you should think about moving off Ollama, considering their business model and the fact that they've essentially been a wrapper over llama.cpp without ever acknowledging it. You can also get a big performance boost by running llama.cpp directly, and I assume you're technically literate enough to figure that out (if not, Ollama is still a fine choice).
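If it helps, here's a rough sketch of what "running llama.cpp directly" can look like: start llama-server with a GGUF model, then point any OpenAI-compatible client (opencode included) at its local endpoint. The model file, port, and GPU layer count below are placeholders, not a recommendation.

```python
# Rough sketch, assuming llama-server (from llama.cpp) is already running, e.g.:
#   llama-server -m ./qwen3-4b-q4_k_m.gguf --port 8080 -ngl 99
# (model path, port, and -ngl value are placeholders for your own setup)
from openai import OpenAI

# llama-server exposes an OpenAI-compatible API under /v1 on the chosen port
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="local",  # llama-server serves whichever model it was launched with
    messages=[{"role": "user", "content": "Say hi in one sentence."}],
)
print(resp.choices[0].message.content)
```

Same idea for opencode or any other agent: set its base URL to the llama-server endpoint instead of Ollama's, and you skip the extra layer entirely.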