It's not magical though; you still have to ensure there's sufficient context for it to succeed. If you called up John Carmack, would he have the tools to answer your question?

Discussion

Does deepseek-r1 work with goose? I'm about to try.

Dang, didn't work.

It's still not magical. I also think the reasoning part is wasted on goose, since there's already a feedback loop.

Mistral Small just dropped, and the 22B is supposed to support function calling.
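Haven't wired it into goose yet, but you can sanity-check tool support straight against Ollama's /api/chat endpoint first. A minimal sketch, assuming you've already pulled mistral-small; get_weather is a made-up tool just for the probe:

```python
import json
import urllib.request

# Probe a local Ollama server for function-calling support.
# Assumes the default Ollama port and that `ollama pull mistral-small`
# has already been run; get_weather is a made-up tool for this test.
payload = {
    "model": "mistral-small",
    "messages": [{"role": "user", "content": "What's the weather in Toronto?"}],
    "stream": False,
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
}

req = urllib.request.Request(
    "http://localhost:11434/api/chat",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    message = json.load(resp)["message"]

# Tool-capable models return structured tool_calls instead of prose.
print(message.get("tool_calls", "no tool call returned"))
```

If the model actually supports tools, the reply carries a structured tool_calls list rather than a plain-text answer, which is what goose needs to drive its loop.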

What's the best local model to use?

Nothing yet. They're getting very close this month.

I run 30B and 40B models on my RTX 3090 Ti with Ollama. Most of them run fine, as fast as ChatGPT. Gonna need big improvements to go larger though, even with 24 GB of VRAM. 72B is out of the question currently.

I started with a P40, then added a 3090. 48 GB is enough to run 70B models, but it might be time to add a 4090 as well.
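For anyone wondering where those limits come from, here's the rough napkin math; a sketch only, since real usage also depends on the quantization level, context length, and KV cache:

```python
# Back-of-the-envelope VRAM needed just for quantized model weights.
# Real usage runs higher: KV cache, activations, and runtime overhead add up.
def weight_vram_gb(params_billion: float, bits_per_weight: float) -> float:
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

for params in (30, 40, 70, 72):
    gb = weight_vram_gb(params, bits_per_weight=4.5)  # roughly a Q4_K-style quant
    print(f"{params}B at ~4.5 bpw: ~{gb:.1f} GB of weights")

# 30B/40B weights squeeze onto a 24 GB card; 70B needs close to 40 GB,
# which is why a P40 + 3090 (48 GB total) works but a lone 24 GB card doesn't.
```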

Some have said this is a fine-tuned version of Llama, not the actual DeepSeek model.

That's correct. Tough to run a 400B+ model locally, though.

I'm not expecting to run a 400B+ model on my laptop, but if I want to use Llama, I can. Why fool others with this misleading name, then?