Replying to ABH3PO

https://ollama.com/download

then "ollama pull codestral"

then "ollama serve"

Then download Continue (the VS Code extension):

https://github.com/continuedev/continue
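If you prefer the terminal, it can also be installed through the VS Code CLI (extension ID assumed from the Marketplace listing, so double-check it):

    code --install-extension Continue.continue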

You might have to configure Continue with the right model names: press Ctrl+Shift+P, search for "Continue", and you should see a configure option.
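That option opens Continue's config.json. A minimal sketch of the models section pointing at the local Ollama server (field names follow Continue's Ollama provider as I remember it, so verify against the file Continue generates):

    {
      "models": [
        {
          "title": "Codestral",
          "provider": "ollama",
          "model": "codestral"
        }
      ]
    }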

Use "ollama list" to see all downloaded models.
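The output is a small table along these lines (values vary per machine, so they're elided here):

    NAME                ID    SIZE    MODIFIED
    codestral:latest    …     …       …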

RIP bandwidth for a bit. 😂


Discussion

You can save a lot of bandwidth later on by not querying hosted LLMs 😛

I'm impressed with the performance. FWIW, this is on an M3 Max with 48 GB, so maybe it's just the machine.

Results are very coherent and good so far. Now to try it with VS Code.

Inference is cheap. Fine-tuning is not that expensive either. NVIDIA GPUs have a field day with local LLMs.
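As an illustration of how cheap local inference is to wire up, here's a minimal sketch against Ollama's HTTP API (the /api/generate endpoint and its model/prompt/stream fields come from Ollama's API docs; the prompt itself is just an example):

    curl http://localhost:11434/api/generate -d '{
      "model": "codestral",
      "prompt": "Write a function that validates a Bech32 address.",
      "stream": false
    }'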

Hmm. Wondering if fine-tuning this for Bitcoin-related code bases would be a good idea.

Or you can use https://unleashed.chat. I wonder if they let us download their weights.