this is most likely overkill; I haven't seen an LLM server/client that collects telemetry yet
I would prefer not to use proprietary (closed-source) apps like LM Studio
What about jan.ai?
this looks interesting, I'm installing it and pulling the Llama 3.2 3B Instruct Q8 model
probably not gonna spend a lot of time playing with it, but maybe I can teach it to analyse my code and help me focus my refactoring on the messiest parts
Not overkill if you want assurance of something.
Also, why not via ollama?
I mean, you can always go the extra mile if you want, but it doesn't matter as much because they are offline by design
LM Studio was proposed in another comment, that’s why I mentioned it
With Ollama, I don't like that you can't choose your own quantization level/method (or maybe I just haven't figured it out). Nevertheless, it's a great server
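For what it's worth, Ollama does let you pick a quantization via the model tag rather than a separate flag, which is easy to miss. A sketch, assuming the tags listed on the Ollama model library page (the exact tag names below may differ for other models):

```shell
# List the available tags for a model on https://ollama.com/library/llama3.2
# then pull the specific quantization you want by its tag suffix.

# Default pull (Ollama picks a default quant, typically Q4):
ollama pull llama3.2

# Explicit quantization via the tag, e.g. Q8_0 of the 3B instruct variant:
ollama pull llama3.2:3b-instruct-q8_0

# Run that exact quant:
ollama run llama3.2:3b-instruct-q8_0

# See which quants you have locally:
ollama list
```

So the quant is baked into the tag name rather than chosen at load time; if a model page doesn't publish a tag for the quant you want, you'd have to import your own GGUF via a Modelfile instead.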