Yes. Are there truly uncensored models? (That are easy to access for normies like me…)


Discussion

It's pretty easy with a GPU and LM Studio. Download the models from Hugging Face; it's all point and click. Local models are significantly less powerful than the big hosted ones, but still useful.
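If you prefer the command line over point-and-click, something like this works; the repo and file names below are just examples, substitute whatever model you actually want:

```shell
# Install the Hugging Face CLI, then fetch a GGUF build of a model.
pip install -U huggingface_hub

# Example repo/file; pick your own model and quantization on the model page.
huggingface-cli download bartowski/Meta-Llama-3.1-8B-Instruct-GGUF \
    Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf --local-dir ./models
```

The downloaded .gguf file can then be loaded by LM Studio, jan, llama.cpp, etc.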

Disconnect from the internet and then run them ensuring nothing leaks out.
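On Linux you don't even have to unplug; you can launch the model in an empty network namespace so the process physically can't reach the internet. A sketch, assuming util-linux's `unshare` and a llama.cpp-style `llama-cli` binary plus a model path that you'd adjust:

```shell
# -r maps you to root inside the namespace, -n creates a fresh network
# namespace with no interfaces, so no outbound connection is possible.
unshare -rn ./llama-cli -m ./models/model.gguf -cnv
```

Anything that needs a client talking to a local server would have to run inside the same namespace, since its loopback is isolated from the host.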

This is most likely overkill; I haven't seen an LLM server/client that collects telemetry yet.

I'd rather not use proprietary (closed-source) apps like LM Studio.

What about jan.ai?

This looks interesting. I'm installing it and pulling the Llama 3.2 3B Instruct Q8 model.

Probably not gonna spend a lot of time playing with it, but maybe I can teach it to analyse my code and help me focus my refactoring on the messiest parts.

Qwen2.5 Coder is really good for coding; you can even try setting it up in Cursor or something similar.
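Most of these local servers (LM Studio, jan, ollama) expose an OpenAI-compatible endpoint, so wiring a tool up to them is mostly a matter of pointing at the right base URL. A minimal stdlib-only sketch — the port 1234 is LM Studio's default, and the model name is just an example, adjust both for your setup:

```python
import json
import urllib.request

def build_chat_request(prompt, model="qwen2.5-coder-7b-instruct",
                       base_url="http://localhost:1234/v1"):
    """Build an OpenAI-style chat completion request for a local server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,  # low temperature tends to suit code tasks
    }
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

if __name__ == "__main__":
    req = build_chat_request("Point out the messiest parts of this function: ...")
    # Requires the local server to be running and the model to be loaded.
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

The same request shape works against ollama (port 11434) or jan; only the base URL and model name change.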

It's not overkill if you want real assurance.

Also, why not via ollama?

I mean, you can always go the extra mile if you want, but it doesn't matter as much because they are offline by design.

LM Studio was proposed in another comment, that’s why I mentioned it

In ollama I don’t like that you can’t choose your own quantization level/method (or I just haven’t figured it out). Nevertheless it’s a great server

It kinda depends on what you're looking for.

You can look into abliterated models; I think those are the ones most worth trying.

You can also take a look at Big-Tiger-Gemma2.