You know, the cloud-hosted official DeepSeek was pretty fast up until their R1 model caused the tech stock market to crash and made international news… Thankfully my new 64 GB RAM / 20-core GPU M4 Pro Mac mini arrives tomorrow, so I'll be able to just run it locally.
Discussion
Please train another one without all the censorship so we can ask some, you know, funny questions 😂🫣🤣
I suspect the censorship stuff is just a RAG layer, and I bet within a few days somebody will have released a version with that undone.
And to be fair, every LLM weights file released by American companies has LOTS of censorship too… it's just not about things the Communist Party of China cares about.
🤔
Good point
Though I'm not really sure it's RAG.
I think all LLM companies in China have to process the raw training data according to government requirements.
Sometimes it shows the censored information and then redacts it, which makes it look like the filter was added later.
The AI is learning, so maybe it will need to be steadily recensored, or something.
It seems that someone has already jailbroken it:
Yeah, every country has a censorship rulebook, and the AI holds differing opinions, depending upon who trained it.
They were overrun by people logging in to try it out.
https://x.com/carrigmat/status/1884244369907278106
This guy recommends 768 GB of RAM for running it locally.
Check Hugging Face for an uncensored version.
Different HW, similar specs... You will not be able to get the full-precision model running on it, only a quantized version. For the full model they used a cluster of 8× M4 Pros.
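The 768 GB figure squares with a quick back-of-the-envelope calculation from DeepSeek-R1's published 671B parameter count. A minimal sketch (the helper function and the ~20% overhead factor for KV cache and activations are illustrative assumptions, not from the thread):

```python
# Rough memory estimate for holding a model's weights at different
# precisions. 671e9 is DeepSeek-R1's published total parameter count;
# the 1.2x overhead factor (KV cache, activations) is an assumption.

def model_memory_gb(params: float, bits_per_weight: int, overhead: float = 1.2) -> float:
    """Approximate GB needed: params * bytes-per-weight, plus overhead."""
    return params * bits_per_weight / 8 / 1e9 * overhead

PARAMS = 671e9  # DeepSeek-R1 total parameters

for bits in (16, 8, 4):
    print(f"{bits}-bit: ~{model_memory_gb(PARAMS, bits):,.0f} GB")
```

At 8-bit that lands around 800 GB, which is why a 768 GB box is roughly the floor for the full model, while a 64 GB Mac mini can only hold an aggressively quantized or distilled variant.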
based
what forbidden prompts will you try?