They call it a “new trick,” but it essentially comes down to restricting AI model releases to closed and censored versions. https://www.wired.com/story/center-for-ai-safety-open-source-llm-safeguards/


Discussion

The TL;DR is that they lock down certain parameters to prevent them from being fine-tuned. You might call the resulting model “partially open-weight.” It also reduces overall accuracy.
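As a loose illustration of the idea (this is not the paper's actual technique, just the general concept of making some weights untrainable), a fine-tuning step can simply skip any parameter marked as locked. All names and values below are made up for the example:

```python
# Sketch: "locking" a subset of parameters by skipping them during
# a gradient update, so fine-tuning cannot change them.
# Hypothetical example, not the method from the linked article.

def sgd_step(params, grads, locked, lr=0.5):
    """One SGD update that leaves any parameter index in `locked` unchanged."""
    return [
        p if i in locked else p - lr * g
        for i, (p, g) in enumerate(zip(params, grads))
    ]

params = [1.0, 2.0, 3.0]
grads = [1.0, 1.0, 1.0]
locked = {1}  # parameter 1 is "locked" against fine-tuning

new_params = sgd_step(params, grads, locked)
print(new_params)  # parameter 1 keeps its original value
```

In a real framework this would amount to freezing tensors (or masking their gradients) so optimizer updates never touch them, which is why the result is only “partially” open-weight.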

Essentially, a method to neuter LLMs in the name of censorship. Uncensored versions be damned, I guess.

Yeah. It basically turns an open-weight model into a “semi-open-weight” model where some weights are locked against training. I’m certain there will eventually be regulatory pressure to add these limits to open models. I wouldn’t consider them open anymore, though, since there would be limits on what can be fine-tuned.