They call it a “new trick”, but it essentially comes down to restricting AI model releases to closed and censored versions. https://www.wired.com/story/center-for-ai-safety-open-source-llm-safeguards/
The TLDR is that they basically lock down certain parameters to prevent them from being fine-tuned. You might call the resulting model “partially open weight”. It also reduces overall accuracy.
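
For illustration only, here's a minimal sketch of the general idea of "locking down" parameters so a standard fine-tuning loop can't update them, using plain PyTorch on a toy model. The researchers' actual safeguard is more involved than simple freezing and isn't reproduced here.

```python
import torch
import torch.nn as nn

# Toy stand-in for an "open weight" model: two blocks of linear layers.
model = nn.Sequential(
    nn.Linear(16, 16),  # block 0
    nn.ReLU(),
    nn.Linear(16, 16),  # block 1
)

# "Lock down" block 0 by disabling gradients, so an ordinary
# fine-tuning loop leaves those weights untouched.
for param in model[0].parameters():
    param.requires_grad = False

# A standard fine-tuning step then only updates the unfrozen parameters.
optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3
)
x, y = torch.randn(4, 16), torch.randn(4, 16)
loss = nn.functional.mse_loss(model(x), y)
loss.backward()
optimizer.step()

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(trainable, "trainable parameters")
```

Note this kind of freezing is trivial to undo once you have the weights; per the post, the point of the published technique is to make the restriction harder to remove, at some cost to accuracy.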

