This sounds like cope.
Possibly they fine-tuned DeepSeek on outputs from an OpenAI model (cheaper than using human annotators), but it makes no sense for that to be the primary approach when self-supervised pretraining and RL are much more efficient. Also, DeepSeek beats OpenAI's models on several benchmarks, which you can't achieve purely by distilling a teacher model: a student trained only to imitate the teacher's outputs is, at best, bounded by the teacher's own performance.
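To make the "bounded by the teacher" point concrete: in distillation the student's only training signal is the teacher's output distribution, so it is pushed toward reproducing the teacher, not exceeding it. A minimal sketch of a standard distillation loss (hypothetical PyTorch code, not anything DeepSeek published):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions.

    The optimum is reached when the student exactly matches the teacher,
    which is why pure distillation can't explain beating the teacher.
    """
    s = F.log_softmax(student_logits / temperature, dim=-1)
    t = F.softmax(teacher_logits / temperature, dim=-1)
    # batchmean KL, scaled by T^2 as in Hinton et al. (2015)
    return F.kl_div(s, t, reduction="batchmean") * temperature ** 2
```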
More likely they made a genuine technical breakthrough and the US "AI czar" is seething.