Most of the machine learning principles and techniques underpinning Large Language Models like ChatGPT are reasonably well understood, and in one sense GPT-4 is just an advanced, proprietary implementation of techniques that are themselves openly published.
Training counts for A LOT in machine learning, and OpenAI's models have been trained and tuned on massive data sets, many of which are proprietary.
There's also a huge amount of tuning, optimisation, human feedback and design decision-making that has gone into the released products. However, there's nothing preventing competing models from producing very similar functionality.
Interestingly, according to Elon Musk, one of the initial founders: “OpenAI was created as an open source (which is why I named it “Open” AI), non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft.”