It's interesting for sure.
As far as I understand, they let you run a bunch of open-source models in an allegedly private environment.
If your PC can handle the open-source models, you could also download them locally from https://ollama.com/search and just use ollama directly instead of a proxy.
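If you go the local route, the basic ollama workflow is just two commands. (The model name below is only an example - pick whatever your hardware can actually handle from the search page.)

```shell
# Pull a model from the ollama registry
# (model name is an example - browse https://ollama.com/search for alternatives)
ollama pull gpt-oss:20b

# Chat with it entirely on your own machine
ollama run gpt-oss:20b
```

ollama also exposes a local HTTP API on port 11434, so other apps on your machine can talk to the model without anything leaving your network.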
If you can't meet the requirements for the gpt‑oss‑120b model, then you can use Maple Proxy and it will allegedly be more private than using ChatGPT directly.
Running locally solves the privacy problem - and since the weights are open, the model can't be silently nerfed or restricted after release.
You'll probably avoid answers like this one 😂

However, it doesn't solve the censorship part - whatever restrictions were trained into the weights stay there.
And these proxies are sort of like VPNs: probably half of them are controlled by the state, and no one uses the other half. So if you have the means and you value privacy, you should probably run the model locally.
For now, the Controllers probably aren't too concerned, because for the vast majority of people convenience > privacy.
Very few will pay for something they can access for free.