Maybe this? I don't know how you can be sure your data remains confidential. Surely the data has to be decrypted for the model to read and process it, and if that isn't done locally, how can you ensure it isn't intercepted? Being able to access it anonymously is good, but then you can't train it on your specific information, and it would be hard not to identify yourself in your prompts. I need to look into this more.

nostr:nevent1qqs8g3g3vpu3wucmy8a04shkge8q53axnzfjrm2e737wf8dzsvc0z2gpzemhxue69uhhyetvv9ujuurjd9kkzmpwdejhgq3q8lzls4f6h46n43revlzvg6x06z8geww7uudhncfdttdtypduqnfsxpqqqqqqz0n4v33


Discussion

It's interesting for sure.

As far as I understand, they let you run a bunch of open-source models in an allegedly private environment.

If your PC can handle the open-source models, you could also download them locally from https://ollama.com/search and just use Ollama directly instead of going through a proxy.
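If you do go the local route, Ollama runs an HTTP server on your own machine (port 11434 by default), so prompts never leave it. A minimal sketch of talking to it from Python — the model name here is just an example, pick whatever your hardware can handle:

```python
import json
import urllib.request

# Ollama's default local endpoint; nothing is sent over the internet.
OLLAMA_URL = "http://localhost:11434/api/generate"

def ask_local_model(model: str, prompt: str) -> str:
    """Send a prompt to a locally running Ollama server and return its reply."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        OLLAMA_URL,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires Ollama running and the model pulled first, e.g. `ollama pull llama3.2`):
# print(ask_local_model("llama3.2", "Is my data private here?"))
```

The point is that both the decryption concern and the interception concern go away: the plaintext prompt only ever travels over localhost.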

If your hardware can't meet the requirements for the gpt-oss-120b model, you can use Maple Proxy instead, which will allegedly be more private than using ChatGPT directly.

It solves the privacy problem - and since these models are open-source, the weights can't be nerfed or restricted out from under you.

You'll probably avoid answers like this one 😂

However, it doesn't solve the censorship part - the models themselves are still trained with whatever restrictions they shipped with.

And these proxies are sort of like VPNs - probably half of them are controlled by the state and no one uses the other half - so if you have the means and value privacy, you should probably run the model locally.

For now, the Controllers probably aren't too concerned because for the vast majority of people - convenience > privacy.

Very few will pay for something they can access for free.