Incorrect.

If personal data is included in the prompt, OpenAI will still have access to it. This cannot be prevented: the query reaches their servers at some point, they can always save it, and they can always be compelled to produce it in court.

nostr:nevent1qvzqqqqqqypzp53tekcay5zmcps0vk5xe40jq5ewche7g8qx4657mtpe76a8dltwqy88wumn8ghj7mn0wvhxcmmv9uq32amnwvaz7tmwdaehgu3wdau8gu3wv3jhvtcpz9mhxue69uhkummnw3ezuamfdejj7qpqja9aeg78raqxk962scgt56af8dd22k9fn9z3rvp90c4nps0usdwsdgj52r


Discussion

Isn't your point obvious? The queries obviously go through to OpenAI, and we aren't trying to say otherwise. The point we are making is that, at least when you use us, your PII isn't directly and incontrovertibly connected to your queries.

We never said we are offering perfect privacy. We said we are offering comparatively better privacy than using ChatGPT with a connected credit card. Do you disagree?

I may have come off a bit strong. I think you're offering better privacy than using OpenAI & co. directly, and I think your service is great and worth paying for.

But I still believe your service is not something that should be used for anything personal or anything you wouldn't want to get out. If you HAVE to use LLMs for anything personal, the only real solution is a local LLM running on your own machine.
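To be concrete, this is roughly what I mean by "local": a minimal sketch assuming llama-cpp-python is installed and you've already downloaded a GGUF model file yourself (the path and model name are just placeholders):

```python
# Minimal local-inference sketch: the prompt, including any personal data,
# never leaves this machine. Assumes `pip install llama-cpp-python` and a
# GGUF model file downloaded to a local path (illustrative names only).
from llama_cpp import Llama

# Load the model from disk; inference runs entirely in this process.
llm = Llama(
    model_path="./models/llama-3-8b-instruct.Q4_K_M.gguf",  # placeholder path
    n_ctx=4096,
)

# Ask a question that contains personal data without sending it anywhere.
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize these private notes: ..."}],
    max_tokens=256,
)

print(response["choices"][0]["message"]["content"])
```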

Naturally, the better solution is no LLMs, but that's just my personal opinion.

Agreed that some information is too sacred to give even to us. Unfortunately, spinning up a local LLM is hard for most people. Models running in trustless execution environments could be a great fit for both convenience and privacy, like nostr:nprofile1qyjhwumn8ghj7en9v4j8xtnwdaehgu3wvfskuep0dakku62ltamx2mn5w4ex2ucpxpmhxue69uhkjarrdpuj6em0d3jx2mnjdajz6en4wf3k7mn5dphhq6rpva6hxtnnvdshyctz9e5k6tcqyp7u8zl8y8yfa87nstgj2405t2shal4rez0fzvxgrseq7k60gsrx6zeuh5t.