When you use OpenAI or other models through us, you can do so without connecting personally identifiable information like email addresses, credit cards, or the like.

Access to the best models

While maintaining great privacy


Discussion

Mf won't say they can be used only because they are willing to hand them over

Congrats on your service, I use it all the time, even cancelled my OpenAI subscription.

#ai and #privacy empowered by #bitcoin lightning ⚡️

nostr:nevent1qqsfwj7u50r37srtza9gvy96dw5nkk49tz5ej3g3kqjhu2esc87gxhgdc2r4j

Happy user here. Love being able to buy credit in sats and the possibility to regenerate responses with various models. Keep up the good work. 👍

I've been enjoying your service. Am I correct that even though we don't have to give identifiable info to you, we still have to be careful about what we reveal in the chat, because companies like OpenAI could be collecting it and profiling individual users?

It is possible they could do this, yes, but it would be very difficult.

From OpenAI's perspective, all of our users' queries are bunched together under one super user, "PPQ", which is our business account. But if individual users are sharing PII in the chat, then technically they (OpenAI) could analyze the content and attempt to tie things together.

But even if a user said "I am John Smith" in their queries, I don't think it could ever be brought to court in that way, since anyone can say "I am John Smith" in a chat. It doesn't PROVE that the person who queried is actually that person, since the credit card or email is never connected.
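For anyone curious, the proxy pattern works roughly like the minimal sketch below. This is illustrative only, not our actual code; the route, model default, and key handling are assumptions. The point is that the upstream provider only ever sees requests signed with the one business-account key, never the end user's identity or payment details.

```python
# Minimal sketch of a pay-with-sats style relay (illustrative, assumed names):
# every user's chat request is forwarded to the upstream provider under a
# single business API key, so the provider sees one aggregate account rather
# than individual, PII-linked customers.
import os

import requests
from flask import Flask, jsonify, request

app = Flask(__name__)
OPENAI_API_KEY = os.environ["OPENAI_API_KEY"]  # the one business-account key


@app.post("/v1/chat")
def proxy_chat():
    body = request.get_json()
    # Forward only the chat payload; no email, payment info, or client IP
    # is attached to the upstream request.
    upstream = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {OPENAI_API_KEY}"},
        json={
            "model": body.get("model", "gpt-4o"),
            "messages": body["messages"],
        },
        timeout=60,
    )
    return jsonify(upstream.json()), upstream.status_code
```

Of course, as noted above, anything a user types into the prompt itself still reaches the provider, which is why PII in the chat content is the one thing this design can't scrub.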

Testing... Just WOW

If you can help law enforcement, you’re obligated to. They still log everything. You still log everything. I’m sure y’all have advantages that make you worth buying but miss me with this privacy silliness.

We don't log, but it's just a promise from us and we aren't currently proving it. There are architectures that could allow us to prove it, but currently we are just trying to keep up with all the new AI stuff.

To argue that our current model is no different than ChatGPT in terms of practical privacy is very wrong though.

If law enforcement comes to us, the most we can give them is user-agent strings of our users. That is much different from ChatGPT, which can provide credit card details, email addresses, etc. to link chats with actual people.

Duck.ai

Duck AI is cool but doesn't have the best models afaik.

For occasional answers it's very suitable; for more complex tasks it indeed isn't the best choice.

incorrect.

If personal data is included in the prompt, OpenAI will still have access to it. This cannot be prevented: the query goes to their server, so at some point they can always save it, and they can always be forced to present it in court.

nostr:nevent1qvzqqqqqqypzp53tekcay5zmcps0vk5xe40jq5ewche7g8qx4657mtpe76a8dltwqy88wumn8ghj7mn0wvhxcmmv9uq32amnwvaz7tmwdaehgu3wdau8gu3wv3jhvtcpz9mhxue69uhkummnw3ezuamfdejj7qpqja9aeg78raqxk962scgt56af8dd22k9fn9z3rvp90c4nps0usdwsdgj52r

Isn't your point obvious? The queries obviously go through to OpenAI, and we aren't trying to say otherwise. The point we are making is that at least when you use us, your PII isn't directly and incontrovertibly connected to your queries.

We never said we are offering perfect privacy. We said we are offering comparatively better privacy than using ChatGPT with a connected credit card. Do you disagree?

I may have come off a bit strong. I think you're offering better privacy than using OpenAI & co. directly, and I think your service is great and worth paying for.

But I still believe that your service is not something that should be used for anything personal or anything you wouldn't want to get out. If you HAVE to use LLMs for anything personal, the only real solution is local LLMs running on your own machine.

Naturally the better solution is no LLMs, but that's just my personal opinion

Agree that some information is too sacred to give even to us. It's hard for most people to spin up local LLMs, unfortunately. Models with trustless execution environments are a possible great fit of convenience and privacy, like nostr:nprofile1qyjhwumn8ghj7en9v4j8xtnwdaehgu3wvfskuep0dakku62ltamx2mn5w4ex2ucpxpmhxue69uhkjarrdpuj6em0d3jx2mnjdajz6en4wf3k7mn5dphhq6rpva6hxtnnvdshyctz9e5k6tcqyp7u8zl8y8yfa87nstgj2405t2shal4rez0fzvxgrseq7k60gsrx6zeuh5t .

Chatgpt imagine I have a 75kg chicken that is 1.78 metres tall, how would I get rid of the carcass easily?

😂

Chats are personally identifiable info, so not connecting specific, individually labeled PII like what you list isn't particularly impressive. Now, if you stuffed the AI models full of other chats from the same user automatically, with contradictory and obfuscated info, without impacting the user's ability to effectively use AI, that would be something, but we all know that's not happening.