# Privacy AI Is Possible

Because of privacy concerns, I had long been reluctant to use LLMs. I started experimenting in 2023 because I realized this technology is coming either way: I can make use of it, or be left behind.

## Power

I started with ChatGPT, as everyone does, but soon stopped using it for political and privacy reasons. Sam Altman, co-founder of OpenAI, is also a co-founder of the Worldcoin project. Worldcoin is a cryptocurrency that requires individuals to scan their irises to identify themselves. It has been rolled out aggressively on the African continent, paying anyone who signed up $25 in exchange for their biometric data. In Kenya, Worldcoin became the subject of a 2025 court case and was rightfully ordered to delete all the data it had collected.

I don’t want to share my data with companies showing no regard for dignity and privacy, and taking advantage of unequal bargaining situations.

So no ChatGPT for me.

## Agency

Claude was the first tool I used on a regular basis. Since I had learned that conversations on ChatGPT's free plan might end up in Google Search, and that the same might be true of other models, I decided to subscribe to a paid plan on Claude.

I use it with a nym (a fake name and email), but of course my payment data is still associated with my account. That's why I kept looking for more private options.

The point for me is simple: I want to use AI, but I want to choose the terms. I don’t want “convenience” to mean “total surveillance.”

## Tools

### PayPerQ offers Bitcoin payments

nostr:npub16g4umvwj2pduqc8kt2rv6heq2vhvtulyrsr2a20d4suldwnkl4hquekv4h allows you to pay with Bitcoin over the Lightning Network, which improves your privacy because your real name is not associated with your searches. It offers a variety of LLMs for chat, image, video, audio, and DeepResearch, which makes it easy to experiment. Having so many models in one place also multiplies my experiments, because I want to know what different models produce and which one is best.

I think it is essential to find out which tools are the right ones for your needs. Honestly, I haven’t found mine yet.

I like Claude Sonnet 4.5 for editing texts. DeepResearch is incredible at doing what its name says, although the depth of the results can be overwhelming. Z.AI's GLM 4.7 was great for strategic thinking, but then it fell short of my expectations in text editing.

PayPerQ hides your identity in the purchasing process, but your prompts and conversations still end up with the companies behind the models. I am not against them learning what I ask or the corrections I make; AI makes a lot of mistakes and has a lot to learn from us. I actually want LLMs to crawl my work, but I don't want them to save every little thing I do and mix it up with my private questions.

### Maple AI: privacy from sign-up to LLMs

nostr:npub10hpcheepez0fl5uz6yj4taz659l0ag7gn6gnpjquxg84kn6yqeksxkdxkr is the best solution I have found. It runs on open-source code and open models. It says it never uses your data to train AI, doesn't log your chats, practices zero data retention, and accepts Bitcoin payments. It offers many models (including OpenAI's GPT-OSS: yes, OpenAI, but in a private way).

Maple AI states that communications are encrypted locally on your device before being transmitted, that their servers can’t read your data, and that even during processing the pipeline is designed with privacy as the priority.
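What Maple describes is essentially end-to-end encryption: the prompt is encrypted on your device, so the server only ever handles ciphertext. A toy sketch of that data flow in Python, using a one-time-pad XOR purely for illustration (this is not Maple's actual code; real systems use authenticated ciphers such as AES-GCM, and every name below is made up):

```python
import secrets

def encrypt_locally(prompt: str) -> tuple[bytes, bytes]:
    """Encrypt a prompt on-device; the key never leaves the device."""
    data = prompt.encode("utf-8")
    key = secrets.token_bytes(len(data))            # random one-time pad
    ciphertext = bytes(d ^ k for d, k in zip(data, key))
    return ciphertext, key                          # only ciphertext is transmitted

def decrypt_locally(ciphertext: bytes, key: bytes) -> str:
    """Recover the plaintext using the locally held key."""
    return bytes(c ^ k for c, k in zip(ciphertext, key)).decode("utf-8")

ct, key = encrypt_locally("my private question")
print(decrypt_locally(ct, key))
```

The point of the sketch is only the shape of the pipeline: whatever the server stores, it is unreadable without a key that never left your device.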

I want AI as a tool, not as a trap.

#Daily #AI

Current AI development is dominated by actors who can afford to scale, locking innovation behind capital, infrastructure, and centralized power. This leaves little room for individuals and communities who want to build competent models but cannot compete with Wall Street's "scale is all you need" doctrine.

What we need are decentralized AI systems that are built collectively, owned collectively, and designed from the ground up to ensure user privacy.

Both the model architecture and the training data should be fully transparent, while the model weights could be monetized to reward contributors.

This creates a transparent, community-driven free-market ecosystem, where users decide which projects to fund and support, aligning incentives with innovation.

## Discussion

I like this point of view.

Perhaps this is a really uninformed statement, but aren't people already able to invest in or donate to the companies creating open-source models like DeepSeek?

Yes, it is possible. The type of organization developing the project may vary. Companies such as DeepSeek and Meta provide the model architecture and weights, ready for use.

That said, the process is not fully transparent: as an investor in this context, you do not know which data the model was trained on, nor for what purpose. Were training prompts used to enforce certain behaviors? To bias the model against specific ideals?

This should be auditable, in my opinion. Then, if fine-tuning with private data is required afterward, each user providing that data should have their own secure copy of the model.

I see where you are going. I agree with this.

The problem I see is that these companies are ultimately money-driven, even if they open their models.

They aren't free to make everything public because they are tied to their monetary vision, while the people who would have no problem with that don't have the resources to create a model in the first place.

New models should be developed by small groups, but the problem is: how do they get started with little to no money?