Relays have been behaving weirdly for a few days now.
I use two main desktop clients, nostr:nprofile1qythwumn8ghj7mn0wd68ytnsv9ex2ar09e6x7amwqyv8wumn8ghj7urjv4kkjatd9ec8y6tdv9kzumn9wsqzq5edsvxllcyuz0n4azc5tjp9wx8uz2cqq0mp6c0fqamjr3llly7tksuz3y and nostr:nprofile1qy88wumn8ghj7mn0wvhxcmmv9uq3qamnwvaz7tmwdaehgu3wd4hk6tcqyr6whrnz4hgngzuu4hxesc0xdxewjp7w556wpaln4jt5cyw8tzj35qj25jp. I see some posts on one platform and some on the other.
This note appears on Primal, but I can't reply, repost, or like it, and it isn't showing up at all on Jumble (or on iPhone nostr:nprofile1qy2hwumn8ghj7etyv4hzumn0wd68ytnvv9hxgqgdwaehxw309ahx7uewd3hkcqpq8m76awca3y37hkvuneavuw6pjj4525fw90necxmadrvjg0sdy6qsmthtls ).
So, to @npub1r6cdfl0z2zeg5nc0txttmfxxxw98k6quyckgc4zqh5zhd49hnwrs75gflm:
Apologies; the reply I've been trying to post is:
"This is all new to me. I’m learning as I go along trying to build Brian, my replacement brain."
It’s weird because the same thing happens to me. In Primal everything works, but in Iris I cannot even see my reply!
Maybe I did something wrong? I’m new to Nostr :(
I really hope this turns out to be true. I’m opposed to the idea that “scale is all you need”; rather, I believe that “innovation / research are all you need.”
The concern I have is that the scaling strategy can still be applied to multi-pass models, which would then likely outperform smaller ones. This not only increases training costs but also makes inference more expensive, since each query requires multiple forward passes.
That said, I’m not very familiar with these types of architectures, so I’d be happy to read any material you’d recommend.
For me, this is all fine so far. The problem is that we cannot access the probabilities (or logits) associated with the chosen tokens in closed-source models. A model may be internally uncertain about an answer and still present an incorrect response as if it were sure.
It is crucial to have access to the level of confidence behind a model’s answers. Somehow, the uncertainty associated with an output needs to be quantified, and the user should be made aware of it.
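To make this concrete, here is a minimal sketch of what that access looks like when the weights are open. It assumes the Hugging Face transformers library and uses gpt2 purely as an illustrative model; it reads back the probability the model assigned to each token it generated, a rough per-token confidence signal that closed APIs typically hide:

```python
# Minimal sketch, assuming Hugging Face transformers and an open
# model (gpt2 here, purely as an example). With open weights we can
# inspect the logits behind every chosen token and turn them into a
# rough per-token confidence signal.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tok("The capital of France is", return_tensors="pt")
out = model.generate(
    **inputs,
    max_new_tokens=5,
    output_scores=True,            # keep the logits for each step
    return_dict_in_generate=True,
)

prompt_len = inputs.input_ids.shape[1]
for step, logits in enumerate(out.scores):
    probs = torch.softmax(logits[0], dim=-1)
    token_id = out.sequences[0, prompt_len + step]
    # A low probability flags a token the model was unsure about.
    print(repr(tok.decode(token_id)), f"p={probs[token_id].item():.3f}")
```

This is exactly the information that would let a user see how confident the model was, and it simply is not exposed by most closed-source APIs.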
I will always be up for a discussion ;)
I just eradicated 42 Nostr zombies using #PlebsVsZombies! ⚔️🧟♀️🧟♀️
My Zombie Score™ was 14%! What's yours?
🟪🟪🟪🟪🟪🟪🟪🟪🟪🟪🟪🟪🟩🟩
Follow nostr:npub1pvz2c9z4pau26xdwfya24d0qhn6ne8zp9vwjuyxw629wkj9vh5lsrrsd4h and join the hunt at: 🏹
# Privacy AI Is Possible
Because of privacy concerns, I had been reluctant to use LLMs. I started experimenting in 2023 because I realized this was coming either way, and I could either make use of it or be left behind.
## Power
I started with ChatGPT, as everyone does, but soon stopped using it for political and privacy reasons. Sam Altman, co-founder of OpenAI, is also a co-founder of the Worldcoin project. Worldcoin is a cryptocurrency that requires individuals to scan their irises to identify themselves. It has been rolled out aggressively on the African continent, paying anyone who signed up $25 in exchange for their biometric data. In Kenya, Worldcoin became the subject of a 2025 court case and was rightfully ordered to delete all the data it had collected.
I don’t want to share my data with companies showing no regard for dignity and privacy, and taking advantage of unequal bargaining situations.
So no ChatGPT for me.
## Agency
Claude was the first tool I used on a regular basis. Once I understood that conversations from ChatGPT’s free plan might end up in Google Search, and that the same might be true of other models, I decided to subscribe to a paid plan on Claude.
I am using it with a nym (a fake name and email), but of course my payment data is still associated with my account. That’s why I was looking for more private options.
The point for me is simple: I want to use AI, but I want to choose the terms. I don’t want “convenience” to mean “total surveillance.”
## Tools
### PayPerQ offers Bitcoin payments
nostr:npub16g4umvwj2pduqc8kt2rv6heq2vhvtulyrsr2a20d4suldwnkl4hquekv4h lets you pay with Bitcoin over Lightning, which improves your privacy because your real name is never associated with your queries. It offers a variety of LLMs for chat, image, video, audio, and DeepResearch, which makes it easy to experiment. It has also multiplied the number of my experiments, because I want to know what different models produce and which is best.
I think it is essential to find out which tools are the right ones for your needs. Honestly, I haven’t found mine yet.
I like Claude Sonnet 4.5 for editing texts. DeepResearch is incredible at doing what its name says, although the depth of the results can be overwhelming. Z.AI’s GLM 4.7 was great for strategic thinking, but then it fell short of my expectations in text editing.
PayPerQ hides your identity in the purchasing process, but your prompts and conversations still end up with the companies behind the models. I am not against them learning what I ask or the corrections I make - AI makes a lot of mistakes and has a lot to learn from us. I actually want LLMs to crawl my work, but I don’t want them to save every little thing I do and mix it with my private questions.
### Maple AI: privacy from sign-up to LLMs
nostr:npub10hpcheepez0fl5uz6yj4taz659l0ag7gn6gnpjquxg84kn6yqeksxkdxkr is the best solution I have found. It runs on open-source code and open models. It says it never uses your data to train AI, doesn’t log your chats, retains zero data, and lets you pay with Bitcoin. It offers many models (including OpenAI GPT-OSS — yes, OpenAI, but in a private way).
Maple AI states that communications are encrypted locally on your device before being transmitted, that their servers can’t read your data, and that even during processing the pipeline is designed with privacy as the priority.
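To illustrate the principle (this is not Maple AI's actual code, which I haven't seen; it's a minimal sketch assuming Python's `cryptography` package), client-side encryption means the prompt is turned into ciphertext on your device before anything is transmitted:

```python
# Minimal illustration of the client-side encryption principle,
# assuming the `cryptography` package. This is NOT Maple AI's
# implementation; it only shows why "encrypted locally before
# transmission" means the server operator cannot read your prompt.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # generated and kept on the device
cipher = Fernet(key)

prompt = "A question I would rather keep private."
ciphertext = cipher.encrypt(prompt.encode())  # encrypted locally

# Only `ciphertext` leaves the device; without `key`, the server
# and the network see nothing but opaque bytes.
assert cipher.decrypt(ciphertext).decode() == prompt
```

The hard part, which Maple says it solves, is keeping that guarantee even while the model processes your request.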
I want AI as a tool, not as a trap.
#Daily #AI
Current AI development is dominated by actors who can afford scaling, locking innovation behind capital, infrastructure, and centralized power. This leaves little room for individuals and communities who want to build capable models but cannot compete with Wall Street’s “scale is all you need” doctrine.
What we need are decentralized AI systems that are built collectively, owned collectively, and designed from the ground up to ensure user privacy.
Both the model architecture and the training data should be fully transparent, while the model weights could be monetized to reward contributors.
This creates a transparent, community-driven free-market ecosystem, where users decide which projects to fund and support, aligning incentives with innovation.
I'm new here. Just want to say hi to the community!👋
An efficient, decentralized learning protocol is the next step.
