Yep. Being a passive transfer is also a major feature.
Requiring two actively online/communicating clients is active/active and way more complex.
Ok, so here is a draft NIP for a Proof of Work Service Provider. All feedback welcome!
Would love to hear from relay operators, open source relay contributors, client apps, and Nostr users.
Basically, it allows you to generate event PoW before signing, as part of a relay membership or pre-paid credit system. Ultimately, to succeed, Nostr apps would need to offer it.
I have a fully functional Rust implementation I’ll likely open source (if there's enough interest); it just needs the payment integration code - and a bunch of testing.
https://gist.github.com/blakejakopovic/6c0ea718c0f956c461e9e8952d8c6533
It’s an option anyone could use for every note, I guess, or perhaps just when relays are rejecting notes below a PoW minimum. If you’re not a member, it could be an ok way for relays to generate income - or cover costs.
In early testing, you need a difficulty of at least 20 PoW to start combating spam and automated flooding. That’s more than a laptop or mobile can do in a second, so somewhere around there may be the starting point.
Plus, if you do the PoW once, ideally your event is now accepted by any relay with a minimum PoW that’s the same or lower.
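For context, that “20 PoW” is the NIP-13 notion of difficulty: the number of leading zero bits in the event id (the sha256 of the serialised event). A minimal sketch of the counting, assuming the standard NIP-13 definition - the actual service in the gist may do more than this:

```rust
/// Count leading zero bits of an event id (NIP-13 difficulty).
/// `id` is the 32-byte sha256 hash of the serialised event.
fn pow_difficulty(id: &[u8; 32]) -> u32 {
    let mut bits = 0;
    for byte in id {
        if *byte == 0 {
            bits += 8; // a whole zero byte contributes 8 bits
        } else {
            bits += byte.leading_zeros(); // partial byte, then stop
            break;
        }
    }
    bits
}

fn main() {
    // A hypothetical id starting 0x00 0x00 0x07... has 8 + 8 + 5 = 21
    // leading zero bits, so it clears a relay minimum of 20.
    let mut id = [0xffu8; 32];
    id[0] = 0x00;
    id[1] = 0x00;
    id[2] = 0x07;
    assert_eq!(pow_difficulty(&id), 21);
    println!("difficulty: {} bits", pow_difficulty(&id));
}
```

Mining is then just retrying nonce tag values until the resulting id clears the target.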
Well, alternatively, I’m thinking that in the next 6 months relays will be forced to start dropping older or less ‘valuable’ events.
Large databases are not fun. And unless someone is paying you for storage, I’d expect the content to have an expiry anyway.
Has Musk taken control of the Twitter bots to use them against Nostr?
It may also be that the image file size is too large, so clients skip downloading it.
I spy the bike profile pic.
Yeah. It’s why I’ve built that PoW Service Provider. I’ll open source it this week. It’s not 100% complete, but perhaps good enough to whitelist pubkeys and test it.
It’s funny, I’ve mostly been messing with aggregation and spam stuff to help find performance weaknesses and address them - before the network 10xs and then 10xs again. I think at 1000x we likely start seeing the network become islands, and relays become tiered into classes.
I’ve been writing too much SQL, all my rust comparisons have single equals 🫤
😄
The problem with me filtering stuff is that I’m not sure what I don’t see anymore, but the network does. Ha.
I do have an event rejection queue, but it’s limited in size.
I hate spending time on it - spam is just wasting my time - but I’ve built some pretty solid detection across the board. Have a few more things in the works.
Literally just purged another 800k events from my db, by backtesting against new defences.
To be fair, that’s kind of the default. It’s extra work to process and remove deleted events. The code doesn’t write itself.
However, it was never intended as any guarantee.. more like a ‘please forget, and stop including it in future requests’.
I’d imagine it’s not very interesting content anyway - most client apps don’t even have a UI for it.
And the major issue is that, technically, any deletion event would need to live forever to even check against - meaning both a spam vector and a growing database cost.
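To make that cost concrete: honouring a kind-5 (NIP-09) deletion against future rebroadcasts means keeping a tombstone around indefinitely. A minimal in-memory sketch - types and names are illustrative, and a real relay would persist this, which is exactly the growing cost:

```rust
use std::collections::HashSet;

/// To stop a deleted event reappearing when it's rebroadcast later, the
/// relay has to remember every (author pubkey, deleted event id) pair it
/// has ever honoured. Per NIP-09, only the author may delete their events.
#[derive(Default)]
struct DeletionIndex {
    tombstones: HashSet<(String, String)>, // (author pubkey hex, event id hex)
}

impl DeletionIndex {
    /// Record a kind-5 deletion request from `author` for `deleted_id`.
    fn record(&mut self, author: &str, deleted_id: &str) {
        self.tombstones
            .insert((author.to_string(), deleted_id.to_string()));
    }

    /// Reject an incoming event if its author previously deleted it.
    fn is_deleted(&self, author: &str, event_id: &str) -> bool {
        self.tombstones
            .contains(&(author.to_string(), event_id.to_string()))
    }
}

fn main() {
    let mut index = DeletionIndex::default();
    index.record("author_pk_hex", "abc123"); // hypothetical ids
    assert!(index.is_deleted("author_pk_hex", "abc123"));
    assert!(!index.is_deleted("other_pk_hex", "abc123")); // wrong author: not deleted
}
```

The set only ever grows, and anyone can make it grow - which is the spam vector.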
Seeing way more deletion event spikes being broadcast the past couple of days. 48k in the past 6 hours - mostly over a 15 minute period.
I haven’t validated the deletes beyond checking they’re valid JSON events.
Maybe a new delete-everything tool, someone testing, spam.. not sure.


Cashu in one line:
Cashu is a Chaumian ecash system and protocol for Bitcoin that gives users of custodial Bitcoin apps near-perfect privacy.
Read more: https://cashu.space/
I think transaction speed should be an important value prop to share too. And a zero failure rate? I think.
I’m just lowering my standards.. autocorrect isn’t much better.
We decided it was best to trust politicians and government (read: legal corruption) with our life (read: singular period of existence).
They have only your personal best interest at heart, and seek only for you to live a happy and fulfilling life (under their personally prosperous rule, and at your expense).
The reason Bitcoin can’t fail is that oppression never lasts. It’s a shared idea now, regardless of technical implementations.
But also just yin and yang.
That would make sense to me.
I’ve got a pretty large training set of 13k spam events (with some dupes). You could filter the ones labelled as spam, and perhaps hash the content into a set. Then maybe check membership?
I also have around 28k pubkeys flagged as spam I can share directly. You could review them and then delete their events.
Failing those, you could use the ML to get spam scores.. but that’s likely more computationally expensive.
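A minimal sketch of the hash-into-a-set idea - the normalisation and names are illustrative, and a production filter would likely use a stable cryptographic hash like sha256 rather than std’s DefaultHasher, which isn’t guaranteed stable across Rust releases:

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashSet;
use std::hash::{Hash, Hasher};

/// Hash an event's content after light normalisation, so trivially
/// re-posted spam (extra whitespace, case changes) still collides.
fn content_fingerprint(content: &str) -> u64 {
    let normalised: String = content
        .split_whitespace()
        .collect::<Vec<_>>()
        .join(" ")
        .to_lowercase();
    let mut hasher = DefaultHasher::new();
    normalised.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    // Build the set once from the labelled spam corpus (hypothetical samples)...
    let spam_corpus = ["Buy cheap sats NOW!!", "follow me for airdrops"];
    let spam_set: HashSet<u64> =
        spam_corpus.iter().map(|c| content_fingerprint(c)).collect();

    // ...then membership checks on incoming events are O(1).
    assert!(spam_set.contains(&content_fingerprint("buy  cheap sats now!!")));
    assert!(!spam_set.contains(&content_fingerprint("gm")));
}
```

Exact-match hashing like this only catches verbatim and near-verbatim dupes, which is why the ML scoring is the fallback for everything else.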
I’ve just purged around 2.8MM spam events. Some I can’t detect easily yet - like bogus reactions and reposts. I see them in network traffic; I just can’t do anything about them automatically.
Less frequent, for sure; however, unless you re-check occasionally, they could change it - so you’d be holding a stale cache.
The point about a pubkey being more trusted than DNS is valid, as long as people aren’t spoofed when they initially follow someone.
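A minimal sketch of the re-check idea for cached NIP-05 lookups, assuming a simple TTL - the names and the one-hour max age are illustrative:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// Serve the cached pubkey while fresh, and treat the entry as a miss once
/// it's older than `max_age`, so a changed DNS record can't leave you
/// holding a stale mapping forever.
struct Nip05Cache {
    max_age: Duration,
    entries: HashMap<String, (String, Instant)>, // address -> (pubkey hex, fetched at)
}

impl Nip05Cache {
    fn new(max_age: Duration) -> Self {
        Self { max_age, entries: HashMap::new() }
    }

    /// Returns the cached pubkey, or None if missing/stale (caller re-verifies).
    fn get(&self, address: &str) -> Option<&str> {
        self.entries.get(address).and_then(|(pubkey, fetched)| {
            (fetched.elapsed() < self.max_age).then_some(pubkey.as_str())
        })
    }

    fn put(&mut self, address: String, pubkey: String) {
        self.entries.insert(address, (pubkey, Instant::now()));
    }
}

fn main() {
    let mut cache = Nip05Cache::new(Duration::from_secs(3600));
    cache.put("alice@example.com".into(), "deadbeef".into()); // hypothetical pubkey
    assert!(cache.get("alice@example.com").is_some()); // fresh: trust it
    assert!(cache.get("bob@example.com").is_none());   // unknown: fetch + verify
}
```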
Well, in a way, it replicates out as relays sync or broadcast. It started mostly on one relay, and toward the end it was a lot.
Correct after. Link didn’t copy. Doh.

