Yeah. This less-centralised stuff sure is compute heavy as it scales.
What’s your MongoDB hosted on?
It’s a simple approach, but I’ve been using +1%/day target too.
Yep. I think it needs a simple enough client UI too. The easier they are to create and view/understand, the better.
This is what the older Tweetbot had for muting. Add more match types like kind and it could be a pretty good start.

I think the ability for client apps to create custom views would be awesome.
Relay, content, kind, or other SKU based. Basically dynamic lists - likely with a mapping to a relay query behind the scenes. Then they could be shared easily too.
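Behind the scenes, a dynamic list like that could compile down to a NIP-01 `REQ` subscription filter. A minimal sketch - the function name and list fields here are made up for illustration:

```python
import json

def list_to_req(sub_id, kinds=None, authors=None, hashtags=None):
    """Compile a hypothetical 'dynamic list' definition into a NIP-01 REQ message."""
    f = {}
    if kinds:
        f["kinds"] = kinds      # e.g. [1] for text notes
    if authors:
        f["authors"] = authors  # hex pubkeys
    if hashtags:
        f["#t"] = hashtags      # "t" tag filter per NIP-01
    return json.dumps(["REQ", sub_id, f])

# A kind+hashtag list, shareable as plain JSON
print(list_to_req("catstr", kinds=[1], hashtags=["catstr"]))
```

Since the compiled filter is just JSON, sharing a list between clients is as easy as sharing that object.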
Update the Damus app and, top right, there is an icon that lets you select fewer or specific (e.g. paid-only) relays in the search/global/universal view.
It’s a good question.
What are the user feedback (experience) mechanisms where it’s noticeable? How can relays demonstrate value, both short and long term? Are three paid relays doing what I’m paying five for?
I see paid relays having portals that display stats, enable features and allow configurations.
I’ve likely got the data but haven’t read any newish NIPs, so I’m not tracking it yet.
I’ve seen a couple people with stats. It’s awesome to see ⚡️ in action.
Sadly neither :/ Otherwise I would jump in.
I’d guess Tweetbot used UIKit with a custom UITableViewController, but perhaps SwiftUI supports it natively now. I tried to replicate it a while back with no joy.
I thought it was ok for a real-time indicator. The question is: should real-time auto-scroll? Ideally I think the line should be more like a cursor for the latest seen post - all new posts would appear above it.
Is it possible to show the number of events loaded out of sight (above), so you don’t need to click Show More (which scrolls to the absolute top) and can just scroll past the current latest marker to see new content?
Tweetbot did this and it works well. The new count was a smaller text bubble shown top right, only when new content was above your current marker.
NostrGraph isn’t, however a few parts are. You can aggregate relay events using Nostcat. https://github.com/blakejakopovic/nostcat
The goal is to start a business. If that doesn’t work out, I’ll likely open source stuff.
NostrGraph is currently connected to around 200 relays and processing 6,300 events/minute (105/sec). Around 12.4MM (non-spam+valid) Nostr events total.
There are about 3,000 events/minute that are valid (not spam or invalid). And around 250 unique (valid + deduplicated) events/minute.
An average of around 80 reactions, 75 notes, 50 contact list/relay updates, 30 metadata (no spam filtering) and 10 reposted events per minute.
And thanks to async Rust, I’m at about 50% usage of a 4x (shared) vCPU with 8GB RAM. Plus the server is doing other work too.

Yeah. I expected that model’s prediction to perform better on CPU. It runs better on GPU, however I’m avoiding the server cost for now.
I’ve written a Bayes model trainer for that repo as well, but haven’t pushed the code yet. It’s pretty fast - maybe 100-200 req/sec on my laptop. I’ve been using gunicorn as well.
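For flavour, here’s a toy multinomial naive Bayes spam/ham classifier with Laplace smoothing - an illustrative sketch of the general approach, not the actual trainer from the repo:

```python
import math
from collections import Counter

class ToyBayes:
    """Toy multinomial naive Bayes with Laplace smoothing (illustration only)."""

    def __init__(self):
        self.words = {"spam": Counter(), "ham": Counter()}
        self.docs = {"spam": 0, "ham": 0}

    def train(self, text, label):
        self.docs[label] += 1
        self.words[label].update(text.lower().split())

    def predict(self, text):
        total_docs = sum(self.docs.values())
        scores = {}
        for label in self.words:
            total = sum(self.words[label].values())
            vocab = len(self.words[label]) + 1
            score = math.log(self.docs[label] / total_docs)  # class prior
            for w in text.lower().split():
                # Laplace-smoothed word likelihood
                score += math.log((self.words[label][w] + 1) / (total + vocab))
            scores[label] = score
        return max(scores, key=scores.get)

clf = ToyBayes()
clf.train("free sats click now", "spam")
clf.train("buy free followers now", "spam")
clf.train("gm have a great day", "ham")
clf.train("great post thanks", "ham")
print(clf.predict("free sats"))  # spam
```

Counting plus a few log-sums per request is why this kind of model is cheap to serve from a gunicorn worker.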
I’ll try to push the update this week. I have some new training data I can likely push too.
nostr.watch daemons can soon be blocked via `robots.txt`
Blocking nostr.watch daemons will eventually result in a 0% uptime score on the site, limited data availability on nostr.watch, and exclusion of your relay’s data from the global historical data, which has not yet been revealed.
Robots.txt will not affect client-side checks, and your relay will still be listed on nostr.watch. Delisting of online relays is not currently supported.
If your relay is `wss://relay.com` then the robots.txt location would be `https://relay.com/robots.txt`
Using robots.txt is not exactly standard, but it was easy to piggy-back on, and it is temporary. A better solution would be an amendment to NIP-11 of some sort. Robots.txt parsing by nostr.watch daemons will be deprecated once there is a suitable alternative.
If your robots.txt is currently disallowing all User-agents, but you wish to allow nostr.watch, add:
User-agent: nostr.watch
Allow: /
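If you want to sanity-check your rules before relying on them, Python’s stdlib robots.txt parser applies the same User-agent group matching (the daemons’ actual parser may differ):

```python
from urllib.robotparser import RobotFileParser

# A robots.txt that blocks everyone except nostr.watch
rules = """
User-agent: *
Disallow: /

User-agent: nostr.watch
Allow: /
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# The specific nostr.watch group wins over the wildcard default
print(rp.can_fetch("nostr.watch", "https://relay.com/"))   # True
print(rp.can_fetch("SomeOtherBot", "https://relay.com/"))  # False
```

Specific `User-agent` groups are checked before the `*` default, which is why the allow rule works even under a blanket disallow.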
Please be aware the daemons are getting more performant, optimized, and polite with each passing day. They were pretty rude out of the gate, largely due to a feedback loop between two disparate bugs. Sorry about that.
Do you have a list of IPs you use? I’ve blocked a few IPs that seemed to be malfunctioning or malicious.
Another reason is something like Cloudflare, where the hostname resolves to a centralised Cloudflare DC, so the location isn’t necessarily accurate for the server itself.
There is a relay that requires a pubkey to have a PoW minimum of N - I forget the threshold and URL.
It’s difficult, as I think PoW-to-$ calculations are hard. And then server ROI vs phone-battery-depletion ROI to generate PoW creates an asymmetry.
Which is why I liked PoW aggregation - the sum of your PoW over time. But a sufficiently persistent spammer can also build up free PoW through randomness, or they can just spend.
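For reference, NIP-13 defines an event’s PoW difficulty as the number of leading zero bits in its id, so aggregation is just a sum over a pubkey’s events. A quick sketch (helper names are mine):

```python
def pow_difficulty(event_id_hex: str) -> int:
    """Leading zero bits of a Nostr event id (NIP-13 difficulty)."""
    total_bits = len(event_id_hex) * 4
    value = int(event_id_hex, 16)
    if value == 0:
        return total_bits
    return total_bits - value.bit_length()

def aggregate_pow(event_ids) -> int:
    """Sum of difficulties over a pubkey's events - the aggregation idea above."""
    return sum(pow_difficulty(e) for e in event_ids)

# 0x0f... starts with 4 zero bits
print(pow_difficulty("0f" + "ff" * 31))  # 4
```

The randomness point falls out of this: even unmined ids occasionally start with a few zero bits, so small difficulties accrue for free over many events.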
It may end up being more of an indicator than a single-value truth test.
Anyone able to somewhat track Nostr’s impact on Lightning transaction growth? Maybe node runners? Maybe Lightning Address growth?
I know we took down WoS today. Maybe it’s small, but Nostr zaps could be random enough to be helping balance channels.. no idea. Maybe less random is better, to counter strong channel bias - unsure if we can leverage something to help.
Maybe wallet to wallet payment success rates are impacted - good or bad?

