Something like this, everyone is paying according to their Relay usage 🤔

Hm, not sure a normal pleb would know how much storage they need. Would be good to charge per note. Maybe like 10 millisats or so? 🐶🐾🫡
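For a sense of scale, here's a quick back-of-the-envelope cost check for per-note pricing; the posting volume is a made-up assumption, not real data:

```typescript
// Rough cost of per-note pricing at 10 millisats per note.
// The 50,000-note figure is an arbitrary assumption for illustration.
const msatPerNote = 10;
const notes = 50_000; // roughly a very active account over a few years
const totalSats = (msatPerNote * notes) / 1000;
console.log(`${totalSats} sats total`); // 500 sats, a trivial amount
```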
Maybe DM alerts with a link? Not sure tho 😂😂
That could be one trick, or allow a certain overage with an alert in the post or DM. 🐶🐾🫡
This is event data for the top 34 pubkeys; however, it includes historical kind 0 and kind 3 events, and kind 3 is typically around 40% of the size.
The second image is by count.
Effectively, 50MB would go a long way. This data is from all time, since early 2022.


Is it a good idea to have it storage-based rather than subscription-based?
My thoughts are, maybe both can work.
However, just like Gmail or Dropbox, everything has a fair-use data cap.
Either way, somewhere you will need a MB/GB limit per pubkey or whatever grouping you use.
Event count is hard because kind 7 is tiny and kind 3 is big. And future kinds may be even larger.
And if you have a MB/GB cap and you host media, it can all sit under the same limit.
You could also have a date cap (likely alongside a size cap), to persist events for 3 months or whatever.
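A minimal sketch of what a per-pubkey byte cap could look like on the relay side, measuring the serialized event size so tiny kind 7s and big kind 3s are handled automatically; the names here (QuotaStore, acceptEvent, the 50MB default) are illustrative assumptions, not any existing relay's API:

```typescript
// Illustrative sketch only: a per-pubkey storage cap check for a relay.
// QuotaStore and acceptEvent are hypothetical names, not an existing API.

interface NostrEvent {
  id: string;
  pubkey: string;
  kind: number;
  created_at: number;
  tags: string[][];
  content: string;
  sig: string;
}

const DEFAULT_CAP_BYTES = 50 * 1024 * 1024; // e.g. a 50MB cap per pubkey

// Hypothetical store that tracks serialized bytes used per pubkey.
interface QuotaStore {
  bytesUsed(pubkey: string): Promise<number>;
}

// Measure the serialized event, so kind 7 reactions count as tiny and
// kind 3 contact lists count as big without any per-kind rules.
function eventSizeBytes(event: NostrEvent): number {
  return new TextEncoder().encode(JSON.stringify(event)).length;
}

async function acceptEvent(
  store: QuotaStore,
  event: NostrEvent,
  capBytes: number = DEFAULT_CAP_BYTES,
): Promise<{ ok: boolean; reason?: string }> {
  const used = await store.bytesUsed(event.pubkey);
  if (used + eventSizeBytes(event) > capBytes) {
    return { ok: false, reason: "blocked: storage cap reached for pubkey" };
  }
  return { ok: true };
}
```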
I think the more micro-transaction-style offers work, but mostly if you don't trust the node or provider.
You’re basically saying, here is a small risk I’m taking, but it’s 10MB of data and $0.10 (or whatever).
Maybe it's a way to scale out redundancy across relays. And perhaps you have a cheap and easy way to check they still hold those events, maybe by asking them, every so often, to hash all of the event ids they should have together with a nonce you provide.
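A small sketch of that nonce check, assuming both sides agree to hash the event ids in sorted order; the function names are illustrative, and the transport (how the nonce and proof travel) is left out:

```typescript
// Sketch of the nonce-based storage check. Names are illustrative, and the
// transport (how nonce and proof are exchanged) is intentionally left out.

import { createHash, randomBytes } from "node:crypto";

// Both sides must agree on ordering, so hash the event ids sorted.
function storageProof(eventIds: string[], nonce: string): string {
  const hash = createHash("sha256");
  hash.update(nonce);
  for (const id of [...eventIds].sort()) {
    hash.update(id);
  }
  return hash.digest("hex");
}

// Client side: send a fresh nonce, then compare the relay's answer against
// a proof computed locally over the ids the relay is supposed to hold.
function verifyRelay(
  expectedIds: string[],
  relayProof: string,
  nonce: string,
): boolean {
  return storageProof(expectedIds, nonce) === relayProof;
}

const nonce = randomBytes(16).toString("hex");
// const proof = await askRelayForProof(nonce); // hypothetical transport call
// verifyRelay(myEventIds, proof, nonce);
```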
Thank you so much, it was very helpful. I will give this model a try and see how it goes.
My only other thought: because WebSocket messages are effectively capped in size, it's possible to model off a max event size of maybe 2-4MB. You could then also get averages per kind, an average event size overall, and model it that way.
However, it's possible events could be bigger in future, and p2p or WebTransport transport layers may increase the max size.
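To make the modelling idea concrete, here is a toy capacity estimate; every number in it (the 4MB ceiling, the per-kind averages, the monthly event counts) is a placeholder assumption, not measured data:

```typescript
// Toy capacity model: hard per-event ceiling plus assumed per-kind averages.
// All numbers below are placeholders for illustration, not measurements.

const MAX_EVENT_BYTES = 4 * 1024 * 1024; // assumed WebSocket-bound worst case

// Assumed average sizes: kind 7 reactions tiny, kind 3 contact lists big.
const avgBytesPerKind: Record<number, number> = {
  1: 300, // short text note
  3: 40_000, // contact list (historically large)
  7: 120, // reaction
};

// Assumed monthly activity for a typical user.
const eventsPerUserPerMonth: Record<number, number> = {
  1: 200,
  3: 4, // a few contact-list rewrites
  7: 500,
};

let bytesPerMonth = 0;
for (const [kind, avg] of Object.entries(avgBytesPerKind)) {
  bytesPerMonth += avg * (eventsPerUserPerMonth[Number(kind)] ?? 0);
}
// ~280,000 bytes/month with these numbers, so a 50MB cap lasts ~15 years,
// as long as no single event approaches MAX_EVENT_BYTES.
console.log(`${(bytesPerMonth / 1024).toFixed(1)} KiB per user per month`);
```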
If you'd like some more stats, I can run them: average event size per kind, number of events per kind, or average events per user seen at least three months ago.
One last thought. Perhaps this is the end game.
In a client app, when I add a relay as a write relay, perhaps it sends a payment to that relay for 10MB. Then as you use data, maybe there is a way to monitor usage and pay for more, either app-automated (within a budget) or with approval.
In a way, it could all be automated. Relays get paid as people add and use them, and you pay more based on usage. You get redundancy controls. You drop a write relay, and at some point your events get turned over.
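A sketch of how that client-side loop might look, with the payment call and usage reporting as hypothetical placeholders, since there is no standard Nostr API for this today:

```typescript
// Sketch of a client-side top-up loop. payRelay and the usage numbers are
// hypothetical placeholders; no standard Nostr API exists for this today.

const BLOCK_BYTES = 10 * 1024 * 1024; // buy storage in 10MB blocks
const TOP_UP_THRESHOLD = 0.9; // top up when 90% of paid storage is used
const MONTHLY_BUDGET_SATS = 1_000; // user-approved automation budget

interface RelayAccount {
  paidBytes: number;
  usedBytes: number; // reported by the relay, or tracked locally
  spentSats: number; // spent so far this month
}

async function maybeTopUp(
  account: RelayAccount,
  blockPriceSats: number, // what the relay quotes per 10MB block
  payRelay: (sats: number) => Promise<void>, // e.g. a Lightning payment
): Promise<void> {
  const nearlyFull = account.usedBytes >= account.paidBytes * TOP_UP_THRESHOLD;
  const withinBudget =
    account.spentSats + blockPriceSats <= MONTHLY_BUDGET_SATS;
  if (nearlyFull && withinBudget) {
    await payRelay(blockPriceSats);
    account.paidBytes += BLOCK_BYTES;
    account.spentSats += blockPriceSats;
  }
  // If over budget, fall back to asking the user for explicit approval.
}
```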
That's exactly why I want to test this approach. If there's a way for users to monitor their usage and for payment to be automated when they need more storage, the whole process can be seamless. I think this may take some time, but I can see it happening in the future.
In theory, I could monitor their usage and create some kind of automation whereby, if users don't have enough storage, they would receive an alert in their DMs along with a payment link.
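One way that relay-side automation could be wired up, as a sketch; createPaymentLink and sendDm are hypothetical helpers standing in for an invoice/LNURL generator and an encrypted DM (e.g. NIP-04/NIP-17):

```typescript
// Relay-side sketch of the alert flow: when a pubkey crosses its paid
// storage, DM them a payment link. Both helpers passed in are hypothetical.

interface UsageRecord {
  pubkey: string;
  usedBytes: number;
  paidBytes: number;
  alerted: boolean;
}

async function alertOverages(
  records: UsageRecord[],
  createPaymentLink: (pubkey: string) => Promise<string>, // hypothetical
  sendDm: (pubkey: string, message: string) => Promise<void>, // hypothetical
): Promise<void> {
  for (const r of records) {
    if (r.usedBytes > r.paidBytes && !r.alerted) {
      const link = await createPaymentLink(r.pubkey);
      await sendDm(
        r.pubkey,
        `You have used ${r.usedBytes} of ${r.paidBytes} paid bytes. ` +
          `Top up here to keep writing: ${link}`,
      );
      r.alerted = true; // avoid spamming the same user repeatedly
    }
  }
}
```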
I have one working relay which is using this exact model, but it's just for testing right now.
I think the full solution should not trust relays about how much they are storing, and should include a proof-of-storage/retrievability check.
I imagine clients could even keep a Merkle root or some hash, and even store it as an event linked to that relay. A nonce check or similar could then help a relay prove it is at least storing those event ids. Maybe there's a way to expand that to cover whole events too, without the client storing all the data.
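A sketch of that Merkle idea, building a root over the sorted event ids so the client only has to keep (or publish) one small hash; the construction here (sorted ids, duplicating the last node on odd levels) is one possible choice, not a specified protocol:

```typescript
// Sketch of the Merkle idea: one small root over sorted event ids, cheap for
// a client to keep or publish as an event tag. Illustrative only; no NIP
// specifies this construction today.

import { createHash } from "node:crypto";

const sha256 = (data: string): string =>
  createHash("sha256").update(data).digest("hex");

// Build a Merkle root over sorted event ids, duplicating the last node on
// odd-sized levels (as Bitcoin's tree does) to keep pairing simple.
function merkleRoot(eventIds: string[]): string {
  if (eventIds.length === 0) return sha256("");
  let level = [...eventIds].sort().map((id) => sha256(id));
  while (level.length > 1) {
    const next: string[] = [];
    for (let i = 0; i < level.length; i += 2) {
      const right = level[i + 1] ?? level[i];
      next.push(sha256(level[i] + right));
    }
    level = next;
  }
  return level[0];
}

// The 32-byte root is all the client keeps; it can later ask the relay for
// the full sorted id list, re-hash it, and check the result against the root.
```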
One edge case: what if someone broadcasts your event? Who pays? And can someone broadcasting events en masse trigger relays to ask for more money from a pubkey who didn't broadcast or publish them?