Blake
b2dd40097e4d04b1a56fb3b65fc1d1aaf2929ad30fd842c74d68b9908744495b
#Bitcoin #Nostr #Freedom wss://relay.nostrgraph.net

This search filter NIP can help. Only a few relays I know of support it. https://github.com/nostr-protocol/nips/blob/master/50.md

I think clients should cache/index your own events better. My whole 3,500-event history is 3.5MB. You don’t even need to search a remote server. It’s also a poor man’s backup source then.

I definitely see long-term retrieval services/relays existing.

I’m mostly ripping on how stupid Twitter bios can be.

Where are the Nostr profile disclaimers saying “My zaps are not endorsements”?

As long as chat bots are called ‘chat bots’, I’ll struggle to take them seriously.

I think the full solution shouldn’t trust relays’ claims about how much they’re storing, and should include a proof of storage/retrievability check.

I imagine clients can even keep a Merkle root or some hash of their events, and even store it as an event linked to that relay. A nonce challenge or similar can help a relay prove it’s at least storing those event ids. Maybe there’s a way to expand that to cover whole events too, without the client storing all the data.
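
A minimal sketch of what that nonce check could look like, in TypeScript. The challenge/response flow and the askRelay callback are assumptions for illustration, not an existing NIP:

```ts
import { createHash, randomBytes } from "node:crypto";

// Digest the relay should be able to reproduce only if it still holds the events.
function storageProof(nonce: string, eventIds: string[]): string {
  const h = createHash("sha256");
  h.update(nonce);
  // Sort so client and relay agree on ordering regardless of storage order.
  for (const id of [...eventIds].sort()) h.update(id);
  return h.digest("hex");
}

// Client side: issue a fresh nonce, then compare the relay's answer against a
// local recomputation over the ids the client believes the relay is storing.
function verifyRelay(claimedIds: string[], askRelay: (nonce: string) => string): boolean {
  const nonce = randomBytes(16).toString("hex");
  return askRelay(nonce) === storageProof(nonce, claimedIds);
}
```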

One edge case: if someone else broadcasts your event, who pays? And can someone broadcasting events en masse trigger relays to ask for more money from a pubkey that never broadcast or published them?

One last thought. Perhaps this is end game.

In a client app, when I add a relay as write, perhaps it sends a payment to that relay for 10MB. Then, as you use data, maybe there’s a way to monitor usage and pay for more, either app-automated (with a budget) or with approval.

In a way, it could all be automated. Relays get paid as people add and use them, you pay more based on usage. You get redundancy controls. You drop a write relay and at some point your events get turned over.
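
A sketch of what the automated (budgeted) side could look like; the account shape, prices, and thresholds are all made up for illustration:

```ts
// Hypothetical shape of an automated relay top-up policy.
interface RelayAccount {
  relay: string;          // wss:// URL
  purchasedBytes: number;
  usedBytes: number;
}

const TOP_UP_BYTES = 10 * 1024 * 1024;  // buy 10MB at a time
const LOW_WATER = 1 * 1024 * 1024;      // top up when under 1MB remains
const MONTHLY_BUDGET_SATS = 2000;       // hard cap the app may spend unattended

let spentThisMonthSats = 0;

async function maybeTopUp(
  acct: RelayAccount,
  priceSats: number,
  pay: (sats: number) => Promise<void>,
): Promise<void> {
  const remaining = acct.purchasedBytes - acct.usedBytes;
  if (remaining > LOW_WATER) return;
  if (spentThisMonthSats + priceSats > MONTHLY_BUDGET_SATS) {
    // Over budget: fall back to asking the user for approval instead.
    return;
  }
  await pay(priceSats);
  spentThisMonthSats += priceSats;
  acct.purchasedBytes += TOP_UP_BYTES;
}
```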

My only other thought: because websocket messages are effectively capped in size, you could model around a max event size of 2-4MB. You could also get averages per kind (or whatever), and an overall average event size, and model it that way.

However, it’s possible events could be bigger in future. And it’s possible p2p or WebTransport transport layers may increase the max size.

If you’d like some more stats, I can run them. Like avg event size per kind, or number of events per kind. Or average events per user seen at least three months ago.
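
For what it’s worth, a rough sketch of how avg event size per kind could be computed from a raw event dump (the minimal event type is just for illustration):

```ts
// Minimal event shape for illustration; real events have more fields.
interface NostrEvent { kind: number; [key: string]: unknown }

// Average serialized size (bytes) per kind over a dump of events.
function avgSizePerKind(events: NostrEvent[]): Map<number, number> {
  const totals = new Map<number, { bytes: number; count: number }>();
  for (const ev of events) {
    const bytes = new TextEncoder().encode(JSON.stringify(ev)).length;
    const t = totals.get(ev.kind) ?? { bytes: 0, count: 0 };
    t.bytes += bytes;
    t.count += 1;
    totals.set(ev.kind, t);
  }
  return new Map([...totals].map(([kind, t]) => [kind, t.bytes / t.count]));
}
```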

When the creator added too many keys to his single frame protocol. 🙃

It’s a nice experience. Feels like it’s had a Japanese level of care taken to soften the edges. Like Ruby.

I think the more micro-transaction-style offers work, but mostly if you don’t trust the node or provider.

You’re basically saying, here is a small risk I’m taking, but it’s 10MB of data and $0.10 (or whatever).

Maybe it’s a way to scale out redundancy across relays. And perhaps you have a cheap and easy way to check they still hold those events, like asking them to hash all of the event ids they should have with a nonce you provide, every so often.

My thoughts are, maybe both can work.

However, just like Gmail or Dropbox, everything has a fair-use data cap.

Either way, somewhere, you will need a MB/GB limit per pubkey or whatever grouping you use.

Event count is hard because kind 7 (reactions) is tiny and kind 3 (contact lists) is big. And future kinds may be even larger.

And if you have a MB/GB cap, if you host media, it can all be under the same limit.

You could also have a date cap (likely with size too), to persist for 3 months or whatever.
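
Pulling those caps together, one hypothetical shape for a per-pubkey retention policy (all names and numbers illustrative):

```ts
// Hypothetical per-pubkey retention policy combining a byte cap
// (events plus hosted media) with a time cap.
interface RetentionPolicy {
  maxBytesPerPubkey: number;   // e.g. 50MB
  includeMediaInCap: boolean;  // hosted media counts against the same limit
  maxAgeDays?: number;         // e.g. 90 - drop events older than this
}

const example: RetentionPolicy = {
  maxBytesPerPubkey: 50 * 1024 * 1024,
  includeMediaInCap: true,
  maxAgeDays: 90,
};
```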

Your NDK may solve it. I haven’t taken a look yet.

I guess things like relay management, where it reconnects as needed. Or maybe it has a relay pool of 5 and only connects ad hoc to publish or request a single event, then closes the websocket.

Somehow tracking, per event per relay, whether a publish succeeded, the relay said it was a duplicate, it timed out, it errored, or the relay never connected.
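
A minimal sketch of that bookkeeping; the outcome labels are just examples:

```ts
// Per-event, per-relay publish outcomes.
type PublishOutcome = "ok" | "duplicate" | "timeout" | "error" | "never-connected";

// eventId -> relay URL -> outcome
const publishLog = new Map<string, Map<string, PublishOutcome>>();

function recordPublish(eventId: string, relay: string, outcome: PublishOutcome): void {
  if (!publishLog.has(eventId)) publishLog.set(eventId, new Map());
  publishLog.get(eventId)!.set(relay, outcome);
}
```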

Even window.nostr alone is great - but you can’t use it immediately, because the extension hasn’t loaded yet. So you need a way to test whether it has loaded. UI can depend on it being found/loaded - and obviously calling its functions too soon fails.
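
A small sketch of the readiness check, assuming a simple poll (NIP-07 extensions inject window.nostr after page load):

```ts
// Poll for the NIP-07 extension before enabling any signing UI.
function waitForNostr(timeoutMs = 3000): Promise<void> {
  return new Promise((resolve, reject) => {
    const start = Date.now();
    const timer = setInterval(() => {
      if ((window as any).nostr) {
        clearInterval(timer);
        resolve();
      } else if (Date.now() - start > timeoutMs) {
        clearInterval(timer);
        reject(new Error("NIP-07 extension not found"));
      }
    }, 50);
  });
}

// Usage: gate the UI on it.
// waitForNostr().then(enableSignButtons).catch(showExtensionPrompt);
```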

Maybe simple stuff too, like remembering loaded relays for a user. If I refresh the page, why do I need to load from the extension or kind 10002 again so soon?

Things like maybe sharing state across tabs. We could share the relay connections instead of opening new ones per tab - fairly sure this is possible.
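
One way this might look, sketched with a SharedWorker owning the socket (untested idea; the relay URL and message handling are illustrative):

```ts
// relay-worker.ts - a SharedWorker owns one websocket per origin, so every
// tab shares the same relay connection.
const ports: MessagePort[] = [];
const ws = new WebSocket("wss://relay.example.com"); // placeholder relay

ws.onmessage = (msg) => {
  for (const p of ports) p.postMessage(msg.data); // fan out to all tabs
};

// Fires once per tab that opens this SharedWorker.
(self as unknown as SharedWorkerGlobalScope).onconnect = (e) => {
  const port = e.ports[0];
  ports.push(port);
  port.onmessage = (m) => ws.send(m.data); // tab -> relay
};

// In each tab:
//   const worker = new SharedWorker("relay-worker.js");
//   worker.port.onmessage = (m) => handleRelayMessage(m.data);
//   worker.port.postMessage(JSON.stringify(["REQ", "sub1", { kinds: [1] }]));
```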

And even things like: if I provide an addr with relay suggestions, try querying those first, then fall back to others. Or, when I query for an event by id, can I kill/ignore all the other results that were too slow?

Maybe optional event signature verification for relay responses. Sometimes you want it immediately, and sometimes deferred for performance, depending on the processing.

Perhaps some optional pubkey cache.

That’s a start of some ideas anyway.

The goal of this app was basically to use Nostr browser extensions with a JSON editor to allow you to load, or create from scratch, an event, sign and publish it.

Using the command line was annoying, and I found weird serialisation issues when encoding JSON into the content field for the XKCD event conversion I worked on.

I was playing around with this prototype a few days ago. The JSON editor is simple, but nice to use.

I think this prototype is in hibernation for now… but I haven’t found any Nostr JS libraries that really solve the key repetitive utility functions I use often. They’re either too high-level or too low-level - I think there may be an opportunity in the gap.

1 sat = 1 sat - however, I’m still surprised no one has used kind 300XX yet to broadcast BTC exchange rates.

Could be a cool way to query rates using a pubkey, kind and a d-tag identifier.

If the pubkey is trusted, you can be confident it’s valid, as long as you also check the created_at.

Being able to query across known/trusted pubkeys for rates could be awesome too - as you can hunt for the best deal.
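
A sketch of what such a rate event and query could look like. Kind 30078 is picked arbitrarily from the addressable 30000-39999 range, since the actual 300XX kind isn’t pinned down:

```ts
// Hypothetical rate event; the d tag identifies the currency pair.
const rateEvent = {
  kind: 30078, // arbitrary example from the addressable range
  pubkey: "<oracle pubkey>",
  created_at: Math.floor(Date.now() / 1000),
  tags: [["d", "BTCUSD"]],
  content: JSON.stringify({ price: 12345.67 /* placeholder */ }),
};

// Query the latest BTCUSD rate across several trusted oracle pubkeys.
const filter = {
  kinds: [30078],
  authors: ["<oracle pubkey 1>", "<oracle pubkey 2>"],
  "#d": ["BTCUSD"],
};

// Freshness check before trusting a result.
const isFresh = (ev: { created_at: number }) =>
  Math.floor(Date.now() / 1000) - ev.created_at < 5 * 60; // under 5 minutes old
```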

Other real-time data could be interesting if shared like this too.

It’s possible to enable TOTP MFA for websites that support Nostr logins, as a secondary safety measure if your private key gets leaked/stolen.

I’d imagine it’s useful for more secure or financial services that benefit from additional authentication protection.
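
A sketch of that second factor server-side: a from-scratch RFC 6238 TOTP check (SHA-1, 6 digits, 30-second steps). The login wrapper is assumed, and the Nostr signature verification itself is out of scope here:

```ts
import { createHmac } from "node:crypto";

// Minimal RFC 6238 TOTP: SHA-1, 6 digits, 30-second time steps.
function totp(secret: Buffer, timeStepSec = 30, digits = 6): string {
  const counter = BigInt(Math.floor(Date.now() / 1000 / timeStepSec));
  const msg = Buffer.alloc(8);
  msg.writeBigUInt64BE(counter);
  const hmac = createHmac("sha1", secret).update(msg).digest();
  const offset = hmac[hmac.length - 1] & 0x0f;             // dynamic truncation
  const code = (hmac.readUInt32BE(offset) & 0x7fffffff) % 10 ** digits;
  return code.toString().padStart(digits, "0");
}

// Hypothetical login gate: the Nostr signature AND the TOTP code must pass.
function allowLogin(nostrAuthOk: boolean, submittedCode: string, userSecret: Buffer): boolean {
  return nostrAuthOk && submittedCode === totp(userSecret);
}
```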

This is event data for the top 34 pubkeys; however, it includes kind 0 and kind 3 history - and kind 3 is typically around 40% of the size.

Second image is by count.

Effectively, 50MB would go a long way. This data is from all time - since early 2022.

Replying to jack

💯

Private mutes too 🙂

If anyone has feedback on the Nostr Excel functions - queries they’d like to fetch, or specific functions - let me know.

I have a functional Excel add-in working; it just needs some interface restructuring for better UX - but that needs to be guided by how people will actually use it.

Currently it can stream updates to a cell (or cells) as new events come in - like watching a pubkey, or watching a content-addressable event.

It can also pull the latest N events for a query and dump them into rows and columns (a column for each key).
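
For the curious, a sketch of how the streaming cell could work, using the Office.js streaming custom function API; the relay URL and message parsing are illustrative:

```ts
/**
 * Streams the latest kind-1 note from a pubkey into a cell.
 * @customfunction LATESTNOTE
 * @param pubkey Hex pubkey to watch.
 * @param invocation Streaming invocation provided by Excel.
 */
function latestNote(
  pubkey: string,
  invocation: CustomFunctions.StreamingInvocation<string>
): void {
  const ws = new WebSocket("wss://relay.example.com"); // placeholder relay
  ws.onopen = () =>
    ws.send(JSON.stringify(["REQ", "sub1", { kinds: [1], authors: [pubkey], limit: 1 }]));
  ws.onmessage = (msg) => {
    const [type, , event] = JSON.parse(msg.data as string);
    if (type === "EVENT") invocation.setResult(event.content); // update the cell
  };
  // Excel calls this when the formula is removed - close the socket.
  invocation.onCanceled = () => ws.close();
}
```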

And who trapped Nostr in Excel?

#[0]