semisol
52b4a076bcbbbdc3a1aefa3735816cf74993b1b8db202b01c883c58be7fad8bd
👨‍💻 software developer 🔒 secure element firmware dev 📨 nostr.land relay all opinions are my own.

Took you months to join 😅

why does this read like it's AI-generated

also, it lacks basic understanding of the nostr protocol, where there is no order or consistency nostr:note1urafgtdztawpdce58s4wcv4f0tyl6m4y0rxcrjp9qyqxj9m0g8ashpatlc

also, you could easily perform a DoS attack on certain events by creating a lot of events with a higher event ID (or lower, depending on the relay impl) and the same created_at as the target event

you reach your limit before the target event gets returned and there is no way to find it except by id
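the attack above as a toy simulation — the sort order (created_at desc, then ID) and the hard limit are assumptions about a typical relay impl, and all names are hypothetical:

```python
# Toy model of the DoS: a relay that sorts results by (created_at desc,
# id asc) and caps every REQ at a hard limit.
def relay_query(events, since=None, until=None, limit=500):
    matched = [e for e in events
               if (since is None or e["created_at"] >= since)
               and (until is None or e["created_at"] <= until)]
    matched.sort(key=lambda e: (-e["created_at"], e["id"]))
    return matched[:limit]

T = 1700000000
target = {"id": "ff" * 32, "created_at": T}  # sorts last at this timestamp

# attacker publishes many events with the SAME created_at but lower IDs,
# so all of them sort ahead of the target
spam = [{"id": f"{i:064x}", "created_at": T} for i in range(1000)]
events = spam + [target]

page = relay_query(events, until=T, limit=500)
assert target not in page  # buried behind the spam

# paginating with until/since cannot help: every spam event shares the
# target's timestamp, so the next page is the exact same 500 spam events
page2 = relay_query(events, until=page[-1]["created_at"], limit=500)
assert page == page2  # stuck: the target is unreachable except by ID
```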

this will also become a problem once nostr has a high event volume per second

the only way to fix this is to allow support for proper pagination

a flurry of discussions and things relating to my work lately have led me to something

there are two existing query types in nostr: REQ and COUNT

REQ has no concrete way of signalling how many events match a filter beyond what it has been hard coded to limit results to

COUNT gives you a total but no metadata beyond that, and there is no way to learn more other than making multiple COUNT queries

"since" and "until" fields in filters can be used to create a boundary that limits the number of results from a REQ, but it is inadequate
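for reference, the two shapes being compared look roughly like this on the wire (REQ per NIP-01, COUNT per NIP-45); the subscription ID and the since/until window values here are just placeholders:

```python
import json

# NIP-01 REQ: returns the matching events themselves, truncated at "limit",
# with no indication of how many more events exist beyond the cutoff
req = ["REQ", "sub1", {"kinds": [1], "since": 1700000000,
                       "until": 1700086400, "limit": 500}]

# NIP-45 COUNT: the relay answers ["COUNT", "sub1", {"count": <n>}],
# a bare total with no way to see which events, or where they fall
# inside the window, without issuing further COUNTs
count = ["COUNT", "sub1", {"kinds": [1], "since": 1700000000,
                           "until": 1700086400}]

print(json.dumps(req))
```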

i could swear i already made this suggestion before

but i'm gonna make it again, if i did, or for the first time if not

there should be a query that just spits back a list of all the event IDs that match a query, and if you set no limit, it just returns the whole set

if you consider that some follow events take as much as 512kb of data, and this is often a common size limit for individual events, then a message that size is good for somewhere around 7800 individual event IDs (64 hex chars plus quotes and a comma is about 67 bytes each), and it could be as simple as an array, so the overhead is just ["",...]

perhaps this is not sufficient though, maybe you want to include the timestamp next to each event ID... or you could define the full timestamp on the first event ID, and after that each entry is the offset in seconds from the previous one, which would mean the list looks something like

[[12345678,""],[1234,""],[2562,""], ... ]
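that delta scheme could be encoded and decoded like this — a rough sketch, assuming the results are sorted by created_at ascending, with hypothetical names throughout:

```python
# sketch of the delta-encoded ID list: the first entry carries an absolute
# created_at, every later entry the offset from the previous timestamp
def encode(results):  # results: list of (created_at, event_id), oldest first
    out, prev = [], None
    for ts, eid in results:
        out.append([ts if prev is None else ts - prev, eid])
        prev = ts
    return out

def decode(encoded):
    results, ts = [], 0
    for delta, eid in encoded:
        ts += delta
        results.append((ts, eid))
    return results

rows = [(12345678, "aa" * 32), (12346912, "bb" * 32), (12349474, "cc" * 32)]
enc = encode(rows)
assert [e[0] for e in enc] == [12345678, 1234, 2562]
assert decode(enc) == rows  # lossless round trip
```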

i'm inclined to say fuck the datestamps, i'm just gonna make a new req variant that returns the IDs instead of the full events as an array, and to keep with the style, it will just be

["REQID","subscriptionID","","", ... ]

the relay can already specify some size limit in its nip-11 relay information, so it can just stop right before that limit, and the user can query for the last event returned, take its timestamp as a "since", and use that to get the rest

nostr:npub1ntlexmuxr9q3cp5ju9xh4t6fu3w0myyw32w84lfuw2nyhgxu407qf0m38t what do you think about this idea?

if the query has sufficiently reasonable bounds: it is very unlikely you want more than 7800 events of a specific kind over a period of, let's say, the last day, and certainly not if you limit it to some set of npubs

but you would still know where the results end, and so long as you stick to the invariant of "this is what i have on hand right now", the question of propagating queries can be capped by the answer of "what i have". it is implementation internal whether or not you have a second layer, and whether you then go and cache the results of that query so next time you can send a more complete list

and i am not even considering this option

what if, instead of returning the IDs encoded in hex (50% efficiency versus the binary hash size), you sent them as base64 encoded versions of the event IDs? that gives you 75% efficiency, or in other words expands the hypothetical max results of just IDs from around 7800 to around 11000
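back-of-envelope for those capacities, assuming a 512 KiB message budget and JSON array overhead of quotes plus a separating comma per entry:

```python
import base64
import json
import secrets

LIMIT = 512 * 1024  # assumed 512 KiB max message size

raw = secrets.token_bytes(32)            # a 32-byte event ID
hex_id = raw.hex()                       # 64 chars -> 50% efficiency
b64_id = base64.b64encode(raw).decode()  # 44 chars -> ~75% efficiency

# bytes per JSON array entry: the quoted string plus one comma
hex_entry = len(json.dumps(hex_id)) + 1  # 67 bytes
b64_entry = len(json.dumps(b64_id)) + 1  # 47 bytes

print(LIMIT // hex_entry)  # 7825 IDs per message in hex
print(LIMIT // b64_entry)  # 11155 IDs per message in base64
```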

ws maximum message size is a problem so you should use multiple messages

it can be pretty low depending on the client

blossom imo as a protocol is garbage, as it tries to consolidate management (upload/delete/list) with retrieval of blobs

it is a big pain in the ass for scaling, look at any service and you will see cdn domains are separate from upload

blossom also makes no attempt to allow media optimization, and I believe it is an acceptable tradeoff to sacrifice integrity for reduced data usage if you can turn it off as needed

blobs should be identified by nostr event IDs, meaning you get metadata for free, and if a user wants their blob gone, they can issue a delete event and send it to all hosts

rehosting content becomes an explicit action

What I am referring to here is on the server side.

there are only 3 operations that matter

- write

- read

- delete

usually, the write operation will never issue the same key twice to different content (after a delete)

deletes are eventually consistent
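a toy sketch of a store with only those three operations, using content-addressed keys so the same key can never point at different content, even across a delete (all names hypothetical):

```python
import hashlib
from typing import Optional

class BlobStore:
    """Toy blob store with the three operations that matter."""
    def __init__(self):
        self._blobs: dict[str, bytes] = {}
        self._tombstones: set[str] = set()  # deletes settle eventually

    def write(self, data: bytes) -> str:
        # content-addressed: a given key always maps to the same bytes,
        # so writes never reissue a key to different content
        key = hashlib.sha256(data).hexdigest()
        if key not in self._tombstones:
            self._blobs[key] = data
        return key

    def read(self, key: str) -> Optional[bytes]:
        # a pure key lookup: trivially cacheable, CDN-friendly
        return self._blobs.get(key)

    def delete(self, key: str) -> None:
        self._tombstones.add(key)
        self._blobs.pop(key, None)

store = BlobStore()
key = store.write(b"hello")
assert store.read(key) == b"hello"
store.delete(key)
assert store.read(key) is None
```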

Files can be easily cached as the only operation is a key read, and can be served from high-throughput high-storage servers

Events cannot be easily cached

And then there are clients that shall not be named that don’t even recognize the existence of any emoji reactions, even if they don’t show them. nostr:note154vdznl8tygtp06z3ga2szzc8v589k87lzya0q4d5g6jdchjdh3qgzds8z

best option is to exchange your IoU tokens for real sats while you can over LN

nostr:nevent1qqstyvz9pjms5c9xac4nju26d2c7pw0ppufvt3xdng2ws8829mpne7c0tceul

33% of the OpenSats board has affiliation with companies known to actively damage open source initiatives. nostr:note19hn8jlyvlwwmm85059wfu9wuyqzl2kpn8zmsg9685szamvdcpe2sgskvt9

Proof of work does not create trust.

It is a mechanism that reduces an infinite amount of states into 1 “best” one. That is it.