Some client sent me a REQ that filtered hundreds of authors by short prefix, resulting in 180KB of generated SQL. Wasn't it this tool?
I'm building a tool to check the speed of relays. It sends complex queries, downloads several thousand notes, and reports how long that took. It usually completes in a few seconds, so it's not much of a burden on the relays!
If relay operators want, I can include them in my tests and share the results. If relay operators don't want to be tested, also let me know.
For the tests to work, the relay has to be compatible with the library I use (most are): https://github.com/holgern/pynostr
The relay should not have traffic shaping (throttling) active, because that makes the communication slower.
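For anyone curious, this is roughly the shape of one timed query, sketched loosely after the pynostr README. The relay URL, timeout, and limit are placeholders, the real tool's queries are more complex than a single kind-1 fetch, and method names may differ slightly between pynostr versions:

import time
import uuid

from pynostr.event import EventKind
from pynostr.filters import Filters, FiltersList
from pynostr.relay_manager import RelayManager


def time_relay(url: str, limit: int = 1000) -> tuple[float, int]:
    """Fetch up to `limit` text notes from one relay, return (seconds, events received)."""
    relay_manager = RelayManager(timeout=6)
    relay_manager.add_relay(url)
    filters = FiltersList([Filters(kinds=[EventKind.TEXT_NOTE], limit=limit)])
    subscription_id = uuid.uuid1().hex

    start = time.monotonic()
    relay_manager.add_subscription_on_all_relays(subscription_id, filters)
    relay_manager.run_sync()  # blocks until the relay answers or the timeout hits

    received = 0
    while relay_manager.message_pool.has_events():
        relay_manager.message_pool.get_event()
        received += 1
    elapsed = time.monotonic() - start

    relay_manager.close_all_relay_connections()
    return elapsed, received


if __name__ == "__main__":
    seconds, count = time_relay("wss://relay.example.com")  # placeholder relay URL
    print(f"{count} events in {seconds:.2f}s")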
Btw I thought my relays were fast, but they ranked 7th and 9th :(
Discussion
Mine doesn't do hundreds. What is a short prefix?
It's
pubkey LIKE 'abc%'
instead of
pubkey = 'full-64-char-long-author-pubkey'
According to NIP-01
(And of course searching by prefixes is more expensive)
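To see how that can turn into the 180KB of SQL from the first post, here is a rough, hypothetical sketch of how a relay might expand an authors filter into a WHERE clause. The table and column names are invented, the real relay's query builder is surely shaped differently, and the string interpolation is only for illustration (real code would use bound parameters):

def authors_to_where(authors: list[str]) -> str:
    # Hypothetical expansion of an "authors" filter into a SQL WHERE fragment.
    clauses = []
    for a in authors:
        if len(a) == 64:
            clauses.append(f"pubkey = '{a}'")      # full pubkey: index-friendly equality
        else:
            clauses.append(f"pubkey LIKE '{a}%'")  # short prefix: a more expensive prefix scan
    return " OR ".join(clauses)


# Hundreds of prefix authors in a single REQ add up to a very large statement.
prefixes = [f"{i:08x}" for i in range(500)]  # 500 made-up 8-char prefixes
sql = "SELECT * FROM events WHERE " + authors_to_where(prefixes)
print(f"{len(sql)} bytes of generated SQL")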
I can see some implementations don’t support it and some act inconsistently
For example, at nos.lol the following works:
["REQ", "test", {"authors": ["000000000332c7831d9c5a99f183afc2813a6f69a16edda7f6fc0ed8110566e6"]}]
As well as
["REQ", "test", {"authors": ["000000000332c7831d9c5a99f183afc2813a6f69a1"]}]
But the next one
["REQ", "test", {"authors": ["000000000332c7831d9c5a99f183afc2813a6f6"]}]
Produces an error:
["NOTICE","ERROR: bad req: uneven size input to from_hex"]