It kind of works with Unicode (see https://www.joelonsoftware.com/2003/10/08/the-absolute-minimum-every-software-developer-absolutely-positively-must-know-about-unicode-and-character-sets-no-excuses/).

And compared to the note text, hash, and pubkey, the number of bytes used for a kind doesn't really matter.

Discussion

You mean, solve it by allowing all characters?

Using numerals-only would reduce the incentive to claim ranges and offer a gazillion more combinations.

No defined range between U&A!iz6 and ++63%ddR

No, just allowing bigger numbers.

Maybe an example helps:

- 0..65534 uses 16 bits

- 65535..2^32-2 uses 32 bits

- 2^32-1..2^64-2 uses 64 bits

and so on. Small numbers use up less space than big ones. This is complicated for binary protocols, but not for nostr.
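One way to read that scheme (a hypothetical sketch, not part of any NIP) is that the all-ones value of each width acts as a sentinel meaning "the value continues in a wider field": a reader takes 16 bits, and if they are 0xFFFF it reads a 32-bit field, and so on. Something like:

```python
# Sketch of the sentinel-based variable-length kind encoding described above.
# Hypothetical illustration only; nostr itself sends kinds as JSON numbers.
import struct

U16_MAX = 0xFFFF        # sentinel: value continues in a 32-bit field
U32_MAX = 0xFFFFFFFF    # sentinel: value continues in a 64-bit field

def encode_kind(n: int) -> bytes:
    if n < U16_MAX:                      # 0 .. 65534 -> 2 bytes on the wire
        return struct.pack(">H", n)
    if n < U32_MAX:                      # 65535 .. 2^32-2 -> 2 + 4 bytes
        return struct.pack(">H", U16_MAX) + struct.pack(">I", n)
    # 2^32-1 .. 2^64-2 -> 2 + 4 + 8 bytes
    return (struct.pack(">H", U16_MAX)
            + struct.pack(">I", U32_MAX)
            + struct.pack(">Q", n))

def decode_kind(buf: bytes) -> int:
    (n,) = struct.unpack_from(">H", buf)
    if n < U16_MAX:
        return n
    (n,) = struct.unpack_from(">I", buf, 2)
    if n < U32_MAX:
        return n
    (n,) = struct.unpack_from(">Q", buf, 6)
    return n
```

For nostr none of this machinery is needed: kinds travel as JSON decimal numbers, which are already variable-length, so small kinds cost fewer bytes for free.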

Oh, right.

I kind of like the range discouragement inherent in random strings, but that's probably too wonky for most developers.

😁 now I am at a loss.

But it's a thing, isn't it?

Without randomness, people get greedy and aren't encouraged to conserve. That's how the habit of claiming whole ranges creeps in.

And everyone clustering at particular combinations like 9999999 or 7000.

So many numbers, but people will still overlap because they will gravitate to the same subset or claim ranges.

Ha, never thought about that. So kind could also just be a name.

It's a name or descriptor that consists only of numerals.

There's no significance or purpose to the kinds being numbers. 🤷‍♀️