What clients could do is recognize these URLs at the time they're typed/pasted and prepend the "nostr:" prefix instead of forcing everybody to recognize them at render time.
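
A minimal sketch of what that input-time normalization could look like, in TypeScript. The function name and the regex are mine, purely for illustration, not from any NIP or existing client:

```typescript
// Hypothetical sketch of input-time normalization. The function name and
// regex are illustrative only, not from any NIP or existing client.
const BARE_ENTITY = /(^|\s)((?:npub|nprofile|note|nevent|naddr)1[02-9ac-hj-np-z]{20,})/g;

function normalizeNostrRefs(text: string): string {
  // Prepend "nostr:" to bech32 entities typed or pasted without it, so the
  // render side only ever has to look for the "nostr:" prefix. Entities that
  // already carry the prefix aren't matched (they're preceded by ":" rather
  // than whitespace), so nothing gets double-prefixed.
  return text.replace(BARE_ENTITY, (_m, lead, entity) => `${lead}nostr:${entity}`);
}

// Run it once on paste, or right before publishing the event:
normalizeNostrRefs("gm npub1q3sle0kvfsehgsuexttt3ugjd8xdklxfwwkh559wxckmzddywnws6cd26p");
// -> "gm nostr:npub1q3sle0..."
```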

Even the big platforms do this with URLs when you type just a domain name: they prepend the "https://" prefix (until recently Twitter would famously prepend "http://" instead, which broke websites that didn't auto-redirect).

It's common knowledge that we should try to put most of the work on the write flow, because it happens only once, while the read flow happens dozens, thousands, millions of times. That's what most scalable software ends up doing.

Discussion

Bluesky also does this: as you type your rich-text left-wing opinion, the client builds a parser-friendly metadata structure that gets published together with your post. The post goes out as plain text plus that attached metadata, so it's easy for the render side to display.
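
Roughly, the structure looks like this (field names from memory; the authoritative shape lives in the app.bsky.richtext.facet lexicon, so treat this as an approximation):

```typescript
// Approximate shape of a Bluesky post record: the text stays plain and
// "facets" point at byte ranges that should render as links, mentions, etc.
type Facet = {
  index: { byteStart: number; byteEnd: number }; // which slice of the text
  features: Array<
    | { $type: "app.bsky.richtext.facet#link"; uri: string }
    | { $type: "app.bsky.richtext.facet#mention"; did: string }
  >;
};

const facets: Facet[] = [
  {
    index: { byteStart: 11, byteEnd: 30 },
    features: [{ $type: "app.bsky.richtext.facet#link", uri: "https://example.com" }],
  },
];

const post = {
  text: "read this: https://example.com", // plain text, nothing embedded
  facets,                                 // parsed once, at write time
};
```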

Here on Nostr we like to think we're smart, yet we have some of our best client developers wanting to do more and more fancy parsing on the render flow, slowing everything down, and then publicly shaming other clients into doing the same.

We should probably copy the Bluesky approach if we manage to keep the plaintext plain and the rich metadata optional. It's not very different from the imeta approach that nostr:npub1q3sle0kvfsehgsuexttt3ugjd8xdklxfwwkh559wxckmzddywnws6cd26p pointed out.
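
To make that concrete, here's one way it could look; the "rich" tag and its layout are invented for this example, nothing standardized, while the content stays exactly the plain text the user typed:

```typescript
// Hypothetical: a kind-1 note whose content is plain text, with an OPTIONAL
// tag carrying pre-parsed ranges, in the spirit of Bluesky facets / imeta.
// The "rich" tag name and its [name, start, end, type] layout are made up.
const event = {
  kind: 1,
  created_at: Math.floor(Date.now() / 1000),
  content:
    "check out nostr:npub1q3sle0kvfsehgsuexttt3ugjd8xdklxfwwkh559wxckmzddywnws6cd26p",
  tags: [
    ["rich", "10", "79", "mention"], // characters 10..79 are a profile mention
  ],
};

// A renderer that understands the tag can skip parsing entirely; one that
// doesn't still shows perfectly readable plain text.
```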

I think that's fine, but if we want the barrier to entry to be low, parsing on user input is just as much of a chore as parsing for display.

Parsing on input keeps the network sane and helps everybody. Whether you do it or not is your client's decision: self-contained and virtuous.

Parsing on display is Postel's law: trying to fix other people's mess while incentivizing them to make more and more of a mess until you can't fix it all. If you do it, you create work for everybody else and protocol bloat.