Maybe no one can finish an app because web sockets suck and the relay model doesn’t work? 🤷‍♂️

Discuss.


Discussion

the solution is caching, and the relay model is awesome!

You build apps?

bad ones 😝

Is that because of relays and caching? 🤣

Maybe you’re a great dev on dbs

Not really. The app I'm building on Nostr is my first "real" app, everything else has been crappy ruby scripts and shell scripts to simplify some parts of my work

Web sockets don’t scale?

I think the real reasons are different

What are they? 😉

Websockets have limitations. Most pages only need one or two connections. Pulling down data from many relays via websockets is, IMO, less than ideal and requires way too much computational overhead. This needs to change as TNP (the nostr protocol) matures. We don't need websockets. We won't need DNS. Nostr can and should cut out all extraneous legacy layers for the simplest and most versatile means of notes and other stuff transmitted by relays.

i'm working on the HTTP API right now. i've decided i'm not happy that my previous implementation wasn't exactly compliant with JSON Schema, so i'm adding some new codecs and annotations so the docs API renders precisely what the inputs and outputs are validated against. it's a bit of work but it will be worth it, i think.

by the way, the nostr schema for filters especially is difficult to implement correctly in #golang because of its nonstandard structure. you can't simply define a standard Go struct to cover, specifically, those #a etc tags, which are omitempty; i implemented them as a simple array with the first field being the key and the rest being the other elements of the array.

fiatjaf used some custom JSON code generator library for go-nostr but it's absurdly inefficient and slow compared to the code i wrote for handling this stuff.

as i see it, one of the key goals that nostr has is to replace the nonuniform, bespoke real time messaging protocols used in apps like MeisterTask and Figma, that allow collaborative workspaces. for this use case, it needs to be as efficient as possible because latency matters a lot.

Can you share the link to this nip ?

they would never approve one from me, i've publicly expressed my disdain for most of the people involved in the approval process.

doesn't matter anyway, i'm using an OpenAPI library called huma that generates a full and complete specification including (currently working on this part) JSON Schema that precisely specifies the types, key names and documentation for every parameter and result that comes from it.

so once i've completed what i consider to be the base spec (it will replicate all necessary functions, with only some small differences, like clients only needing to open one subscription channel, plus an endpoint to close specific subscriptions) you will be able to see it all at https://realy.mleku.dev/api. you can already look at the basic stuff that's done, but it's a work in progress; it will be more complete soon.

it's taking me a bit longer to do because being compliant with full openapi is actually a bit more complicated, in some ways, than the websocket json api of nip-01.

once it's complete and i have it debugged, anyone who wants to could write one by taking the openapi json/yaml spec, generating all of the handlers and putting their implementation behind it.

it's better than a nip. absolutely clear and unambiguous, and machine readable.

How do you do it without web sockets?

That's a question for someone smarter than I.

But... You can't use nostr directly on low-power SoCs or micros because of the overhead needed to run websockets. Heck, most e-paper e-readers can't handle websockets very well or at all.

So... Something else needs to replace them to lower the bar of entry to just above smoke signals.

Oh!

NDK blows. That's one reason why it's hard to get apps working. Once there are properly thought out and implemented DKs, this will facilitate having less friction for novice and veteran builders.

that has been my experience as well.

NDK and nostr-tools are very hard to use.

Tried nostr:npub1ye5ptcxfyyxl5vjvdjar2ua3f0hynkjzpx552mu5snj3qmx5pzjscpknpr's applesauce, and it's beautiful

Decentralising stuff is inherently much harder than its centralised counterpart. But I think Nostr will work, with websockets and very limited changes to the protocol.

The main issue right now is relay selection. The outbox model is too reliant on a few relays like purplepag.es.

We must generalise the use of nprofile more, in follow lists and in mentions.

Let’s use bit banging 😎

What’s that 😆

When bit banging, you toggle a signal on and off in code to produce a digital stream of 1s and 0s

Oh bits… I thought you were referring to some weird niche thing I wasn’t aware of 🤣

websockets do add some latency, but if used in the correct situations (long lived connections and/or where data requests are not predictable before connection), they work beautifully.

Having read a lot of others' code, the problem is that it's garbage. That's the problem: low-quality, unreadable crap.

yeah, i strongly believe that nostr would benefit from having a plain HTTP API. so i'm building one. i wrote a first draft a few months ago but i'm currently reworking it to be more compliant with standards for HTTP APIs, like using JSON Schema.

i'm also making it as compliant as possible with the relevant NIPs too, so it's easy for nostr devs to adopt it.

there will be a full and comprehensive openapi spec for it as well so for most devs it should be just a matter of taking that and generating the code for their project to use it.

If you wanted to you could build an http api in just a few hours. Relays are not complicated, the protocol is simple. The problem with the HTTP api is that you will lose the two way reactivity that websockets provide

HTTP has a thing for subscriptions called SSE (server-sent events). it would have a big benefit over websocket subscriptions because a) it starts instantly (no upgrade chatter) and b) a client only has to open one subscriber channel, which then carries the results of all subscription queries back. i do have to add an endpoint for cancelling specific subscriptions as well for that, but it's still doable

A makes sense to me.

I don't understand B however.

As for B I don't know, I assume server sent events have their own overhead?

Maybe I'm misunderstanding something?

Websockets also have just one subscription for each filter you want:

["REQ","dJ7RhHgcQD3-YisCO72QW",{"kinds":[30010],"#t":["animestr"]}]

which then results in events like:

["EVENT","dJ7RhHgcQD3-YisCO72QW",{ the event }]

["EVENT","dJ7RhHgcQD3-YisCO72QW",{ another event }]

what is difficult about subscriptions is the logic defining when one opens, and when it doesn't, and when it closes

these are defined mainly through the use of the limit. once the number of results exceeds the limit, the subscription is complete. if you don't set a limit, the subscription should remain open until you send a CLOSE

in my opinion, a dedicated endpoint you send a filter to, that ignores the limit, just opens a subscription and this is sent via the SSE channel the client has opened beforehand to receive it, and then you have a "close" endpoint that accepts the subscription identifier.

this also means that the subscription data format for SSE needs to be basically the same as an EVENT result, except you don't need the EVENT prefix sentinel; it can just be "subscription".

i think the SSE standard formatting includes a subscription identifier field also; it can be on a separate line
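the single-channel design described above can be sketched in Go. this is a toy, not realy's implementation; the name `formatSubEvent` and the choice to carry the subscription id in the SSE "event:" field are assumptions for illustration:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

// formatSubEvent frames one relay event for the client's single SSE
// channel: the "event:" field carries the subscription id on its own
// line, and "data:" carries the event JSON. This framing is a sketch
// of the idea in the thread, not a finished spec.
func formatSubEvent(subID, eventJSON string) string {
	return fmt.Sprintf("event: %s\ndata: %s\n\n", subID, eventJSON)
}

func main() {
	// minimal SSE endpoint: one long-lived GET response carries the
	// results for every subscription the client has opened
	h := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "text/event-stream")
		io.WriteString(w, formatSubEvent("dJ7RhHgcQD3-YisCO72QW", `{"kind":1,"content":"hi"}`))
	})
	srv := httptest.NewServer(h)
	defer srv.Close()

	resp, err := http.Get(srv.URL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Print(string(body))
}
```

a browser client would consume this with a single EventSource and demultiplex on the subscription id, which is where the "one subscriber channel" benefit comes from.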

> these are defined mainly through the use of the limit. once the number of results exceeds the limit, the subscription is complete. if you don't set a limit, the subscription should remain open until you send a CLOSE

that's not how it works. the limit is the **INITIAL number** of events you want, after which the relay will send EOSE. The subscription is still open at that point, and will send new events.

In order to close it you need to send a CLOSE message, or the relay has to send one to let you know it's closed.

no, subscriptions end once the limit number of events have been sent

if no limit was set, and it wasn't set as zero, then it stays open until you CLOSE it, or disconnect.

i spent quite some hours with mike dilger's relay tester fixing my relay logic to behave this way, i can assure you that's how it is designed to work. subscriptions are either CLOSEd or the limit is exceeded.

That's not correct. That's a relay or a client that doesn't follow the spec. Read nip-01; I just reread it to confirm:

> The limit property of a filter is only valid for the initial query and MUST be ignored afterwards. When limit: n is present it is assumed that the events returned in the initial query will be the last n events ordered by the created_at. Newer events should appear first, and in the case of ties the event with the lowest id (first in lexical order) should be first. Relays SHOULD use the limit value to guide how many events are returned in the initial response. Returning fewer events is acceptable, but returning (much) more should be avoided to prevent overwhelming clients.
>
> [...]
>
> ["EOSE", <subscription_id>], used to indicate the end of stored events and the beginning of events newly received in real-time.

I also tested it myself just now:

```js
const damus = new WebSocket("wss://relay.damus.io")
damus.onmessage = m => console.log(m.data)
// wait for the socket to open before sending, or send() throws
damus.onopen = () => damus.send(JSON.stringify(["REQ", "testing-subs", {
  kinds: [1],
  limit: 1
}]))
```

so what you're saying is that if the limit isn't exceeded in the initial results before the EOSE that the subscription is open indefinitely?

ok, but see, i'm a relay dev. i have a much better understanding of protocols than the average client dev, so it seems to me like a tarpit trap for client devs writing filter templates for queries if they don't know that.

> so what you're saying is that if the limit isn't exceeded in the initial results before the EOSE that the subscription is open indefinitely?

No. Not even close to what I'm saying.

```
["REQ", "testing-subs", { kinds: [1], limit: 1 }]
```

means create a subscription, called testing-subs, looking for kinds 1, start with 1 single event you have in your database, then keep going.

Running that query will result in the relay sending you exactly one event, followed by an EOSE to testing-subs, and then sending you new events every time a new kind 1 is published.
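put another way, the client has to track whether EOSE has arrived yet; limit only bounds the stored-events phase. a tiny illustrative state machine in Go (the `Sub` type and method names are made up for this sketch, not taken from any real SDK):

```go
package main

import "fmt"

// Sub sketches the client-side state implied by NIP-01: limit only
// bounds the initial stored-events query; after EOSE the subscription
// stays open and delivers live events until the client sends CLOSE.
type Sub struct {
	Stored []string // events received before EOSE (bounded by limit)
	Live   []string // events received after EOSE (limit is ignored)
	eosed  bool
}

func (s *Sub) Handle(msgType, payload string) {
	switch msgType {
	case "EVENT":
		if s.eosed {
			s.Live = append(s.Live, payload)
		} else {
			s.Stored = append(s.Stored, payload)
		}
	case "EOSE":
		s.eosed = true // end of stored events; sub remains open
	}
}

func main() {
	s := &Sub{}
	s.Handle("EVENT", "e1") // initial query result
	s.Handle("EOSE", "")
	s.Handle("EVENT", "e2") // real-time event, still delivered
	fmt.Println(len(s.Stored), len(s.Live))
}
```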

If you don't trust my code, despite its simplicity, here's the same result from nak, fiatjaf's (the creator of nostr) official nostr cli tool

you can see that it generates the exact same query I wrote:

and if even that is not proof, here is the code on

strfry:

https://github.com/hoytech/strfry/blob/542552ab0f5234f808c52c21772b34f6f07bec65/src/apps/relay/RelayReqWorker.cpp#L5

astro:

https://github.com/Nostrology/astro/blob/e28f1d9905b6ae7018161c53b71989cb5c1e385f/lib/astro_web/socket.ex#L24

> ok, but see, i'm a relay dev. i have a much better understanding of protocols than the average client dev

I've made this clear a lot: I'm not a good developer, but I am good enough to read the documentation and understand how it's supposed to work.

I know for a fact that I am correct, I have double checked with 5 different relays, they all work the way I said they do

ok, well, i just made sure my relay passed the tests written by mike dilger. i guess i probably wrote the logic correctly, but now you have brought it up, i might have to check that limits are not involved once subscriptions are open after an EOSE

I am so confident that I'm correct that if I am wrong, and am misinterpreting the NIP, I will give you 10,000 sats.

nostr:npub1gcxzte5zlkncx26j68ez60fzkvtkm9e0vrwdcvsjakxf9mu9qewqlfnj5z


nostr:npub1xtscya34g58tk0z605fvr788k263gsu6cy9x0mhnm87echrgufzsevkk5s

sorry for the mentions (will zap you if you provide your inputs)

yeah, i just checked my code. once it opens subscriptions it pays no mind to limit values.

i still think that this hybrid of query and subscription is hard to reason about though. i'd bet that a lot of clients and client SDKs get it wrong

What do you mean by garbage? Sending tons of messages by making tons of REQs?

+1

What's on the to-avoid list?

I would say being lazy is a red flag. I'll give an example.

One time my relay crashed because someone sent an event with an ID shorter than 64 characters...

After investigating, I found that the method event.CheckID (go-nostr library) wrongly assumed that IDs had exactly 64 characters.

I opened a PR but fuck sake!

no, that is fine. By garbage I mean a library that forgets to ping, or to pong, which leads to unpredictable disconnections; or race conditions that make your program panic; or bad handling of cancellations