yeah, it's a lot simplified, and i had an AI help me make it fully pass all of mikedilger's relay tests. it seems to be pretty solid now, and should be maybe one of the fastest relays around because of its very fussy and specific memory handling.

a lot of the event fields are not copied during decoding, mainly the tags and content fields, and the un-quote step also operates in-place. it used to handle the hex-encoded fields in-place as well, and maybe i should bring that back, but for now those just allocate an extra half as much new memory. it got a bit complicated sometimes, idk, i don't think it would give that much benefit.
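
to make that concrete, here's a rough sketch of the zero-copy idea (not the actual orly code): content and tag values stay as sub-slices of the raw JSON buffer, and simple escapes get rewritten over the same bytes so no second buffer is allocated. the escape handling here is deliberately minimal (no \uXXXX), just to show the shape of the trick.

```go
package main

import "fmt"

// unescapeInPlace rewrites simple escapes (\n, \t, \" and \\) over the same
// backing array and returns the shortened slice, so the caller keeps a view
// into the original buffer instead of a copy.
func unescapeInPlace(b []byte) []byte {
	w := 0
	for r := 0; r < len(b); r++ {
		c := b[r]
		if c == '\\' && r+1 < len(b) {
			r++
			switch b[r] {
			case 'n':
				c = '\n'
			case 't':
				c = '\t'
			default: // \" , \\ and anything else: keep the escaped byte
				c = b[r]
			}
		}
		b[w] = c
		w++
	}
	return b[:w]
}

func main() {
	raw := []byte(`hello \"world\"\nno copy`)
	fmt.Printf("%s\n", unescapeInPlace(raw))
}
```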

the encoder for the database is fast tho, faster than fiatjaf's, as is the JSON decoder for filters and events. the event decoder is almost as fast as decoding a binary event, i'm kinda proud of how well it works. wrote a state machine using goto statements :)
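
for anyone curious what the goto style looks like, here's a toy version (nothing like the real decoder, just the pattern): labels act as states and the scanner jumps between them instead of looping over a switch.

```go
package main

import "fmt"

// readKey scans `  "key": ...` and returns the key plus the remainder,
// using labelled states and goto instead of a loop-and-switch scanner.
func readKey(b []byte) (key string, rest []byte, ok bool) {
	i := 0

skipSpace:
	if i < len(b) && (b[i] == ' ' || b[i] == '\t' || b[i] == '\n') {
		i++
		goto skipSpace
	}
	if i >= len(b) || b[i] != '"' {
		return "", b, false
	}
	i++
	{
		start := i
	inString:
		if i < len(b) && b[i] != '"' {
			i++
			goto inString
		}
		if i >= len(b) {
			return "", b, false
		}
		return string(b[start:i]), b[i+1:], true
	}
}

func main() {
	k, rest, ok := readKey([]byte(`  "kind": 1`))
	fmt.Println(k, string(rest), ok)
}
```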

oh yeah, and the database indexes are designed so it doesn't decode events without already knowing they match. when i first started working with the badger-based event store that fiatjaf wrote, it had less than a complete set of indexes, so you often had to decode a lot of events just to see if they matched.
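
roughly the idea, as a sketch (the prefix byte and field widths are made up, not orly's actual key layout): if the index key already carries the kind, a pubkey prefix and created_at, a range scan can accept or reject candidates from the key bytes alone and only decode the events that survive.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// kindPubkeyTimeKey builds prefix | kind(2) | pubkey[0:8] | created_at(8),
// so a filter on kind + author + time range never touches the event blob.
func kindPubkeyTimeKey(kind uint16, pubkey [32]byte, createdAt int64) []byte {
	k := make([]byte, 0, 1+2+8+8)
	k = append(k, 0x03) // hypothetical table prefix for this index
	k = binary.BigEndian.AppendUint16(k, kind)
	k = append(k, pubkey[:8]...)
	k = binary.BigEndian.AppendUint64(k, uint64(createdAt))
	return k
}

func main() {
	var pk [32]byte
	fmt.Printf("% x\n", kindPubkeyTimeKey(1, pk, 1700000000))
}
```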

anyhow, yeah, i'm in the process of building a subscription management system for it where you just zap the relay's bot-npub. then i'm gonna extend the privileged privacy protection handling so kind 1 events with a specific tag are also treated as privileged, which lets it message you in your main feed: when you pay, it will tell you when your subscription expires and remind you a day before it does. the idea is that i don't have to write any UI for it, and combined with #jumble it would be pretty cool for community relays with membership fees.

i am about to start learning to work with typescript tho. dreading it but i'll get paid and later on there will be some fun stuff with building AI agents and whatnot. so i'm gonna do the front end work and finally be officially full stack. and then i can also use those skills to add UI to the relay.

Discussion

oh, i forgot, it has a spider too. it fetches all the "directory" events - follow lists, profiles, mute lists, relay lists - or optionally even pulls in all events from other relays, for the first degree follows of the designated owner npub as well as the second degree, the follows of the follows. i have like 70 follows, and if you set it to full spider it is fetching the events of some 10k npubs lol.
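
for reference, the "directory" set more or less maps onto the standard kinds below, and a spider pass is basically one REQ like this per batch of followed pubkeys (hand-rolled filter literal for illustration, not orly's internal types):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// filter is a minimal stand-in for a NIP-01 filter, just for this sketch.
type filter struct {
	Authors []string `json:"authors"`
	Kinds   []int    `json:"kinds"`
}

func main() {
	f := filter{
		Authors: []string{"<hex pubkey of a follow>"},
		// 0 = profile, 3 = follow list, 10000 = mute list, 10002 = relay list
		Kinds: []int{0, 3, 10000, 10002},
	}
	req, _ := json.Marshal([]any{"REQ", "spider-directory", f})
	fmt.Println(string(req))
}
```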

i also forgot another feature: it has a replication protocol so you can run several relays. you set each one up with a secret relay key and give it the pubkeys of the other relays in the cluster, and every event that goes into one gets pushed to all the others. this enables a high-availability or multi-location relay network; how you set that part up is however you want. a multi-IP-address DNS entry can be used, or you can just have them on like regional relay subdomains but have them all linked together as one database.
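
the fan-out itself is conceptually tiny, something like this shape (illustrative only; the real thing authenticates with the relay secret key and pushes over websockets to the peers identified by their pubkeys):

```go
package main

import "fmt"

type event struct{ ID string }

// fanOut queues every accepted event once per peer; in a real cluster a
// goroutine per peer would drain its queue over an authed websocket to
// that relay.
func fanOut(ev event, peers map[string]chan event) {
	for name, q := range peers {
		select {
		case q <- ev:
		default:
			fmt.Println("peer queue full, dropping for", name)
		}
	}
}

func main() {
	peers := map[string]chan event{
		"relay-eu": make(chan event, 16),
		"relay-us": make(chan event, 16),
	}
	fanOut(event{ID: "abc123"}, peers)
	fmt.Println(len(peers["relay-eu"]), len(peers["relay-us"]))
}
```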

very basic but it would be especially good for reliability and censorship resistance (eg, putting several relays up in several diverse jurisdictions)

Yeah, the spider seems to be working. So, it's a personal aggregator, right?

yeah, it's open ended how you might want to use it.

to really be just a personal aggregator you might want to have a mode where it only fetches first degree follows. make an issue on it if you would like that mode, it would be quick and easy to implement.

i'm just kinda focused right now on building the relay's zap-to-subscribe feature :) but you could easily twist my arm to limit the spider fetching to just one degree of follow.

Oh, I don't have that many follows and they don't have that many follows, so it's actually not very crazy.

Half of my follows are also me. 😅

But most people might have too many. I'll make an issue.

It writes! 🎉

yay

i just realised, because i wasn't expecting anyone to actually use it so soon, that i need to make sure it doesn't send back too many events in one go. a cap of 1000 should be ok i think. i may have already covered that. wouldn't want to open it up to such an easy DoS

oshit i didn't actually implement the writing of events in the response haha

tag v0.4.8 actually implements the events endpoint now. haha it was running the search and returning null.

it should now return up to 512 events as an array of event objects.

this is all handled by the huma library automatically, so it's not so hyper-optimized on the encode step.

i did that to make it more openapi idiomatic.
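
for anyone wondering what "handled by huma" means in practice, a minimal sketch of a huma v2 style endpoint looks roughly like this (the route, types and cap wiring are made up for illustration, not the actual orly handler):

```go
package main

import (
	"context"
	"net/http"

	"github.com/danielgtaylor/huma/v2"
	"github.com/danielgtaylor/huma/v2/adapters/humago"
)

// Event is a stand-in for a nostr event; just enough fields for the sketch.
type Event struct {
	ID      string `json:"id"`
	Kind    int    `json:"kind"`
	Content string `json:"content"`
}

// EventsOutput wraps the response; huma serializes Body as the JSON body.
type EventsOutput struct {
	Body []Event
}

// queryStore is a placeholder for the actual event search.
func queryStore() []Event { return nil }

func main() {
	mux := http.NewServeMux()
	api := humago.New(mux, huma.DefaultConfig("orly-sketch", "0.0.1"))

	huma.Register(api, huma.Operation{
		OperationID: "list-events",
		Method:      http.MethodGet,
		Path:        "/events",
	}, func(ctx context.Context, _ *struct{}) (*EventsOutput, error) {
		results := queryStore()
		if len(results) > 512 {
			results = results[:512] // hard cap on one response
		}
		return &EventsOutput{Body: results}, nil
	})

	_ = http.ListenAndServe(":3334", mux)
}
```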

also, the owner npub that you set doesn't necessarily have to be your own. you can make another one and curate the list manually.

this is why i made it this way: it uses very commonly supported event types to function as a relay whitelist.

that is, you could make a second account in like nostrudel or a different browser profile and all it does is control that list.

Ah, okay. So, can be sort of secret through obscurity.

yeah, if you set "ORLY_PRIVATE" and/or set "ORLY_SPIDER" to "none", it won't run the spider. but the whitelisted users will still be able to post to the relay, and it will work via outbox for other users who set the relay in their relay list

ah yeah, so, just to explain, this ties into the feature i'm working on now.

when the zap-relay-npub-to-subscribe feature is enabled, it will add a subscription identity secret key which you link to a LUD16 LN address, plus a connection to your NWC-capable LN node (alby hub has this). the subscription management npub will automate updating its follow list, and that list then feeds the relay's whitelist, enabling the user and whoever they follow.

the idea was it's very simple to create a nostr client that operates an nsec just for this purpose, and then orly will automagically adjust the relay whitelist as requested by the owner.

for a pure pay-to-play relay configuration it could be the only owner in the list. it's just flexible in that it also allows you to use it as a personal relay and aggregator, and with the second degree it's auto-inbox-capable for anyone you follow, if their client pushes DMs to the designated inbox relays.
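
the whitelist side of that is dead simple, roughly like this (illustrative, not the actual code): the subscription management npub maintains a normal kind 3 follow list, and the relay rebuilds its ACL from the p tags whenever that list changes.

```go
package main

import "fmt"

// whitelistFromFollowList pulls the followed pubkeys out of a kind 3
// event's p tags, giving the relay its ACL set.
func whitelistFromFollowList(tags [][]string) map[string]bool {
	acl := make(map[string]bool)
	for _, t := range tags {
		if len(t) >= 2 && t[0] == "p" {
			acl[t[1]] = true
		}
	}
	return acl
}

func main() {
	tags := [][]string{{"p", "a1b2..."}, {"p", "c3d4..."}, {"client", "orly"}}
	fmt.Println(len(whitelistFromFollowList(tags)))
}
```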

Okay, well it's running. I'm in. ☺️

GN

i implemented the two features, you'll see them on the issues.

- can now configure the spider frequency

- can now restrict the non-ACL event spidering to only go first degree (ie, you can spider everything but just from your first degree follows)

- "directory" only spidering now also includes "guest" mute lists (guest = follows of follows), just for completeness so when they use the relay they get this list. most annoying to have it disappear or be unavailable.

now available on v0.4.11

i just made a small change: if directory is on but second degree is off, it still fetches *directory* events for the second degree (otherwise guest users would not see their profiles and lists, and whitelisted users would not have these important events available for their follows)

now, back to your regularly scheduled programming of NWC subscription by zap implementation

just had a random thought too... when users post events referring to other events, if the referenced event entities have relay hints, the relay could fetch the referred-to event as well... unless it is on the owner's mute lists. this would end the "event not found" message when you're using a client that doesn't parse that entity and fetch it for you.
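
pulling the hints out would be the trivial part; a sketch of just that bit (tag layout per NIP-10/NIP-18, i.e. ["e" or "q", <event-id>, <relay-url>, ...]):

```go
package main

import "fmt"

type hint struct{ ID, Relay string }

// relayHints collects event-id + relay-url pairs from "e" and "q" tags
// that actually carry a relay hint in the third position.
func relayHints(tags [][]string) []hint {
	var out []hint
	for _, t := range tags {
		if len(t) >= 3 && (t[0] == "e" || t[0] == "q") && t[2] != "" {
			out = append(out, hint{ID: t[1], Relay: t[2]})
		}
	}
	return out
}

func main() {
	tags := [][]string{
		{"e", "5c83...", "wss://relay.example.com", "reply"},
		{"p", "82fe..."},
	}
	fmt.Println(relayHints(tags))
}
```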

probably not very important tho. but it would be a nice feature.

#ORLY relay is all about privacy, personal, community/association and business relay use cases. the spidering stuff is more about the first two use cases tho.

Yeah, I'm thinking about forking it and also turning it into a broadcaster, with a sending-queue, so that I only ever need to use orly in a client.

Adding the feature of auto-syncing in the background, depending upon the quality of the Internet connection. And BT-LE support.

Just pack all of my network-y stuff and business logic into Orly and then the client is just application logic. And then do the same with Citrine.

Then they're like local filter.nostr.wine relays.

i have also thought about this "relay as proxy" idea, but the limitations of nip-42 auth make it difficult.

but if you added an HTTP proxy to orly and configured your client apps to use that proxy, then it can intercept requests to itself, ok, fine, and your client can auth to remotes through it, and then orly takes over and can publish to them with auth enabled.

this is a bit of a complicated, advanced feature. part of the reason why i haven't done it is because HTTP proxy auth methods are pretty weak and i wouldn't trust that on a remote server.

but it could be done.

extending nip-42 auth and nip-01 req/event methods could also fix this problem by enabling relays to issue an auth token that can be used in place of the regular auth flow, and then the relay can proxy these queries, cache the results and then forward them back to you.

there are multiple ways to solve this problem, but the first version with http proxy interception is probably the most practical right now.

I wouldn't do this on a remote relay, yeah, too insecure. It's actually building a tiny backend-client into the relay, to handle traffic cross-client.

ah yes, that's the easy solution: local only, and configure it with your nsec so it can auth to remote relays as you. then it can read from and write to the relays in your relay list when they have auth required.

that's probably something you can actually feasibly do without having to step outside of the confines of the nostr protocol too.

i definitely want that PR merged if you make it!

if the changes you make fit with the existing code, do please PR them back :)

for the problem of proxying auth to paid relays tho, as i mentioned, probably the best solution is an http proxy interceptor that captures auth messages destined for outside, and then once it has an authed socket it can fetch on that channel as well.

it's complex stuff tho, advanced network programming. i totally want this too, and have done for at least a year now, but it's complicated to implement. same with the thing you are saying about internet connection monitoring. this requires some glue into system services like network connection lists, plus a ping/bandwidth monitoring tool to dial back the activity when it disconnects or is on a thin/metered connection.

whatever you manage to do tho, if you think it can fit with the existing repo, PR it back when you have it working, and i'll merge up ASAP.

i'm not an ambitious man, but i guess i could say that i want orly to be the new strfry. and then, ideally, to have a whole business model around using this tech so that i can pay devs to improve and maintain it and build out auxiliary services. that would be rad. low key.

I've built multiple versions of the network monitoring into clients already. I think next-alex already has it running. 5 minute intervals with a background auto-sync into the local cache.