I'm beta-testing the new #orly relay, from nostr:npub1fjqqy4a93z5zsjwsfxqhc2764kvykfdyttvldkkkdera8dr78vhsmmleku .

https://github.com/mleku/orly

It features:

* relay clusters that auto-sync

* timed auto-spidering of a custom relay list, for up to 2 degrees of npub separation

* AUTH and support for most NIPs

* super-fast Badger implementation

* REST API (yes, Nostr over HTTP)

* SSE (server sent events, unidirectional HTTP streaming)

Try it out, give it a spin, and kick the tires a bit. ☺️

nostr:nevent1qvzqqqqqqypzqnyqqft6tz9g9pyaqjvp0s4a4tvcfvj6gkke7mddvmj86w68uwe0qyghwumn8ghj7mn0wd68ytnhd9hx2tcqyzhzkpvgptmzd8k8tf2crkf9k29k6w0e2cuedpar3dfrwmndu5up28vkkty

Discussion

Damn. That's fucking fast. Zoom zoom.

would be awesome to do a benchmark... test how fast the relay swallows a trove of events and then how fast it searches a variety of filters over the trove that has been uploaded.

i'm kinda too busy with feature creep right now haha, but i'm pretty sure it's extremely responsive and fast at accepting and delivering events.
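A sketch of the shape such a benchmark could take: bulk-ingest a trove of events, then time a set of filter queries against it. The `Relay` interface here is hypothetical, and the in-memory dummy stands in for a real websocket connection to a running relay:

```go
package main

import (
	"fmt"
	"strings"
	"time"
)

// Relay is a hypothetical stand-in for a relay connection; a real
// benchmark would publish and query over a websocket instead.
type Relay interface {
	Publish(event string)
	Query(filter string) []string
}

// memRelay is an in-memory dummy so the harness is runnable as-is.
type memRelay struct{ events []string }

func (m *memRelay) Publish(ev string) { m.events = append(m.events, ev) }

func (m *memRelay) Query(filter string) []string {
	var out []string
	for _, ev := range m.events {
		if strings.Contains(ev, filter) {
			out = append(out, ev)
		}
	}
	return out
}

// benchmark times a bulk ingest followed by a set of filter queries:
// the "swallow a trove, then search it" workload described above.
func benchmark(r Relay, events, filters []string) (ingest, query time.Duration) {
	start := time.Now()
	for _, ev := range events {
		r.Publish(ev)
	}
	ingest = time.Since(start)

	start = time.Now()
	for _, f := range filters {
		r.Query(f)
	}
	query = time.Since(start)
	return
}

func main() {
	events := make([]string, 0, 10000)
	for i := 0; i < 10000; i++ {
		events = append(events, fmt.Sprintf(`{"kind":1,"content":"note %d"}`, i))
	}
	ingest, query := benchmark(&memRelay{}, events, []string{"note 1", "note 42"})
	fmt.Printf("ingest: %v, query: %v\n", ingest, query)
}
```

Swapping the dummy for a websocket-backed implementation of the same interface would let the identical workload run against any relay.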

It's like a #NostrHub, now.

Then I can just have Orly/Citrine and TheForest as inboxes, now, right? 🤔

nostr:npub1w4uswmv6lu9yel005l3qgheysmr7tk9uvwluddznju3nuxalevvs2d0jr5 the background auto-spidering is pretty cool, amiright?

yeah it is kinda cool but i have 10k users in my second degree follow graph and there's too much rubbish in there so i turned it off. but that feature you requested, to only fetch the owner's follow events, will fix that.

which is why i'm implementing it tomorrow.

If this is of interest, I can put a PR up with some initial benchmarks.

Great job with this! 👏

that would be most welcome :)

nostr:nprofile1qythwumn8ghj7mtvv44h2tnwdaehgu339e3k7mf0qyghwumn8ghj7mn0wd68ytnzv9hxgtcqypxgqqjh5ky2s2zf6pyczlptm2kesje953ddnak66ehy05a50caj75ukn0y here you go: https://github.com/mleku/orly/pull/4

I originally ran the benchmark on v0.4.8, but when I pulled your recent changes from v0.4.14, I saw a large performance improvement. Great job!

very nice. merged. i figure this tool can be pointed at other relays for comparison also.

i'm surprised how fast it's saying the events were published tho. if you use the import tool on a trove of events of around 117k, it takes about 15 seconds to swallow the whole lot, which means it's more like 8000 events/s
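The arithmetic behind that figure, as a quick check:

```go
package main

import "fmt"

func main() {
	const events = 117_000.0 // size of the imported trove
	const seconds = 15.0     // time to swallow the whole lot
	fmt.Printf("%.0f events/s\n", events/seconds) // 7800 events/s
}
```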

but maybe you have faster, next-generation hardware; my pc has 3-year-old tech in it. in my recent previous project i used a hetzner server that was about 3-4x faster at everything, on a large database iteration (comparing a set of 1000 records to each other, N(N-1) operations, something like, what was it, 400,000 operations in total). this included standard library JSON decoding. i was going to optimize that stuff with a decoder cache, but it was working well enough processing one row at a time.
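For reference, the comparison count for N = 1000 depends on whether each ordered pair is compared or each pair only once; the ~400,000 recalled above is in the ballpark of the unordered count:

```go
package main

import "fmt"

func main() {
	n := 1000
	ordered := n * (n - 1)   // every record against every other record
	unordered := ordered / 2 // each pair compared only once
	fmt.Println(ordered, unordered) // 999000 499500
}
```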

What's your hardware? I'm running on an NVIDIA RTX 3060 with i7 CPU and 32 GB RAM.

# System Details Report

---

## Report details

- **Date generated:** 2025-08-05 18:53:35

## Hardware Information:

- **Hardware Model:** Primux Tech Primux_PTIOX

- **Memory:** 64.0 GiB

- **Processor:** AMD Ryzen™ 5 PRO 4650G with Radeon™ Graphics × 12

- **Graphics:** AMD Radeon™ RX 7800 XT

- **Disk Capacity:** 4.0 TB

## Software Information:

- **Firmware Version:** F19c

- **OS Name:** Ubuntu 24.04.2 LTS

- **OS Build:** (null)

- **OS Type:** 64-bit

- **GNOME Version:** 46

- **Windowing System:** X11

- **Kernel Version:** Linux 6.14.0-27-generic

yeah, i underestimated how old it was

https://search.brave.com/search?q=AMD+Ryzen%E2%84%A2+5+PRO+4650G&source=desktop&conversation=030c13df9874830d2b8157&summary=1

came out in 2020. the video card came out end of 2023

obviously if i upgrade my motherboard/memory/cpu i'll probably see a big jump in performance, at least double if not more.

Those benchmarks are just for encoding/decoding in memory. The 8k/s import speed includes database writes, which are way slower than pure codec operations.

what i was more interested in was a tool where you run the relay, it hurls events at it, then queries it and times the full operation. such a tester could then be pointed at any relay implementation. you could also add a script that builds and installs khatru and relayer, and since the secp256k1 library install script in scripts already pulls in all the C/C++ dependencies, it could probably cover strfry and the rust relay as well by adding a rustup installer.

i might do it myself at some point but feel free to do it if you like. it shouldn't be hard to add this to it.

seeing a side-by-side comparison of all the relays would be great, both for gauging raw performance and for seeing how each relay stacks up against the others on the same test workloads.

That's interesting... I'll work on this.

i couldn't get it all to run, anyway. as we discussed, once it's all wrapped in a docker container it will work for everyone.

now that i've moved all those pesky serialization calls in the logging into closures, when logging is at the default level it will go as fast as it can.

i'm really curious to see how that works. i'm also curious how it comes to be that relayer is so freakin slow. and lol strfry. really. my goodness it's awful, and it's one of the most commonly deployed relays? wow.

probably lucky we haven't had the massive influx that some keep on complaining about.

ah.

still, that's incredibly fast. i knew they were fast because i already wrote a benchmark for the json and binary codecs and they take tens of microseconds on my hardware.

damn, now i dang want an upgrade of my mobo/ram/cpu