Adam Ritter | rbr.bio
6e3f51664e19e082df5217fd4492bb96907405a0b27028671dd7f297b688608c
Creator of rbr.bio and nostr-relaypool-ts

New features for developers in rbr.bio: info.json returns the information presented on an author's page (and a bit more): metadata, contacts, number of followers, and the metadata and write relays of followed authors.

This makes it easier to develop a very light client. Changing the server-side HTML all the time doesn't make sense for my development workflow, but the endpoint can be used by other clients as well.
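A minimal sketch of consuming that endpoint from TypeScript. The URL layout and the response shape are assumptions based on the description above, not a documented contract:

```typescript
// Sketch of fetching info.json for an author from rbr.bio.
// Both the path and the field names below are assumptions.
interface AuthorInfo {
  metadata?: Record<string, unknown>;   // kind-0 profile metadata
  contacts?: string[];                  // followed pubkeys (kind-3)
  followerCount?: number;
  followedAuthors?: {
    [pubkey: string]: {
      metadata?: Record<string, unknown>;
      writeRelays?: string[];
    };
  };
}

async function fetchAuthorInfo(pubkeyHex: string): Promise<AuthorInfo> {
  // Hypothetical URL layout: https://rbr.bio/<pubkey>/info.json
  const res = await fetch(`https://rbr.bio/${pubkeyHex}/info.json`);
  if (!res.ok) throw new Error(`info.json request failed: ${res.status}`);
  return (await res.json()) as AuthorInfo;
}
```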

BTW it's crazy that here I'm getting real answers from real people... on Twitter I was just not "important enough", so I didn't even bother to try to communicate.

Thanks! I'm sure we'll hear more about it if it takes over other relays :)

Thanks, it's crazy that I never heard of LMDB even though it's old software

Hey guys, #[2] was talking about a fast relay server, but I couldn't find it in the awesome-nostr links. Do you guys know what he was talking about?

Verifying my Nostr Nests identity: 4Rlqa72a9eUYdHi_OvOBL4kwEAzrcBacL1oPJC9rtvQ

https://nostrnests.com

Replying to fiatjaf

One advantage of https://github.com/nostr-protocol/nips/pull/158 over NIP-26 for key security is that it only requires a single app to monitor key invalidations and update your list of followed keys for a key invalidation/rotation to mostly succeed.

NIP-26, by contrast, requires all apps and relays to support it for it to work.

However, NIP-26 could work with much less than 100% support if it were only used for sporadic delegations in niche use cases, or for grandfathering keys in custodial services.
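For reference, the NIP-26 support that every app and relay would need boils down to validating a delegation tag. A minimal sketch, assuming the tag layout from NIP-26 (["delegation", delegator pubkey, conditions, token]) and using the @noble libraries; matching the conditions themselves (kind/created_at ranges) is left out:

```typescript
import { schnorr } from "@noble/curves/secp256k1";
import { sha256 } from "@noble/hashes/sha256";
import { utf8ToBytes } from "@noble/hashes/utils";

// Per NIP-26, the token is a Schnorr signature by the delegator over
// sha256("nostr:delegation:<delegatee pubkey>:<conditions>").
function hasValidDelegationToken(event: {
  pubkey: string;      // delegatee: the key that signed the event
  tags: string[][];
}): boolean {
  const tag = event.tags.find((t) => t[0] === "delegation");
  if (!tag) return false;
  const [, delegator, conditions, token] = tag;
  const delegationString = `nostr:delegation:${event.pubkey}:${conditions}`;
  const hash = sha256(utf8ToBytes(delegationString));
  try {
    return schnorr.verify(token, hash, delegator);
  } catch {
    return false; // malformed hex or signature
  }
}
```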

I definitely want to generate keys by hardware wallet.

One question that I don't see answered is why not just always create a new derivation path instead of needing to contain the next key.

That way clients could verify that the 100th derivation path is greater than the 20th, so the 20th is invalidated, and it would need less storage on the clients.
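A rough sketch of the storage side of that idea, assuming a client could somehow learn a verified (master identity, derivation index) pair for each key (how that proof itself would work is not specified here):

```typescript
// A client only has to remember the highest derivation index seen per master
// identity; any key with a lower index counts as invalidated/rotated.
const highestIndexSeen = new Map<string, number>(); // masterId -> index

function observeKey(masterId: string, derivationIndex: number): void {
  const current = highestIndexSeen.get(masterId) ?? -1;
  if (derivationIndex > current) {
    highestIndexSeen.set(masterId, derivationIndex);
  }
}

function isInvalidated(masterId: string, derivationIndex: number): boolean {
  return derivationIndex < (highestIndexSeen.get(masterId) ?? -1);
}
```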

Are you working on putting it into @npub1wnwwcv0a8wx0m9stck34ajlwhzuua68ts8mw3kjvspn42dcfyjxs4n95l8is? Or what kind of experiment are you thinking about?

My top priorities are:

- Data freshness (right now data is about 1 hour old)

- Data availability (some of my followers seem to be missing, I don't know why)

- Latency (I already have an EU server at eu.rbr.io, but I didn't set up a load balancer yet)

- Example of how to use it / putting it into the RelayPool library for easier integration into clients

Proxies are great, but the problem to solve there is IP-based banning, which can only be solved by paying a small amount of money for reading, plus delegation.

What I'm doing (rbr.bio) is a way to at least know where to read from: you can get the metadata or even just the write relays (just added now as JSON :) ).

It means that the clients just have to read from a few of the write relays to get the data.
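A sketch of what that could look like with nostr-relaypool: fetch an author's write relays from rbr.bio, then subscribe to just those relays. The JSON path is hypothetical (the post doesn't give the exact URL), and the subscribe() shape follows the nostr-relaypool README as far as I recall it:

```typescript
import { RelayPool } from "nostr-relaypool";

async function subscribeToAuthor(pubkeyHex: string) {
  // Hypothetical path for the "write relays as JSON" endpoint mentioned above.
  const res = await fetch(`https://rbr.bio/${pubkeyHex}/writerelays.json`);
  const writeRelays: string[] = await res.json();

  const pool = new RelayPool(writeRelays);
  pool.subscribe(
    [{ authors: [pubkeyHex], kinds: [1], limit: 50 }], // recent text notes
    writeRelays,
    (event, isAfterEose, relayURL) => {
      console.log(`note from ${relayURL}:`, event.content);
    }
  );
  return pool;
}
```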

One of the interesting things would be to make ranking more configurable and separate, so that we can play with it easily.

One example is that I would prefer to see posts more often from people whose posts I like. But there are a lot of ranking choices to be made, and A/B testing is super important (having multiple ranking algorithms and switching between them would be even cooler, but I know that's not the highest priority in client development right now)
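As an illustration only (none of this is an existing API), a pluggable ranker could be as simple as a scoring function that the feed code sorts by, which makes swapping rankers or A/B testing them cheap:

```typescript
// A ranker maps a note to a score; the feed just sorts by that score.
type Ranker = (note: { authorPubkey: string; createdAt: number }) => number;

// Plain reverse-chronological ranking.
const recencyRanker: Ranker = (n) => n.createdAt;

// Boost authors whose posts I like more often, as described above.
// The one-hour-per-like weighting is purely illustrative.
function makeAffinityRanker(likesByAuthor: Map<string, number>): Ranker {
  return (n) => n.createdAt + (likesByAuthor.get(n.authorPubkey) ?? 0) * 3600;
}

function rankFeed<T extends { authorPubkey: string; createdAt: number }>(
  notes: T[],
  ranker: Ranker
): T[] {
  return [...notes].sort((a, b) => ranker(b) - ranker(a));
}
```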

#[0] Pagination for rbr.bio followers implemented

It's great to feel the non-toxic people vibe here. I guess the toxic people will arrive when Nostr gets over 10 million users

#[0] https://rbr.bio/ now also shows the 100 most followed followers for each user (no pagination yet)

rbr.bio now has search functionality. The service is also more stable, and data is updated every hour. As I would like it to be the best generic contacts and metadata server, the data has to be live, of course.

*test* _markdown_

My favorite search engine is [Duck Duck Go](https://duckduckgo.com "The best search engine for privacy").