Wrote a quick and dirty trending notes concept today powered by our aggregator relay. It tracks replies, reposts, reactions, zap count, and zap amount total on an hourly basis for all notes and allows queries over a time period with sums. It will also support overall counts.

Don’t have anything to show yet since it’s just a database but I’m thinking I’ll expose it via API and an ugly front end. Would this be useful for anyone?
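To make the hourly rollup concrete, here is a minimal sketch using SQLite. The actual schema isn't public, so every table and column name below (`note_metrics`, `zap_msats`, etc.) is made up for illustration; the point is just "one row per note per hour, summed over a window":

```python
import sqlite3

# Hypothetical schema for the hourly rollup described above; real
# table/column names in the aggregator are not public.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE note_metrics (
        note_id    TEXT,
        hour       INTEGER,   -- unix timestamp truncated to the hour
        replies    INTEGER,
        reposts    INTEGER,
        reactions  INTEGER,
        zap_count  INTEGER,
        zap_msats  INTEGER,
        PRIMARY KEY (note_id, hour)
    )
""")

rows = [
    ("note1", 1700000 * 3600, 2, 1, 5, 1, 21000),
    ("note1", 1700001 * 3600, 3, 0, 4, 2, 50000),
    ("note2", 1700001 * 3600, 1, 0, 2, 0, 0),
]
conn.executemany("INSERT INTO note_metrics VALUES (?,?,?,?,?,?,?)", rows)

# "Queries over a time period with sums": totals per note since a cutoff.
since = 1700000 * 3600
totals = conn.execute("""
    SELECT note_id,
           SUM(replies), SUM(reposts), SUM(reactions),
           SUM(zap_count), SUM(zap_msats)
    FROM note_metrics
    WHERE hour >= ?
    GROUP BY note_id
""", (since,)).fetchall()
print(totals)
```

Dropping the `WHERE` clause gives the overall counts mentioned above.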


Discussion

I’d love to make it available through a relay so any user could use it on any client, but most (all?) clients sort notes by timestamp, so it’s hard to actually “rank” the notes.

It can still provide the “trending” notes for the last X hours, but they will get sorted by time client-side, not by popularity.

Having it on a relay would be cool; you could view just that stream with any client that supports a global feed directed at a single relay. And a bot could hang out in global and boost/report on the trends.

Yes, I will offer it as a relay; however, clients are going to sort whatever notes we give them by created_at (unless they create a separate view).

So I can say “here are the top 50 notes from the last 4 hours,” but beyond that they won’t be ranked by popularity, only by time.

How about a new TOP verb for relays that would accept filters like REQ does but return a list of ids of the "best" events? I have it implemented on relay.nostr.band for kinds 0 and 31990, and it's already used in a couple of places in our clients. You would get, say, the top 100 ids and then load the events themselves in batches for pagination.

Requires new code on clients, of course, but it seems like a good general approach.
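By analogy with REQ, the wire exchange for such a TOP verb might look like the sketch below. The thread doesn't specify the exact format relay.nostr.band uses, so treat the message shapes as a guess:

```python
import json

# Client -> relay: hypothetical "TOP" verb carrying a REQ-style filter.
request = json.dumps(["TOP", "sub1", {"kinds": [0], "limit": 100}])

# Relay -> client: just the ranked ids. The events themselves would be
# fetched afterwards with ordinary REQs, in batches, for pagination.
response = json.dumps(["TOP", "sub1", ["id1", "id2", "id3"]])

verb, sub_id, ids = json.loads(response)
page_size = 2
pages = [ids[i:i + page_size] for i in range(0, len(ids), page_size)]
print(pages)  # [['id1', 'id2'], ['id3']]
```

Because the relay returns ids in rank order, the client can display them in that order directly instead of re-sorting by created_at.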

I’m into it, thanks for pushing us forward. Happy to adopt whatever standard you guys are using/building. Interoperability of our services is key.

Try feeds.nostr.band/popular; it only serves notes with 10+ interactions.

Cool, I was able to try this with my new Amethyst patch. It works, I think, although my client doesn't see the 10 reactions for most of these, which is interesting. Makes me wonder how many reactions I'm missing due to blastr effects.

Try adding relay.nostr.band to your relay list. Although Amethyst is pretty aggressive in its spam filtering, so it might just block those interactions and not count them.

There is a problem with any type of aggregation at the current stage. Any relay, even one making the best attempt to collect all the data from all other relays, will never have a complete picture of events. And that's rather good for the network.

However, aggregation in this case becomes a source of truth that is flawed by default. Of course, it may be used for some estimates, but it locks you to a specific relay instance.

On the other hand, a client aggregating all the events from different relays (say, regarding zaps to a specific note) is very inefficient and slow, but it is able to find the “source of truth” on its own.

Yes, totally agree. This is the main trade off.

I am not (and don’t want to be) a client developer, so a lot of the innovation I can experiment with on the relay side is inherently centralized.

The good news is the data is public so users don’t have to blindly trust our counts, they can verify themselves. I think we will continue to see an increase in the availability of caching/archival services so that at least there are many sources of truth.

The centralized parts of nostr should definitely evolve too; the main question is how to properly integrate them into the decentralized design.

Make an API and approach Kieran; he was kind enough to add our and Semisol's API outputs on the Search page of Snort.

I just thought of something nobody is really doing yet for trending: a trending list from just my follows, or those close in my network, i.e. the most liked/zapped/commented notes, but just for my network. Would be handy.

True! We could do something like this using logic similar to the follows+follows network filter.
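A minimal sketch of "trending among my network": restrict the ranking to authors in the follow list (or follows-of-follows, which in practice would be derived from kind-3 contact lists). All names and scores below are made up:

```python
# Hypothetical follow sets; in a real client these would come from
# kind-3 contact-list events.
follows = {"alice", "bob"}
follows_of_follows = {"carol"}
network = follows | follows_of_follows

# Fabricated notes with a made-up engagement score.
notes = [
    {"id": "n1", "author": "alice", "score": 12},
    {"id": "n2", "author": "dave",  "score": 99},  # outside my network
    {"id": "n3", "author": "carol", "score": 7},
]

# Keep only notes from my network, then rank by engagement.
trending = sorted(
    (n for n in notes if n["author"] in network),
    key=lambda n: n["score"],
    reverse=True,
)
print([n["id"] for n in trending])  # ['n1', 'n3']
```

Note that the globally most popular note (`n2`) drops out entirely, which is exactly the point of a network-scoped trending list.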

I’ll also be able to showcase what’s trending amongst paid nostr.wine users vs what’s trending globally.

I got a version up and running on a server today so the data is officially being populated as we speak.

Next step is to expose an API so developers can play with hourly event metrics!
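No API shape has been published yet, so purely as a strawman, a response from a hypothetical hourly-metrics endpoint (e.g. `GET /metrics/<note_id>?since=...`) might look like this; nothing about the path or field names is confirmed:

```python
import json

# Strawman response body for a hypothetical hourly-metrics endpoint.
# Every field name here is an assumption for illustration.
body = json.dumps({
    "note_id": "abc123",
    "hours": [
        {"hour": 1700000 * 3600, "replies": 2, "reposts": 1,
         "reactions": 5, "zap_count": 1, "zap_msats": 21000},
    ],
    "totals": {"replies": 2, "reposts": 1, "reactions": 5,
               "zap_count": 1, "zap_msats": 21000},
})

data = json.loads(body)
# A consumer can sanity-check that the totals match the hourly rows.
assert data["totals"]["zap_msats"] == sum(h["zap_msats"] for h in data["hours"])
print(data["totals"]["replies"])
```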