It's been interesting reading up on this gossip model concept. After reading NIP-65 I jumped over to the blog of nostr:npub1acg6thl5psv62405rljzkj8spesceyfz2c32udakc2ak0dmvfeyse9p35c and only then did it click. The concept is good. As I see it, the key is the personal relay allowing outside parties to receive the kind-10002 event containing the user's relay list. Users end up with their own database, which is faster and more efficient because they no longer depend on the big relays. If this gossip model gets wide adoption, personal relays will multiply and the data will become more distributed. To access notes from people we already follow (or don't follow yet), we no longer need to go through, or query, the big relays.
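For readers who haven't seen one, a kind-10002 event is tiny: per NIP-65, the relay list lives entirely in `r` tags and the content field stays empty. A minimal sketch (the relay URLs are made-up examples, and the signing fields are omitted):

```python
import json

# Sketch of a NIP-65 relay-list event (kind 10002). Per the NIP, an
# "r" tag carries a relay URL plus an optional "read"/"write" marker;
# no marker means the relay is used for both.
relay_list_event = {
    "kind": 10002,
    "content": "",
    "tags": [
        ["r", "wss://relay.example.com"],            # read + write
        ["r", "wss://inbox.example.net", "read"],    # others reach us here
        ["r", "wss://outbox.example.org", "write"],  # we publish here
    ],
    # "id", "pubkey", "created_at", "sig" are added when the event is signed.
}

# Even with the signing fields added, such an event stays small.
size_bytes = len(json.dumps(relay_list_event).encode())
print(size_bytes)
```

The small size is the point: this is the only event other people's clients need in order to discover where to read your notes.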

But what about scaling? Say Nostr someday reaches 100 million users. How much storage would be needed to hold all of these kind-10002 events?

Please correct me if I've gotten anything wrong. Thank you!


Discussion

I think the idea of putting kind-10002 events all over the place is temporary; I'm hoping for better solutions. Pablo runs wss://purplepag.es/, which is dedicated to hosting ONLY 2 event kinds, including kind 10002. That event is also designed to be as small as possible. So in terms of scaling out with many of those events, I don't know how we could do better.
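The policy of a dedicated discovery relay like that can be sketched as a simple kind whitelist. The exact kind set below is an assumption (the thread names kind 0 and kind 10002 as the discovery kinds):

```python
# Sketch of a relay-side write policy for a dedicated discovery relay:
# reject every event whose kind is not on a small whitelist.
# ALLOWED_KINDS is an assumption, not purplepag.es' actual config.
ALLOWED_KINDS = {0, 10002}

def accept(event: dict) -> bool:
    """Return True if the relay should store this event."""
    return event.get("kind") in ALLOWED_KINDS

print(accept({"kind": 10002}))  # -> True
print(accept({"kind": 1}))      # -> False (ordinary notes are refused)
```

Because such a relay never stores notes, media, or reactions, its storage needs grow only with the user count, not with activity.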

Just because we can all have personal relays doesn't mean we will. Having thousands of people on a relay is fine.

Ok, since you've tagged Mike, I think we should switch the discussion to English. 😅

Mike has already given us a bit of an answer below.

I just want to add a bit:

> How can this scale later? Let's say there are 100 million users on Nostr. How much data storage would we need?

If we implement NIP-65 in many clients, we can reduce the burden on relays, since not every relay has to store all of our Nostr events. Many relays just need to store kind 0 and kind 10002 for discovery. Those events are very small, between 0.5 and 2 KB. Assuming 1 KB per metadata event, we would only need around 100 GB to store the latest kind-10002 event for all 100 million people in the example. That is quite easy to store on a server.
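The back-of-envelope math above is easy to verify (the 1 KB figure is the assumption from the paragraph, not a measured value):

```python
# Storage estimate for keeping one latest kind-10002 event per user.
users = 100_000_000        # hypothetical user count from the question
event_size_kb = 1          # assumed average event size (0.5-2 KB range)

total_kb = users * event_size_kb
total_gb = total_kb / 1_000_000   # decimal units: 1 GB = 10^6 KB
print(total_gb)                   # -> 100.0
```

Even at the pessimistic 2 KB per event, the total only doubles to 200 GB, still a single commodity disk.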

Since users can use different relays, the data (notes, likes, zaps) doesn't have to be stored on every relay, which further reduces the burden on relays.

On the client side, we just need to cache the profile data (kind 0) and relay list (kind 10002) of our follows (a small set; maybe we follow 1,000 people). A smart client will fetch data only when we need it, saving bandwidth.
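That fetch-only-when-needed behavior is just lazy caching keyed by (pubkey, kind). A minimal sketch, where `fake_fetch` stands in for a real relay query (both names are hypothetical, not from any Nostr library):

```python
from typing import Callable, Optional

class ProfileCache:
    """Cache kind 0 (profile) and kind 10002 (relay list) events per
    pubkey, querying the relay only on the first request for each."""

    def __init__(self, fetch_event: Callable[[str, int], Optional[dict]]):
        self._fetch = fetch_event
        self._cache = {}  # (pubkey, kind) -> event or None

    def get(self, pubkey: str, kind: int) -> Optional[dict]:
        key = (pubkey, kind)
        if key not in self._cache:          # miss: go to the relay once
            self._cache[key] = self._fetch(pubkey, kind)
        return self._cache[key]             # hit: no network traffic

# Usage with a fake fetcher: the second lookup hits the cache,
# so the "relay" is only queried once.
calls = []
def fake_fetch(pubkey, kind):
    calls.append((pubkey, kind))
    return {"kind": kind, "pubkey": pubkey}

cache = ProfileCache(fake_fetch)
cache.get("pubkey_a", 10002)
cache.get("pubkey_a", 10002)
print(len(calls))  # -> 1
```

A real client would also expire entries when a newer replaceable event arrives, but the bandwidth saving comes from exactly this pattern.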