It's been interesting reading about this gossip model concept. After reading NIP-65 I jumped over to the blog of nostr:npub1acg6thl5psv62405rljzkj8spesceyfz2c32udakc2ak0dmvfeyse9p35c, and only then did it really click. The concept is good. As I see it, the key is the personal relay that allows outside parties to fetch the kind-10002 event containing the user's relay list. Users end up with their own database, which is faster and more efficient since they no longer depend on the big relays. If this gossip model gets widely adopted, personal relays will surely multiply and the data will become more distributed. To access notes from people we follow (or don't yet follow), we no longer need to go through or query the big relays.
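To make the kind-10002 mechanism concrete, here is a minimal sketch of a NIP-65 relay-list event and how a client might read it. The relay URLs are hypothetical, and this is only a simplified illustration of the tag layout NIP-65 describes ("r" tags, with an optional "read"/"write" marker):

```python
# A minimal sketch of a NIP-65 relay-list event (kind 10002).
# Relay URLs are hypothetical examples, not real relays.
relay_list_event = {
    "kind": 10002,
    "content": "",  # NIP-65: content is not used for this kind
    "tags": [
        ["r", "wss://alice.example.com"],              # read + write (no marker)
        ["r", "wss://big-relay.example.com", "read"],  # read-only
        ["r", "wss://outbox.example.com", "write"],    # write-only
    ],
}

# A client following this user reads the kind-10002 list to learn
# which relays to query for her notes (her "write" relays).
write_relays = [
    t[1]
    for t in relay_list_event["tags"]
    if t[0] == "r" and (len(t) == 2 or t[2] == "write")
]
```

Once a client knows the author's write relays, it can subscribe there directly instead of asking a big relay.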

But what about scaling? Say Nostr reaches 100 million users at some point: how much storage would be needed to hold all of these kind-10002 events?

Please correct me if I got anything wrong. Thank you!

Ok, since you've tagged Mike, I think we should switch to English. 😅

Mike has already given us a brief answer below.

I just want to add a bit:

> How can we scale this later? Let's say there are 100 million Nostr users. How much data storage would we need?

If many clients implement NIP-65, we can reduce the burden on relays, since not every relay has to store all of our Nostr events. Many relays only need to store kind 0 and kind 10002 events for discovery, and those events are very small, roughly 0.5-2 KB each. Assuming 1 KB per metadata event, storing the latest kind-10002 event for 100 million people takes only about 100 GB. That is quite easy to store on a server.
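The estimate above can be checked with simple arithmetic, assuming roughly 1 KB per kind-10002 event and decimal units (1 GB = 10^6 KB):

```python
# Back-of-the-envelope storage estimate for relay-list events,
# assuming ~1 KB per latest kind-10002 event per user.
users = 100_000_000
event_size_kb = 1
total_gb = users * event_size_kb / 1_000_000  # KB -> GB (decimal)
print(total_gb)  # 100.0
```

Even doubling the per-event size to 2 KB only brings this to ~200 GB, still modest for a server.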

Since users can use different relays, the data (notes, likes, zaps) doesn't have to be stored on every relay, which reduces the burden on each relay.

On the client side, we only need to cache the profile data (kind 0) and relay lists (kind 10002) of our follows, which is a small set (maybe we follow 1,000 people). A smart client will fetch data only when it's needed, saving bandwidth.
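The lazy-caching idea could be sketched like this. `fetch_from_relays` is a hypothetical stand-in for a real client's relay query (stubbed here for illustration); the point is that the network is only hit on a cache miss:

```python
# Sketch of lazy caching of kind-0 / kind-10002 events for our follows.
# `fetch_from_relays` is a hypothetical placeholder; a real client
# would open websocket subscriptions to the user's relays instead.

cache: dict[tuple[str, int], dict] = {}  # (pubkey, kind) -> latest event

def fetch_from_relays(pubkey: str, kind: int) -> dict:
    # Placeholder for a real relay query; returns a dummy event.
    return {"pubkey": pubkey, "kind": kind, "tags": [], "content": ""}

def get_cached_event(pubkey: str, kind: int) -> dict:
    key = (pubkey, kind)
    if key not in cache:  # hit the network only on a cache miss
        cache[key] = fetch_from_relays(pubkey, kind)
    return cache[key]
```

With ~1,000 follows and two small events each, the whole cache stays in the low single-digit megabytes.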
