File sizes have actually been getting steadily bigger, because compression algorithms work better on large documents: the data set as a whole takes less storage and is faster to query. People storing lots of data on one hosted instance, with one entry point, save money, RAM, computation, and human effort by keeping the files large.
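
To make that concrete, here's a rough sketch in Python (the corpus is made up for illustration): compressing one big concatenated document lets the compressor reuse repeated patterns across the whole thing, while compressing each small file separately throws that shared context away every time.

```python
import zlib

# Hypothetical corpus: the same text stored as one big document vs. 100 small files.
sections = [(f"Section {i}. " + "Lorem ipsum dolor sit amet. " * 40).encode()
            for i in range(100)]

# Compress each small file on its own: the compression context resets every time.
separate = sum(len(zlib.compress(s)) for s in sections)

# Compress everything as one large document: repeated patterns across
# sections get deduplicated within the shared compression window.
combined = len(zlib.compress(b"".join(sections)))

print(f"100 small files: {separate:,} bytes compressed")
print(f"1 large file:    {combined:,} bytes compressed")
```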

But a distributed, eventually-consistent storage system, like Nostr, has the opposite problem: it benefits from having lots of little files that are spread around slowly and thinly, then gathered and assembled quickly when used.
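
A minimal sketch of that gather-and-assemble step, assuming a hypothetical fetch_section() standing in for a real network call: because the small fetches are independent, they can run in parallel, so total latency approaches the slowest single piece rather than the sum of all of them.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in: in a real client this would query whichever
# relay or peer happens to hold the piece.
def fetch_section(section_id):
    return f"<content of section {section_id}>"

def assemble(section_ids):
    # The small fetches are independent, so they run in parallel; total
    # latency approaches the slowest single piece, not the sum of all pieces.
    with ThreadPoolExecutor(max_workers=16) as pool:
        return "".join(pool.map(fetch_section, section_ids))

document = assemble(range(100))
```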


Discussion

Thought this might be interesting for the #meshtastic and #lora gurus on here.

This works because most people won't need to access most of the data most of the time, and the data is redundant: you can have n entry points to the same data because it has clones all over the place.

This creates an n:n system with no bottleneck.
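
Sketched out, assuming a hypothetical replica map and fetch_from() call: a reader can race the clones of a piece and keep whichever reply lands first, so no single host ever sits on the critical path.

```python
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait

# Hypothetical replica map: each piece is cloned on several independent hosts.
REPLICAS = {
    "piece-1": ["hostA", "hostB", "hostC"],
    "piece-2": ["hostB", "hostD", "hostE"],
}

def fetch_from(host, piece_id):
    # Stand-in for a network call whose latency varies per host.
    return f"<{piece_id} from {host}>"

def fetch_any(piece_id):
    # Race the clones and keep the first reply: any of the n entry points
    # can answer the request, so no single host becomes a bottleneck.
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(fetch_from, h, piece_id) for h in REPLICAS[piece_id]]
        done, _ = wait(futures, return_when=FIRST_COMPLETED)
        return next(iter(done)).result()

print(fetch_any("piece-1"))
```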

This is why I harp on format. Small files suffer more from format and metadata overhead.
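
Back-of-the-envelope, with an assumed (not measured) fixed envelope of ~300 bytes per file for signatures, ids, timestamps, and tags: the overhead fraction collapses as the payload grows, which is exactly why it bites small files hardest.

```python
# Assumed (not measured): every stored file carries ~300 bytes of fixed
# envelope (signature, ids, timestamps, tags) no matter how small it is.
OVERHEAD = 300

for payload in (500, 5_000, 50_000, 5_000_000):
    total = payload + OVERHEAD
    print(f"{payload:>9,}-byte payload: {OVERHEAD / total:6.1%} envelope overhead")
```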

That's why it's best to keep publication metadata only in the index and have the content only in the sections, so that there is less redundancy.
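
As a hypothetical shape (not a spec), that separation looks something like this: publication metadata lives only in the index, each section carries nothing but its id and content, and a reader resolves the index once before fetching sections.

```python
# Hypothetical event shapes, not a spec: publication metadata lives only
# in the index; each section carries nothing but its id and content.
index = {
    "id": "index-abc",
    "title": "Example Publication",
    "author": "npub1...",          # placeholder author key
    "published": "2024-01-01",
    "sections": ["sec-001", "sec-002", "sec-003"],  # ordered pointers
}

sections = {
    "sec-001": {"id": "sec-001", "content": "First section text..."},
    "sec-002": {"id": "sec-002", "content": "Second section text..."},
    "sec-003": {"id": "sec-003", "content": "Third section text..."},
}

# A reader resolves the index once, then fetches sections by id; the
# metadata is never duplicated into each section, so small files stay small.
document = "".join(sections[s]["content"] for s in index["sections"])
```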