Just thinking out loud: what if normalized data, like time-series entries, were notes? It's inefficient but extremely powerful.

Each note would have to reference the table name and any other metadata. But then each data point becomes a signed entry, available to anyone for filtering and sorting.
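A minimal sketch of what one such data point might look like as a Nostr-style event. The kind number and the `table`/`ts` tag names here are hypothetical illustrations, not a defined NIP; only the overall event shape (pubkey, created_at, kind, tags, content) follows NIP-01.

```python
import json
import time

def make_datapoint_event(pubkey: str, table: str, timestamp: int, value: float) -> dict:
    """Build an unsigned Nostr-style event carrying one time-series data point.

    The kind number (30078) and the "table"/"ts" tag names are hypothetical,
    chosen for illustration; a real scheme would need an agreed convention.
    """
    return {
        "pubkey": pubkey,
        "created_at": int(time.time()),
        "kind": 30078,                 # hypothetical kind for tabular data
        "tags": [
            ["table", table],          # which logical table this row belongs to
            ["ts", str(timestamp)],    # the time-series timestamp of the point
        ],
        "content": json.dumps({"value": value}),
        # "id" and "sig" would be filled in by the usual event-signing step
    }

event = make_datapoint_event("npub_example", "btc_usd_daily_close", 1700000000, 37412.5)
print(event["tags"][0])
```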

Anyone could provide their own data to compete for that data point.

Consumers of that data set could then choose their sources, average their sources, etc...
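Choosing and averaging sources could be as simple as filtering events by pubkey and aggregating per timestamp. A sketch, assuming a hypothetical event shape where each event carries a `["ts", "<unix time>"]` tag and a JSON content body like `{"value": 123.4}`:

```python
import json
from collections import defaultdict

def average_by_timestamp(events: list[dict], trusted_pubkeys: set[str]) -> dict[int, float]:
    """Average the values reported by trusted sources for each timestamp.

    Assumes a hypothetical event shape: a ["ts", "<unix time>"] tag and a
    JSON content body like {"value": 123.4}. Nothing here is a defined NIP.
    """
    buckets: dict[int, list[float]] = defaultdict(list)
    for ev in events:
        if ev["pubkey"] not in trusted_pubkeys:
            continue  # the consumer's choice of sources happens here
        ts = next(int(t[1]) for t in ev["tags"] if t[0] == "ts")
        buckets[ts].append(json.loads(ev["content"])["value"])
    return {ts: sum(vals) / len(vals) for ts, vals in buckets.items()}

events = [
    {"pubkey": "alice",   "tags": [["ts", "100"]], "content": json.dumps({"value": 10.0})},
    {"pubkey": "bob",     "tags": [["ts", "100"]], "content": json.dumps({"value": 12.0})},
    {"pubkey": "mallory", "tags": [["ts", "100"]], "content": json.dumps({"value": 999.0})},
]
print(average_by_timestamp(events, {"alice", "bob"}))  # {100: 11.0}
```

Swapping the mean for a median would make the consumer robust to a single bad source without needing to curate the trusted set as carefully.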

This would change the way data science, finance, etc. are done.

Think crowd-sourced Excel.


Discussion

So, time series/CSV data within .content?

You’re right, it sounds terribly inefficient.

Good idea, but it could have problems pulling from different relays that may not all have the complete set of time-series events.

Paid relays with more complete sets πŸ˜‰

I think the efficiency cost would negate any economic advantage.

It's not thaaat inefficient.

Data published as a massive set of events will probably never be an efficient thing to filter and sort through directly, but it could very well serve as an excellent bussing mechanism between unrelated systems, which can then ingest and store that data in a way that is efficient to query. Bonus: the events are signed and verifiable. This potential is what excites me most about Nostr.
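The bussing idea might look like this: pull events off relays, verify them, and load them into a local store that is actually good at querying. A sketch using SQLite, with signature verification stubbed out (a real consumer would do a secp256k1 Schnorr check over the NIP-01 serialization) and the same hypothetical `table`/`ts` tag layout as before:

```python
import json
import sqlite3

def verify_sig(event: dict) -> bool:
    # Placeholder: a real consumer would verify the Schnorr signature over
    # the serialized event (NIP-01) with a secp256k1 library before trusting it.
    return True

def ingest(conn: sqlite3.Connection, events: list[dict]) -> None:
    """Copy verified events into a table that is efficient to filter/sort."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS points (pubkey TEXT, tbl TEXT, ts INTEGER, value REAL)"
    )
    for ev in events:
        if not verify_sig(ev):
            continue
        tags = {t[0]: t[1] for t in ev["tags"]}  # assumes one tag per name
        conn.execute(
            "INSERT INTO points VALUES (?, ?, ?, ?)",
            (ev["pubkey"], tags["table"], int(tags["ts"]),
             json.loads(ev["content"])["value"]),
        )

conn = sqlite3.connect(":memory:")
ingest(conn, [{
    "pubkey": "alice",
    "tags": [["table", "temps"], ["ts", "100"]],
    "content": json.dumps({"value": 21.5}),
}])
row = conn.execute("SELECT value FROM points WHERE tbl = 'temps' AND ts = 100").fetchone()
print(row)  # (21.5,)
```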

Decentralized Excel?

Date/time would be easy to agree on, but you'd have to distribute the dimension data as well, and data providers would need some kind of discovery mechanism for finding the "master" dimension records. Relays could compete to provide these, but you'd want guarantees that your dimension records will last, at which point IPFS makes more sense.
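One way to resolve a "master" dimension record without a central authority: publish dimensions as parameterized replaceable events and have consumers keep only the newest event per identifier. A sketch, assuming a NIP-33-style `["d", ...]` tag; the rest of the event shape here is hypothetical:

```python
def resolve_dimensions(events: list[dict]) -> dict[str, dict]:
    """Keep only the newest event per ["d", <identifier>] tag, the way relays
    resolve parameterized replaceable events (NIP-33)."""
    latest: dict[str, dict] = {}
    for ev in events:
        d = next(t[1] for t in ev["tags"] if t[0] == "d")
        if d not in latest or ev["created_at"] > latest[d]["created_at"]:
            latest[d] = ev
    return latest

events = [
    {"created_at": 100, "tags": [["d", "currency"]], "content": "USD,EUR"},
    {"created_at": 200, "tags": [["d", "currency"]], "content": "USD,EUR,GBP"},
]
print(resolve_dimensions(events)["currency"]["content"])  # USD,EUR,GBP
```

This gives each provider a stable identifier to publish against, though it does nothing about the durability concern; that is where pinning the records on IPFS would come in.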