It’s definitely doable but can take quite some time and resources, so it might only be a fit for archiving old data. E.g. for 422k likes it would take several hours and up to 1 TB of RAM, which would cost $6-8 per hour. The cost can be justified if the events are quite sensitive, but definitely not for ordinary posts :) We do expect 10-100x performance improvements in the near future though, so it might become viable at some point.
Discussion
Check this sir nostr:nprofile1qqs0dqlgwq6l0t20gnstnr8mm9fhu9j9t2fv6wxwl3xtx8dh24l4auspzamhxue69uhhyetvv9ujuurjd9kkzmpwdejhgtcpzemhxue69uhkzat5dqhxummnw3erztnrdakj7us2nqv . Michael is from my team. We are cooking some cool shit that might unlock this in the future.
Looking forward to it guys.
Maybe it doesn't help, but I'll throw it out there. HyperLogLogs give an estimate of the cardinality of a set, and are one of the most efficient ways to reduce memory/storage requirements when it comes to counting unique elements.
If you don't know them, they work by keeping track of the maximum number of consecutive 0s (or 1s, it doesn't matter) seen in the hashes of the elements.
Example:
"test1" --> hash --> 01001111 (4 consecutive "1"s)
"test2" --> hash --> 10101101 (1 consecutive "1")
Maybe there is a ZK version of an HLL, where I can prove that the maximum number of consecutive "1"s was X, which would allow me to estimate the cardinality of the set.
We need to talk about nostr privacy and scalability using ZK-STARK tech.