haha, nice
yeah, i want to make full graphql and FTS support
what you speak of sounds like it's verging towards smart contracts
i have already eliminated replacement in favour of returning the latest of a replaceable type, and i intend to give administrators access to all records over the wire
i see no reason to make another KV backend... rocksdb was the most advanced before 2016, badger is the next generation after rocks, and the hype around LMDB is uneducated nonsense. it's like the hype dan larimer brewed up over his mmapped "graphene database", which still isn't as advanced as dgraph
anyway, long story short, the relay is an interface
the data is a separate layer, it is layer 2
that's how i'm playing this
https://foundationdb.org is what I want to use actually.
you can add more nodes and they add to your available capacity; you don’t need to replicate your DB to have multiple frontends, and you can outgrow what fits in one server
i don't want to be harsh but
fuck apple
it was independent & paid until Apple acquired it and open sourced it, it also has another big user that’s unrelated (Snowflake)
but does it have separate key and value logs?
distributed consensus protocols are pretty solid with pBFT. you know those shitcoins? yeah, <100 nodes with <2s convergence
that protocol was invented in 1999
if i could spend my time on one thing to the exclusion of any other, it would be to make http://github.com/technicolor-research/pnyxdb with a badger kv store
it uses a web of trust, and with the split logs it can build extremely fast indexes that you simply can't do with rocksdb
idk why anyone even cares about LMDB, it's 15-year-old tech
“but does it have separate key and value logs?” what do you mean by this?
see, this has existed since 2016 but you didn't know about it... here, let me show you something:
https://www.usenix.org/system/files/conference/fast16/fast16-papers-lu.pdf
february 2016
split key/value log based key value stores
you're welcome
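to make the WiscKey idea concrete: keys plus small pointers live in the sorted key index, while full values go into a separate append-only value log. here's a rough sketch in Go; the store, types, and names are hypothetical for illustration, not Badger's actual API:

```go
package main

import "fmt"

// valuePtr locates a record inside the append-only value log.
type valuePtr struct {
	offset uint32
	length uint32
}

// wisckeyStore is a toy sketch of the split-log layout: the key index
// (standing in for the LSM tree of keys) holds only tiny pointers,
// while values are appended to a separate log.
type wisckeyStore struct {
	keys map[string]valuePtr // key index: key -> pointer into vlog
	vlog []byte              // append-only value log
}

func newStore() *wisckeyStore {
	return &wisckeyStore{keys: make(map[string]valuePtr)}
}

// Put appends the value to the log and records only an 8-byte pointer
// next to the key, so key-index entries stay small regardless of
// value size.
func (s *wisckeyStore) Put(key string, value []byte) {
	ptr := valuePtr{offset: uint32(len(s.vlog)), length: uint32(len(value))}
	s.vlog = append(s.vlog, value...)
	s.keys[key] = ptr
}

// Get resolves the pointer and reads the value back out of the log.
func (s *wisckeyStore) Get(key string) ([]byte, bool) {
	ptr, ok := s.keys[key]
	if !ok {
		return nil, false
	}
	return s.vlog[ptr.offset : ptr.offset+ptr.length], true
}

func main() {
	s := newStore()
	s.Put("note1", []byte(`{"kind":1,"content":"hello nostr"}`))
	v, _ := s.Get("note1")
	fmt.Println(string(v))
}
```

the point of the split is that LSM compaction and index scans only ever shuffle the small key entries, not the big values.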
also, wow
i never expected to be 8 years more up to date on data storage algorithms
I honestly don't see the benefit of that anymore with the designs of newer KV DBs.
the reason it has an advantage is that you can create massive append-only indexes
this is why they used it to build dgraph
the data logs don't need to be updated when indexes change, which means you can change indexes much more freely, and that's necessary if you want to run a graph database on top
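the claim that index changes never touch the data log can be sketched like this: index entries are just extra keys pointing at the same immutable value-log offsets, so rebuilding an index reads values but only writes new pointer entries. this is an illustrative toy (a hypothetical word index over values), not how badger or dgraph literally implement it:

```go
package main

import (
	"fmt"
	"strings"
)

type entry struct {
	offset, length int
}

type store struct {
	vlog  []byte              // append-only value log, never rewritten
	keys  map[string]entry    // primary key -> pointer into vlog
	index map[string][]string // secondary index: word -> keys (toy example)
}

func newStore() *store {
	return &store{keys: map[string]entry{}, index: map[string][]string{}}
}

func (s *store) Put(key, value string) {
	s.keys[key] = entry{offset: len(s.vlog), length: len(value)}
	s.vlog = append(s.vlog, value...)
}

// Reindex rebuilds the whole secondary index from scratch. the value
// log is only read, never modified, so the write cost is proportional
// to the index size alone — cheap enough to change indexes often.
func (s *store) Reindex() {
	s.index = map[string][]string{}
	for key, e := range s.keys {
		val := string(s.vlog[e.offset : e.offset+e.length])
		for _, w := range strings.Fields(val) {
			s.index[w] = append(s.index[w], key)
		}
	}
}

func main() {
	s := newStore()
	s.Put("note1", "hello nostr world")
	s.Put("note2", "hello relay")
	before := len(s.vlog)
	s.Reindex()
	fmt.Println(len(s.vlog) == before)   // value log untouched by indexing
	fmt.Println(len(s.index["hello"])) // both notes found under "hello"
}
```

in a merged-log design like rocksdb, dropping and rebuilding an index like this would rewrite records through compaction; here the values never move.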
pls name a newer design than wisckey
i just don't know what you are talking about. i'm talking about a local key-value store on a single system, running against a flat-latency SSD storage device, like rocksdb
zippy just adds a centralised sharding algorithm to enable it to be more highly available
they don't fix the indexing problem that badger does
I'm talking about a KV store that is distributed. FoundationDB uses the SQLite btree code, or their own btree-based storage engine, on each node for local storage. here's a talk about it while it was still in development: https://www.youtube.com/watch?v=5iqKu1pVDvE
that's my point: if you're talking distributed, then it's not about the storage tech on the device, and nothing other than badger makes indexing as cheap and fast
also, don't pollute my feed with apple tech bullshit
apple haven't done anything new since Lisa
also, i will be adding full FTS to the eventstore i have been working on; it's my next priority, i'd be building it already...
i don't know what more really needs to be done to improve the functionality of data stores for #nostr - i think further advances would have to involve different types of data than kind 0 and 1s