yes, i removed the replaceable delete function. you can literally just write a filter that asks for an unbounded number of a replaceable event kind associated with a pubkey, and the newest version comes first, followed by all the rest
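something like this is all a client needs to ask for every stored version; a rough sketch using the go-nostr library (relay URL and pubkey are placeholders, and the QuerySync signature is assumed from the current library):

```go
package main

import (
	"context"
	"fmt"

	"github.com/nbd-wtf/go-nostr"
)

func main() {
	ctx := context.Background()

	// connect to a relay (placeholder URL)
	relay, err := nostr.RelayConnect(ctx, "wss://relay.example.com")
	if err != nil {
		panic(err)
	}

	// ask for every stored version of a replaceable kind (10002, the relay
	// list, as an example) for one author, with no explicit limit, instead
	// of relying on the relay to have deleted the older ones
	events, err := relay.QuerySync(ctx, nostr.Filter{
		Kinds:   []int{10002},
		Authors: []string{"<hex pubkey>"},
	})
	if err != nil {
		panic(err)
	}

	for _, ev := range events {
		fmt.Println(ev.CreatedAt, ev.ID)
	}
}
```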

you'll find it's really easy to modify any relay to do this, just change the special cases for replaceable events

as someone has pointed out to me, clients already sort and pick the newest if multiple versions come back from these queries, so it's just a matter of removing that racy delete
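picking the newest on the client side is just a sort on created_at; a minimal sketch (the lowest-id tie-break is what NIP-01 specifies for replaceables):

```go
import (
	"sort"

	"github.com/nbd-wtf/go-nostr"
)

// newest returns the most recent version from whatever the relays returned
// for a replaceable (pubkey, kind) query; ties on created_at go to the
// lowest id, per NIP-01.
func newest(events []*nostr.Event) *nostr.Event {
	if len(events) == 0 {
		return nil
	}
	sort.Slice(events, func(i, j int) bool {
		if events[i].CreatedAt != events[j].CreatedAt {
			return events[i].CreatedAt > events[j].CreatedAt
		}
		return events[i].ID < events[j].ID
	})
	return events[0]
}
```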

also, deleting events other than when the author (or admin) sends a delete message is a silly idea; it's much better to have garbage collection that just scans from time to time and removes the stale versions long after they are out of date
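a toy version of that sweep, just to show the shape of it (the store interface and names here are illustrative, not replicatr's actual API):

```go
package main

import "time"

// illustrative types only, not the real event store interface
type Version struct {
	ID        string
	CreatedAt time.Time
}

type Group struct {
	PubKey string
	Kind   int
}

type Store interface {
	ReplaceableGroups() ([]Group, error)                 // all (pubkey, kind) pairs seen
	Versions(pubkey string, kind int) ([]Version, error) // newest first
	Delete(id string) error
}

// gcReplaceable keeps the newest `keep` versions of each replaceable
// (pubkey, kind) group and drops anything beyond that which is older
// than maxAge.
func gcReplaceable(s Store, keep int, maxAge time.Duration) error {
	cutoff := time.Now().Add(-maxAge)
	groups, err := s.ReplaceableGroups()
	if err != nil {
		return err
	}
	for _, g := range groups {
		versions, err := s.Versions(g.PubKey, g.Kind)
		if err != nil {
			return err
		}
		for i, v := range versions {
			if i >= keep && v.CreatedAt.Before(cutoff) {
				if err := s.Delete(v.ID); err != nil {
					return err
				}
			}
		}
	}
	return nil
}
```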

if clients were more savvy with this, they could easily implement rollback when you make a wrong update
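rollback is then just republishing an old version's content with a fresh created_at and a new signature; a sketch, assuming go-nostr's Sign and Publish (signatures vary a bit between library versions):

```go
import (
	"context"

	"github.com/nbd-wtf/go-nostr"
)

// rollback republishes an older version of a replaceable event as the
// newest one: same kind, tags and content, fresh created_at, new
// signature. sk is the author's hex private key. (sketch only)
func rollback(ctx context.Context, relay *nostr.Relay, old *nostr.Event, sk string) error {
	ev := nostr.Event{
		Kind:      old.Kind,
		Tags:      old.Tags,
		Content:   old.Content,
		CreatedAt: nostr.Now(),
	}
	if err := ev.Sign(sk); err != nil {
		return err
	}
	return relay.Publish(ctx, ev)
}
```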


Discussion

yes, this rollback / revision control is what I've wanted to implement in wikifreedia for weeks now but haven't had the time to modify my relay

are you storing full events locally or are you storing deltas and computing the full payload when serving them?

do you have this running on replicatr? any URL I can test on?

storing full events

you have to enable the GC size limit for it to have a high and low water mark, and you can configure those if the defaults don't fit your case. going further, you can create a second-level data store, which would presumably be a shared data store accessed over the network; the headroom above the high water mark then stores the indexes of events that have fallen out of the local cache, so fast filter searches still work on them
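in config terms it ends up looking something like this (field names are illustrative only, not the actual replicatr/eventstore options):

```go
// illustrative shape of the GC / L2 configuration, not the real struct
type GCConfig struct {
	SizeLimit     int64 // total bytes the local badger store is allowed to use
	HighWaterMark int64 // when usage exceeds this, a GC pass starts pruning
	LowWaterMark  int64 // pruning stops once usage falls back below this
}

type L2Config struct {
	Enabled bool   // keep local indexes for events evicted to the second level
	Address string // e.g. a shared event store reached over the network
}
```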

https://mleku.net/replicatr is the core, which is a fork of khatru, and https://mleku.net/eventstore is the eventstore with GC enabled for the "badger" backend. there is also an "l2" event store that lets you plug in two event stores, one usually badger and the other anything else, and a "badgerbadger" which i wrote using two levels of badger event store, one with GC on and L2 enabled, that tests the GC once your event and index storage size exceeds the size limit

btw, fiatjaf is wrong about badger, he just doesn't know how to use it or write good, bugproof binary encoding libraries... the batch processing functions are incredibly fast: 15gb of database can be measured in ~8 seconds, and if a GC pass is needed that might take another 5-12 seconds depending on how far over the limit it got

also, yes, that will scale: on a 20-core threadripper with 40MB of cache and 128GB of memory it would probably zip through that job in less than half that time

how much have replicatr and your eventstore deviated from khatru and fj's eventstore? is it a drop-in(ish) replacement? almost all my custom relays are based on khatru.

do you have NIP-50 support on your eventstore? I needed to add that for wikifreedia's search

the eventstore is almost drop-in except for the definition of the (basically identical) eventstore interface

most code written to work with khatru's arrays of closures can also be quickly adapted
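for reference, this is the khatru pattern in question, roughly as its readme shows it (path and port are placeholders):

```go
package main

import (
	"net/http"

	"github.com/fiatjaf/eventstore/badger"
	"github.com/fiatjaf/khatru"
)

func main() {
	relay := khatru.NewRelay()

	db := badger.BadgerBackend{Path: "/tmp/khatru-badger"}
	if err := db.Init(); err != nil {
		panic(err)
	}

	// khatru's arrays of closures: any store implementing the eventstore
	// interface plugs in the same way, so swapping backends is mostly a
	// matter of changing these three lines
	relay.StoreEvent = append(relay.StoreEvent, db.SaveEvent)
	relay.QueryEvents = append(relay.QueryEvents, db.QueryEvents)
	relay.DeleteEvent = append(relay.DeleteEvent, db.DeleteEvent)

	http.ListenAndServe(":3334", relay)
}
```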

no, i haven't got to doing that yet - full text search, right? it requires writing another index, though it may be easier to get that happening sooner if you use a DB engine that already has it as a turn-key option

the Internet Computer database engine has some kind of complex indexing scheme on it and it would likely be easy to make it do this, but the badger event store is bare bones: all it is built to do is fast filter searches and GC... it would not be hard to add more indexes, but i'd estimate it would be a couple of months' work
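the extra index is essentially an inverted index from words to event ids; a toy in-memory version just to show the shape (a real one would live in badger next to the filter indexes):

```go
import "strings"

// SearchIndex is a toy inverted index: word -> set of event ids.
type SearchIndex struct {
	byWord map[string]map[string]struct{}
}

func NewSearchIndex() *SearchIndex {
	return &SearchIndex{byWord: make(map[string]map[string]struct{})}
}

// Add tokenizes an event's content and records its id under each word.
func (s *SearchIndex) Add(id, content string) {
	for _, w := range strings.Fields(strings.ToLower(content)) {
		if s.byWord[w] == nil {
			s.byWord[w] = make(map[string]struct{})
		}
		s.byWord[w][id] = struct{}{}
	}
}

// Search returns the ids of events containing every word in the query.
func (s *SearchIndex) Search(query string) []string {
	words := strings.Fields(strings.ToLower(query))
	if len(words) == 0 {
		return nil
	}
	var out []string
	for id := range s.byWord[words[0]] {
		ok := true
		for _, w := range words[1:] {
			if _, hit := s.byWord[w][id]; !hit {
				ok = false
				break
			}
		}
		if ok {
			out = append(out, id)
		}
	}
	return out
}
```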

well, i think i could get an MVP in 1 month anyhow