yeah, nah, can't put a mutex on that either, not sure why, but i think simply making the DB transactions concurrent and atomic means it will almost never happen that one request stomps on data another request is accessing

it was worse before, heavy load would definitely have caused tx commit failures, but i think now there's nearly zero chance of that happening, so performance and reliability are both about as good as they get
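
to make that concrete, here's a minimal sketch, not the actual relay code, assuming a Badger-style store with optimistic concurrency control (the counter key and logic are made up for illustration): each request does its read-modify-write in its own atomic transaction, and if two overlap, the loser gets a conflict error at commit and retries instead of stomping the other's data

```go
package main

import (
	"encoding/binary"
	"errors"
	"log"

	badger "github.com/dgraph-io/badger/v4"
)

// bumpCounter does a read-modify-write inside its own atomic
// transaction. Badger's optimistic concurrency control means that if
// another transaction commits a write to a key this one has read,
// Commit fails with ErrConflict instead of silently losing data, and
// we simply retry the whole transaction.
func bumpCounter(db *badger.DB, key []byte) error {
	for {
		err := db.Update(func(txn *badger.Txn) error {
			var n uint64
			item, err := txn.Get(key)
			switch {
			case err == nil:
				if err := item.Value(func(v []byte) error {
					n = binary.BigEndian.Uint64(v)
					return nil
				}); err != nil {
					return err
				}
			case errors.Is(err, badger.ErrKeyNotFound):
				// first write; start the counter at zero
			default:
				return err
			}
			buf := make([]byte, 8)
			binary.BigEndian.PutUint64(buf, n+1)
			return txn.Set(key, buf)
		})
		if !errors.Is(err, badger.ErrConflict) {
			return err // nil on success, or a genuine failure
		}
		// lost the race with a concurrent writer; retry
	}
}

func main() {
	db, err := badger.Open(badger.DefaultOptions("/tmp/eventstore"))
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if err := bumpCounter(db, []byte("seen-count")); err != nil {
		log.Fatal(err)
	}
}
```

the point is nobody ever blocks holding a mutex: everyone runs at full speed and only the rare actual collision pays a retry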

it sorta seems like it doesn't make sense that you can have a data store being handled by multiple processes at the same time, but this is the wonder of ubiquitous multiprocessing and extremely fast memory caches on modern CPUs: the queries come in, and multiple threads can literally be accessing the same pieces of memory at the same time (though usually via separate copies that have landed in each core's L1 cache), and voila, race condition
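
a toy Go demo of exactly that lost-update race (nothing to do with the relay code, just the bare phenomenon): two goroutines bump the same counter with no synchronization, and increments get overwritten

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var counter int
	var wg sync.WaitGroup
	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 100000; j++ {
				// counter++ is a read, an add, and a write; two
				// goroutines can interleave those steps so that one
				// increment silently overwrites the other
				counter++
			}
		}()
	}
	wg.Wait()
	fmt.Println(counter) // almost always less than 200000
}
```

run it with `go run -race` and the race detector flags the unsynchronized access immediately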

so it's really FKN fast, but it has this problem that processing can get out of sync, and the main thing you have to do to resolve it is keep transactions small: don't do many things inside one DB transaction
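
the shape of that in practice looks something like this (the `Event` type and key scheme here are hypothetical, not the store's real ones): do the slow work, validation and serialization, outside the transaction, and keep the transaction itself down to a single write

```go
package main

import (
	"encoding/json"
	"log"

	badger "github.com/dgraph-io/badger/v4"
)

// Event is a stand-in for whatever the store actually persists.
type Event struct {
	ID      string `json:"id"`
	Content string `json:"content"`
}

// storeEvent keeps the transaction as small as possible: all the
// encoding happens before it opens, so the conflict window is just
// the single Set.
func storeEvent(db *badger.DB, ev Event) error {
	// slow work outside the transaction
	val, err := json.Marshal(ev)
	if err != nil {
		return err
	}
	key := []byte("event:" + ev.ID)

	// the transaction does exactly one thing
	return db.Update(func(txn *badger.Txn) error {
		return txn.Set(key, val)
	})
}

func main() {
	db, err := badger.Open(badger.DefaultOptions("/tmp/eventstore"))
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if err := storeEvent(db, Event{ID: "1", Content: "hello"}); err != nil {
		log.Fatal(err)
	}
}
```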

to make an analogy, imagine if instead of bitcoin being entirely distributed, there was a small group of aggregator nodes that everyone sends their transactions to... but they send them to different ones at different times

when the aggregators merge everything together, it can happen that two transactions conflict, spending the same coins two different ways, the so-called "double spend attack"... yes, the problem i just fixed prevents the database equivalent: rewriting a record two different ways in too short a time period to keep them isolated - well, it doesn't strictly prevent it, but it makes it vanishingly unlikely, because each individual write is now isolated in its own entry in the database log, so the chances of two writes having a temporal overlap are now basically zero
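
a rough way to sanity-check that claim (again a hypothetical Badger store with made-up keys, not the real code): hammer the store from a bunch of goroutines, one isolated atomic transaction per write, then count what actually landed

```go
package main

import (
	"fmt"
	"log"
	"sync"

	badger "github.com/dgraph-io/badger/v4"
)

func main() {
	// in-memory store so each run starts clean
	db, err := badger.Open(badger.DefaultOptions("").WithInMemory(true))
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	const writers, perWriter = 8, 1000
	var wg sync.WaitGroup
	for w := 0; w < writers; w++ {
		wg.Add(1)
		go func(w int) {
			defer wg.Done()
			for i := 0; i < perWriter; i++ {
				// one isolated atomic transaction per write
				key := []byte(fmt.Sprintf("ev:%d:%d", w, i))
				if err := db.Update(func(txn *badger.Txn) error {
					return txn.Set(key, []byte("x"))
				}); err != nil {
					log.Fatal(err)
				}
			}
		}(w)
	}
	wg.Wait()

	// read everything back: every write should have landed
	var n int
	err = db.View(func(txn *badger.Txn) error {
		it := txn.NewIterator(badger.DefaultIteratorOptions)
		defer it.Close()
		for it.Rewind(); it.Valid(); it.Next() {
			n++
		}
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("wrote %d, found %d\n", writers*perWriter, n)
}
```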

a lot of waffle just to say "replicatr event store will handle extremely high demand when it comes"