Replying to Jameson Lopp

Yeah, we get it: you're incapable of understanding complex, nuanced engineering issues and resort to simpler conspiracy theories that make sense to you. Consider that some Bitcoin implementations don't even have the concept of a separate UTXO set data structure, as Erik Voskuil recently explained:

Libbitcoin is a set of libraries, libbitcoin-database being one of the 4 that make up the node (system, network, database, and node). (libbitcoin-server is an additional library that adds a comprehensive set of client-server interfaces, making the node useful.) libbitcoin-database is an implementation of a simple query interface (defined as a C++ class) over a backing store. The store is a templated collection of tables. The tables are mmap-based head and body files used to construct multimaps, hashmaps, arraymaps, arrays, and blobs. We mock the tables using std::vector, mock the store using the mocked tables, and test most of the query interface using the mock store. The structure is highly relational, and surrogate keys are exposed to the caller for optimized navigation.

This is an isolated and clear storage abstraction layer. All validation is performed within the chain:: classes (e.g. block, header, tx, input, script, etc.) defined in the base level libbitcoin-system. There is no validator coupling to the store. The store retains chain objects, indexes, and validation state for headers/blocks and txs as they progress through a state machine. There is no utxo table, just natural relations between objects, indexed and related. Validation correctness of course requires store fidelity, but is totally decoupled from it. We validate blocks concurrently, queueing up 50,000 blocks at a time by default (e.g. with 50k threads we would validate all at the same time).

The store could be replaced with no impact to the query interface (as we already do in testing). So it's not really accurate to imply that libbitcoin's validation is tied to mmap or even append-only. Pruning could be implemented in the existing model, for example. The existing store could be replaced with something simple and light like SQLite, or a full RDBMS. We had some interns working on the SQLite approach last summer. That would be more specialized for low-performance scenarios, where the custom database targets ultra high performance.

With sufficient RAM there is never SSD access. The store can sync up and just live in RAM, never touching a disk. Since it is append-only, it's very low impact on SSDs. As the store builds, no table body byte is ever re-written. Table heads are hashmap buckets: small and dynamic. It performs live automated/manual snapshotting, automatic fault detection/recovery on restart, automated disk-full pause/restart, and supports hot backup. Query performance is phenomenal. A warm node can execute the full 5.2-million-output Electrum query (very complex relations) in 15 seconds on my 2.1GHz workstation. But at the low end, an off-the-shelf store is sufficient. A clear interface and swappable store makes a lot of sense.

A large utxo set makes no difference in this design. There are no operational problems associated with it. This is not theoretical; at this point we are only working on the server (client-server interfaces). The utxo set size is a complete non-issue.

Citrea is bloating the utxo set. Fuck those shitcoiners and scammers, including Jameson Slopp, who is an investor in the Citrea scam.

It's their own words that they want to turn Bitcoin into an Ethereum-like shitcoin.

nostr:nevent1qqszex6td0apknuk4d780j9rw48uqkat3380l0q388ne7068yp5wpzqppemhxue69uhkummn9ekx7mp0qyg8wumn8ghj7mn0wd68ytnddakj7qghwaehxw309ahx7um5wgh8vatvwpjk6tnrdakj7qu8wts
