Those who know C++: how does bitcoind (Bitcoin Core) read/write data from/to disk? I'd love to tweak certain settings for my public node. Does bitcoind use specific page sizes? Please answer here, if possible:

https://bitcoin.stackexchange.com/questions/120567/page-size-of-read-and-write-operations

Discussion

I'm not qualified by any means to answer; I'm just trying to help and learn at the same time, so treat this comment accordingly.

Line 22 in bitcoin/src/util/readwritefile.cpp suggests that it reads the file in 128-byte portions (except possibly at the end of the file). Could that be the case?
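
For reference, the pattern that line suggests would look roughly like this (a simplified sketch of a chunked read loop, not the exact Bitcoin Core code):

#include <cstddef>
#include <cstdio>
#include <string>
#include <utility>

// Read an entire file by repeatedly fread()-ing into a small fixed buffer.
std::pair<bool, std::string> ReadWholeFile(const char* path, std::size_t maxsize)
{
    FILE* f = std::fopen(path, "rb");
    if (f == nullptr) return {false, ""};
    std::string data;
    char buffer[128]; // fixed 128-byte buffer, as on the line referenced above
    std::size_t n;
    while ((n = std::fread(buffer, 1, sizeof(buffer), f)) > 0) {
        data.append(buffer, n); // each fread asks the C library for at most 128 bytes
        if (data.size() > maxsize) break;
    }
    const bool ok = std::ferror(f) == 0;
    std::fclose(f);
    return {ok, data};
}

Note that each fread only pulls from the stdio buffer; the C library and the OS page cache still fetch data from disk in much larger units, which already softens the effect of the small buffer.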

At the same time, I would assume that any modern OS is capable of managing this transparently and efficiently without user intervention, so just out of curiosity: why do you think this might be the bottleneck?

I think this function is only used for two files (some private key file and a Tor cookie). I'm interested in the read accesses for block data, as those cause load on the system. The way it currently works, using defaults and OS magic, is fine, but I'd like to tweak things (and learn, and have fun) if possible. For example, if I configure a recordsize of 1 MiB with ZFS and bitcoind then reads (and needs) only 128 bytes out of that chunk, ZFS would read roughly 8000 times the requested amount from disk.
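
To put numbers on that example (my own back-of-the-envelope sketch, assuming a 1 MiB recordsize and a 128-byte application read):

#include <cstdio>

int main()
{
    const unsigned long record_size = 1UL << 20; // assumed ZFS recordsize: 1 MiB
    const unsigned long read_size   = 128;       // application read size from the example
    // ZFS reads and checksums whole records, so a 128-byte request
    // still pulls a full record from disk (ignoring caching):
    std::printf("read amplification: %lux\n", record_size / read_size); // prints 8192x
    return 0;
}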

I see, that makes sense. Reading in 128-byte segments would be very slow; I don't know what I was thinking. 😅

I found another clue:

/** The pre-allocation chunk size for blk?????.dat files (since 0.8) */
static const unsigned int BLOCKFILE_CHUNK_SIZE = 0x1000000; // 16 MiB
/** The pre-allocation chunk size for rev?????.dat files (since 0.8) */
static const unsigned int UNDOFILE_CHUNK_SIZE = 0x100000; // 1 MiB
/** The maximum size of a blk?????.dat file (since 0.8) */
static const unsigned int MAX_BLOCKFILE_SIZE = 0x8000000; // 128 MiB

That should be closer to the real answer, I hope. It's indeed fun to dig into the internals of a program like bitcoind.
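
For context on what those constants are used for: block data is appended to the blk?????.dat files, and space for them is pre-allocated in large chunks so the files grow in 16 MiB steps rather than a little at a time. Roughly like this (my own illustration of the idea on POSIX systems, not the actual Bitcoin Core allocation code; the helper name and the use of posix_fallocate are assumptions):

#include <fcntl.h> // posix_fallocate, off_t

static const unsigned int BLOCKFILE_CHUNK_SIZE = 0x1000000; // 16 MiB, as above

// Hypothetical helper: make sure the file backing `fd` has space up to
// `needed` bytes, growing it in whole BLOCKFILE_CHUNK_SIZE steps so that
// appending a new block rarely triggers fresh on-disk allocation.
bool EnsurePreallocated(int fd, off_t current_size, off_t needed)
{
    if (needed <= current_size) return true;
    // Round the target size up to the next chunk boundary.
    const off_t target =
        ((needed + BLOCKFILE_CHUNK_SIZE - 1) / BLOCKFILE_CHUNK_SIZE) * BLOCKFILE_CHUNK_SIZE;
    return posix_fallocate(fd, 0, target) == 0;
}

MAX_BLOCKFILE_SIZE then caps each blk file at 128 MiB before a new one is started, so writes to block data are large sequential appends; read patterns depend on which blocks are requested.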