disgusting, all blocks filled to the brim.

they are just trying to kill bitcoin as fast as possible. 24/7 congestion, bigger-than-1MB blocks. i wonder if you can even follow bitcoin on 2G internet in africa rn. this is like 260MB per fucking day. TWO HUNDRED AND SIXTY MEGABYTES. almost 2GB per week. at this point we need consistent sub-500KB blocks to undo this damage.
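the rough math behind that 260MB figure, assuming ~144 blocks a day (one every 10 minutes) and an average block around 1.8MB (the average size is my assumption, just to show where the number comes from):

```python
# rough bandwidth math, assuming ~144 blocks/day (one every 10 minutes)
# and an assumed ~1.8MB average block size during 24/7 congestion
blocks_per_day = 24 * 60 // 10   # 144 blocks on average
avg_block_mb = 1.8               # assumed average block size in MB

mb_per_day = blocks_per_day * avg_block_mb
gb_per_week = mb_per_day * 7 / 1000

print(f"{mb_per_day:.0f} MB/day")    # ~259 MB/day
print(f"{gb_per_week:.1f} GB/week")  # ~1.8 GB/week
```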

cant wait to continue building my own node that will delay broadcast of all these garbage blocks, and try to compress this mess as much as possible. garbage data that doesn't mean anything. blockchain is meant for verifying stuff, not storing stuff, this is so dumb. in which world do you store blob data on a blockchain? like if you were designing a system, this would make 0 sense. you'd store the hash of the blob, not the blob itself.

i will have a builtin mining endpoint too, so i might add an interface that lets you pick a max block size for your blocks. it can have presets like a 1MB "Satoshi" profile and a 400KB "Bitcoin Maxi" profile with a laser eyes icon.

and 4MB "No Limit" profile grayed out at the bottom.
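a quick sketch of how those presets could look in code — the names and sizes are from above, but the table layout and the `pick_preset` helper are entirely hypothetical:

```python
# hypothetical preset table for a max-block-size picker in the node UI
PRESETS = {
    "Satoshi":      {"max_block_bytes": 1_000_000, "icon": None,         "enabled": True},
    "Bitcoin Maxi": {"max_block_bytes":   400_000, "icon": "laser_eyes", "enabled": True},
    "No Limit":     {"max_block_bytes": 4_000_000, "icon": None,         "enabled": False},  # grayed out
}

def pick_preset(name):
    """Return the max block size for a preset, refusing disabled ones."""
    preset = PRESETS[name]
    if not preset["enabled"]:
        raise ValueError(f"preset {name!r} is disabled")
    return preset["max_block_bytes"]
```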

heck, even delay the broadcast of blocks bigger than your limit.

i can have an estimate of data transfer per day/week/month somewhere, so you feel the network. it should also have a live blockchain size on the main page, which is the most important thing about bitcoin decentralization.


Discussion

i think block delaying is important for bitcoin nodes to have more say on the network. it should become a thing fast.

idk, i dont value my ideas that much, and im kinda afraid of the responses i might get. so i dont go telling people to implement it.

i just make it myself and see what happens. that's why im trying to make my own node from scratch. but i wont make the repo public until i can see it actually works in practice.

but yeah, if core or knots implemented something that delays broadcast of blocks that don't fit your criteria of a "healthy block", then that would be useful, i believe.

Could be useful but it could have unforeseen consequences too

i mean the worst i can imagine is the blockchain forking.

BUT i mean we dont reject the blocks. we even add them to our blockchain.

we just delay the broadcast to others.

and everybody picks the longest chain anyway. so i dont see any issues.

so it just gives other miners more time to find a block if most of the network is delaying the broadcast.

of course each node still has its own criteria, making it more like a network effect. kinda like trash miners having a bad internet connection.
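the accept-but-delay idea could be sketched like this — the size threshold, delay length, and all names here are made up for illustration, not from any real node software:

```python
import time

# sketch: accept every valid block into the local chain, but delay
# relaying blocks that exceed a local "healthy block" size threshold.
HEALTHY_BLOCK_BYTES = 500_000   # assumed local criterion
DELAY_SECONDS = 30              # assumed relay delay for oversized blocks

def handle_block(block, chain, relay_queue, now=time.monotonic):
    chain.append(block)                       # never reject: the chain always extends
    relay_at = now()
    if block["size"] > HEALTHY_BLOCK_BYTES:   # only the broadcast is delayed
        relay_at += DELAY_SECONDS
    relay_queue.append((relay_at, block))
```

the key property is that consensus is untouched: every block still lands in the local chain, only the timing of relay to peers changes.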

i think its a power nodes need.

That could be the next step _after_ we decentralise mining

Delaying blocks: not effective with mining centralisation.

Compression: actually makes the problem worse (more CPU time)

Custom mining templates: OCEAN already gives miners full control

thank you for replying.

i was talking about logical compression. for example, there is too much repeating data on the blockchain. many things point to each other by hash (eg. txid), but you dont have to store the full hash; you can give things shorter internal ids. and while indexing by hash you dont need the full hash either, just its shortest unique prefix. since a lookup by hash supplies the full hash, you can re-validate it against what you find. so you dont have to store the full hash and repeat it inside outputs. you can also give scriptPubKey(s) internal ids, so ones that repeat on the blockchain are stored once.
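a toy sketch of that shortest-unique-prefix idea: store each hash once under its shortest unused prefix, hand out short integer internal ids, and re-validate against the full hash the caller supplies at lookup time. all names here are mine, and a real version would need much more care:

```python
class PrefixIndex:
    """Toy index: shortest-unique-prefix keys -> short internal ids,
    with re-validation against the full hash on every lookup."""

    def __init__(self):
        self.entries = {}   # prefix -> (full_hash, internal_id)
        self.next_id = 0

    def insert(self, full_hash):
        # grow the prefix until it doesn't clash with a stored entry
        for n in range(1, len(full_hash) + 1):
            prefix = full_hash[:n]
            entry = self.entries.get(prefix)
            if entry is None:
                self.entries[prefix] = (full_hash, self.next_id)
                self.next_id += 1
                return self.entries[prefix][1]
            if entry[0] == full_hash:   # already indexed
                return entry[1]
        raise ValueError("hash collides with an existing entry")

    def lookup(self, full_hash):
        # the caller gives the full hash, so we can re-validate it
        for n in range(1, len(full_hash) + 1):
            entry = self.entries.get(full_hash[:n])
            if entry is not None and entry[0] == full_hash:
                return entry[1]
        return None
```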

also there are many compression methods that are designed to be fast. not to mention you can cache data for block ranges that are accessed frequently. i can compress only the older blocks, and since compression is done on older blocks only, i dont have to worry about handling blockchain forks while giving internal ids. and there are many other compression techniques i cant think of atm.
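for the fast-compression point, a minimal sketch using zlib at its fastest level, applied only to blocks deep enough that a reorg is no longer a worry — the depth cutoff and function are my assumptions:

```python
import zlib

FORK_SAFE_DEPTH = 100   # assumed: blocks this deep are treated as final

def compress_old_blocks(blocks, tip_height):
    """Compress only blocks at least FORK_SAFE_DEPTH below the tip."""
    out = {}
    for height, raw in blocks.items():
        if tip_height - height >= FORK_SAFE_DEPTH:
            out[height] = zlib.compress(raw, level=1)   # level=1: fast
        else:
            out[height] = raw                           # recent: leave untouched
    return out
```

leaving recent blocks uncompressed is what sidesteps the fork-handling problem: anything that could still be reorged out is stored as-is.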

if you don't add pruning and always have indexing enabled, it takes less space than having indexing be optional but turned on, since in many cases you don't have to repeat the data in the blockchain and the index separately.

also, im talking about a node designed for the end user, used by them only. so i can sacrifice some cpu time for disk space. i think disk space is the biggest issue when running a full node as a normal person. occupied disk space is permanent, and that bothers people more.

idk, maybe im wrong. i will see when i try to run it with everything else i didnt mention here. worst case i learn stuff. but i will keep pushing for a smaller blockchain as much as i can.