bitcoin core doesn't need 600+GB - you can run a pruned node with as little as ~6GB of disk

it also has configuration options to rate-limit traffic however you want (i learned this during the most recent bout of ridiculous spam), and it's rock solid
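for reference, both of those are one-liners in bitcoin.conf - a rough sketch, the values below are only illustrative and should be tuned to your own disk and bandwidth:

```
# bitcoin.conf (illustrative values, not a recommendation)
prune=6000            # keep only ~6000 MiB of recent block data; older blocks are discarded after validation
maxuploadtarget=5000  # soft upload cap, in MiB per 24h window
blocksonly=1          # optional: don't relay unconfirmed transactions, which cuts most spam-driven traffic
```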

mine runs without a glitch on a mini pc with a UPS and mobile network failover (my phone, which i don't take outside anyway) - i haven't seen it crash once

it does still require downloading the full 600GB+ of data during initial sync, it just prunes it down as it goes

its UI is perfectly fine, though not built for mobile, but with a wireguard tunnel and one of the mobile clients that can talk to the RPC interface it's very easy to put a mobile front end on it as well
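to give an idea, exposing the RPC over the tunnel is just a few lines in bitcoin.conf - the 10.8.0.x addresses below are made up, substitute whatever your wireguard interface actually uses, and generate the rpcauth line with the rpcauth.py script that ships with core:

```
# bitcoin.conf - bind the RPC to the wireguard interface only (addresses are placeholders)
server=1
rpcbind=10.8.0.1
rpcallowip=10.8.0.0/24
rpcauth=<user:salted-hash generated by share/rpcauth/rpcauth.py>
```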

bitcoin core is a very mature piece of software at this stage, i wish i could say the same about most of the other implementations


Discussion

You're really supposed to have the entire blockchain when connecting an LN node though. Core Lightning requires a non-pruned node; it checks for early blocks to verify the backend isn't pruned.
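You can see roughly what that check amounts to with bitcoin-cli (as I understand it, CLN asks the backend for early blocks at startup and bails if they're gone):

```
# is the backend pruned? getblockchaininfo reports it directly
bitcoin-cli getblockchaininfo | grep -E '"pruned"|"pruneheight"'

# roughly what the startup check boils down to: request an early block;
# a pruned node that has already discarded it returns a "Block not available (pruned data)" error
bitcoin-cli getblock "$(bitcoin-cli getblockhash 1)"
```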

In my personal experience, much of it as a support contractor, there have been horrible issues with indexes and data corruption that take an LN node down for maintenance.

probably cheap-ass nodes running on rPis off SD cards, which are cheap and crappy compared to an M.2 NVMe...

stupid, because an HP G3 mini pc like the one i have cost $160 and the SSD cost $200, and it runs a fork of ubuntu 22

and my backup disk is a SATA SSD with a USB adapter, which in total cost me about $120

if you connect them over a wireguard tunnel they don't even have to be at the same site, though it's probably better if they are
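something like this on each end is all it takes - the keys and the 10.9.0.x subnet below are placeholders:

```
# /etc/wireguard/wg0.conf on the bitcoind box (placeholder keys and addresses)
[Interface]
Address = 10.9.0.1/24
ListenPort = 51820
PrivateKey = <bitcoind-box-private-key>

[Peer]
# the lightning node at the other site
PublicKey = <ln-box-public-key>
AllowedIPs = 10.9.0.2/32
PersistentKeepalive = 25
```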

the hardest thing to arrange is the network, IMO... but the fibre optic network here on Madeira seems pretty solid. i currently use a router in a separate part of the building, but i bet if i had my own connection installed and put it all on the UPS i wouldn't even need the failover network when the power goes out

I only use SSD storage for read-heavy workloads. I have a pile of dead consumer SSDs, many of them Samsung single-layer units. I've just had horrible luck with write-heavy tasks on SSDs, so I use bulk spinning-disk storage for anything with heavy writes.

A couple dozen units over about 10 years to be clear lol

well, that's how it goes i guess... you can probably get a lot more out of your disks by spending on memory instead, if the kernel is tuned to flush the page cache less often and you have good power backup
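the kind of tweak i mean is the standard linux writeback knobs - example values only, and the obvious trade-off is that more unflushed data is lost if the power dies without the UPS:

```
# /etc/sysctl.d/99-writeback.conf (example values only)
vm.dirty_background_ratio = 20    # start background writeback later than the default 10
vm.dirty_ratio = 60               # let dirty pages accumulate longer before forcing synchronous writes
vm.dirty_expire_centisecs = 6000  # dirty data may sit in cache up to 60s before writeback
```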

I've just switched over to TrueNAS CORE as my SAN with 128GB of memory cache, so hopefully I'll see good results. I have over 1TB of memory in the rack, so I've got plenty to spare. TrueNAS has consumed almost all of it.

Also, I have a small datacenter's worth of enterprise gear that's meant to run services, but enterprise storage is expensive. I just don't see the point of using junker consumer gear when I already have enterprise machines.

enterprise hardware can take mid-range consumer drives too, you just need redundancy - the CPU and memory are the cheap part

What's also kind of dumb to me is that when a worker thread/task crashes for whatever reason, the app keeps running, so unless you're manually watching logs you can't tell the node has been degraded.
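A crude workaround is to poll the node's own getinfo from cron instead of trusting the logs - a sketch only, the field names are from lncli's getinfo output so double-check them against your version, and swap the logger call for whatever alerting you actually use:

```
#!/bin/sh
# minimal watchdog: flag the node as degraded when lnd stops answering
# or reports it is no longer synced, rather than waiting for someone to read the logs.
# assumes lncli and jq are on the PATH; adapt for lightning-cli if you run CLN.
if ! lncli getinfo 2>/dev/null | jq -e '.synced_to_chain and .synced_to_graph' >/dev/null; then
    echo "ln node degraded on $(hostname)" | logger -t ln-watchdog
    # send a mail/push/telegram alert here as well
fi
```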

well, LND is a dumpster fire even worse than CLN

the lightning stack is really immature at this point, and there aren't enough competent devs on the case to make a better fork

If you're running it in a container or on headless Linux there's no UI, and I haven't found a usable desktop or web RPC UI that's still maintained. I'd love to be proven wrong on that, though.