I just need something that doesn't require a smartphone, or multiple cobbled-together servers with sub-optimal security design that never stay running, have no UIs, and require 600GB of storage. I could get over that last part if bitcoin-core were a stable piece of software.
Discussion
bitcoin core doesn't need 600+GB - you can run a pruned node as small as 6GB
it also has configuration to rate limit traffic however you want (i learned this during the most recent bout of ridiculous spam), and it's rock stable
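for reference, a minimal bitcoin.conf sketch covering both of those knobs - the numbers are just examples, tune to taste:

```
# bitcoin.conf - example values
prune=6000             # keep roughly 6GB of block data (value is in MiB)
maxuploadtarget=5000   # cap upload traffic at about 5000 MiB per 24h
blocksonly=1           # optionally skip relaying unconfirmed transactions to cut bandwidth further
```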
mine runs without a glitch on a mini pc with a UPS and a mobile network failover (my phone, which i don't take outside anyway), haven't seen it crash once
it does still require downloading all ~600GB during the initial sync though, it just prunes it down as it goes
its UI is perfectly fine, just not built for mobile - but with a wireguard tunnel and one of the mobile clients that can talk to the RPC, it's very easy to put a mobile front end on it as well
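the rough shape of that setup, as a sketch - the keys and addresses here are placeholders, not anything real:

```
# /etc/wireguard/wg0.conf on the node
[Interface]
PrivateKey = <node-private-key>
Address = 10.8.0.1/24
ListenPort = 51820

[Peer]
# the phone
PublicKey = <phone-public-key>
AllowedIPs = 10.8.0.2/32
```

then bind the RPC to the tunnel only, so it's never exposed on a public interface:

```
# bitcoin.conf
rpcbind=10.8.0.1
rpcallowip=10.8.0.0/24
```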
bitcoin core is a very mature piece of software at this stage, i wish i could say the same about most of the other implementations
You're really supposed to have the entire blockchain when connecting an LN node though. core-lightning requires a non-pruned node; it checks for early blocks to verify it's not pruned.
In my personal experience, much of it as a support contractor, the horrible issues have been with indexes and data corruption, which take an LN node down for maintenance.
probably cheap-ass nodes running on RPis off SD cards, which are cheap and crappy compared to an M.2 NVMe...
which is stupid, because an HP G3 mini pc like the one i have cost $160, the SSD cost $200, and it runs off a fork of ubuntu 22
and my backup disk is a SATA ssd with USB adapter that in total cost me about $120
if you connect them using a wireguard tunnel they don't have to be at the same site either, but probably better to do it that way
the hardest thing to arrange is the network, IMO... but the optic fibre network here on Madeira seems pretty solid. i currently use a router in a separate part of the building, but i bet if i had my own connection installed and put it all on UPS i would not even need the failover network when the power goes out
I only use SSD storage for read-heavy workloads. I have a pile of dead consumer SSDs, many of them Samsung single-layer units. I've just had horrible luck with write-heavy tasks on SSDs, so I use bulk disk storage for anything with heavy writes.
A couple dozen units over about 10 years to be clear lol
well, that's how it works i guess... you can probably get a lot more out of your disks by spending on memory instead, if the kernel is tweaked to flush the cache less often and you have good power backup
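for example, something like this in sysctl - values are illustrative, and it's only sane with a UPS, since more dirty pages in RAM means more data lost on a power cut:

```
# /etc/sysctl.d/99-writeback.conf - illustrative values
vm.dirty_background_ratio = 20      # start background writeback at 20% of RAM dirty
vm.dirty_ratio = 60                 # only block writers past 60% dirty
vm.dirty_expire_centisecs = 6000    # let dirty pages sit up to 60s before writeback
vm.dirty_writeback_centisecs = 3000 # wake the flusher every 30s instead of every 5s
```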
Right now I've switched over to TrueNAS Core as my SAN device with 128GB of memory cache, so hopefully I see good results. I have over 1TB of memory in the rack, so I've got plenty to spare. TrueNAS has consumed almost all of it.
Also, I have a small datacenter's worth of enterprise gear; it's meant to run services, but enterprise storage is expensive. I just don't see the point of using some junker consumer gear when I have enterprise machines too.
enterprise hardware can take mid-range consumer-grade drives too, you just need redundancy - the CPU and memory are the cheap part
What's also kind of dumb to me is that when a worker thread/task crashes for whatever reason, the app keeps running, so unless you're manually watching logs you can't know the node has been degraded.
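a minimal sketch of the pattern that would at least surface that - supervise workers, recover panics, flip a health flag an operator can poll (the names here are made up for illustration, not LND's actual code):

```go
package main

import (
	"log"
	"sync/atomic"
	"time"
)

// degraded flips to true when any worker dies, so a health check can expose it
var degraded atomic.Bool

// supervise runs fn in a goroutine, recovers panics, and marks the node degraded
func supervise(name string, fn func()) {
	go func() {
		defer func() {
			if r := recover(); r != nil {
				log.Printf("worker %s crashed: %v", name, r)
				degraded.Store(true)
			}
		}()
		fn()
	}()
}

func main() {
	supervise("gossip-sync", func() {
		panic("simulated crash")
	})
	time.Sleep(100 * time.Millisecond)
	if degraded.Load() {
		log.Println("node degraded; alert the operator instead of limping along silently")
	}
}
```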
well, LND is a dumpster fire even worse than CLN
the lightning stack is really immature at this point, and there aren't enough competent devs on the case to make a better fork
If you are running it in a container or on headless Linux there is no UI, and I have not found a usable desktop or web RPC UI that is still maintained. I'd love to be proven wrong on that though.
😂 You're killing me, man. I love all that stuff. Pet software labyrinth.
lmao, I feel like most devs don't have sysadmin experience and it shows
please tell me - would sysadmin knowledge for a dev be like knowing the "____" in real estate development?
for example, the foundations?
I would call in-depth knowledge of computer hardware the foundation analog. I don't know that I have a good analog for sysadmin. It's deploying and using your own product on your own hardware, following practices you'd expect other enterprises to follow.
alright, then I could translate that to my trade as the very forging of a habitat - more precisely, its conception. a sysadmin thus conceives not only the final use of the product but also the forms and ways it gets built
i don't have a lot, never done it professionally, nearly got my Network+ back in 2016, been using unix shells since 1995 and i can write adequate scripts and dockerfiles
as a dev, i'm very interested in the structural limits of hardware though... when i was a kid i saw too many amiga demos and what those crazies could do left a big stamp on me
like, on the subject of disks, i know badger has configuration that can reduce the rate of log flushing and compactions, and kernels can be tweaked to delay flushing the cache to disk - but eventually it does have to get written
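a sketch of what i mean with badger (v4 API; values illustrative - you're trading durability for fewer writes, so only do this with good power backup):

```go
package main

import (
	"log"

	badger "github.com/dgraph-io/badger/v4"
)

func main() {
	// fewer flushes and compactions; assumes reliable power, there's a data loss window on a crash
	opts := badger.DefaultOptions("/tmp/badger-demo").
		WithSyncWrites(false). // don't fsync every write, let the OS cache batch them
		WithNumCompactors(2)   // fewer concurrent compactions = less write amplification
	db, err := badger.Open(opts)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// writes now land in the OS page cache and get flushed lazily
	if err := db.Update(func(txn *badger.Txn) error {
		return txn.Set([]byte("key"), []byte("value"))
	}); err != nil {
		log.Fatal(err)
	}
}
```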
and regarding bitcoin-core, and indeed btcd, their database implementations leave a lot to be desired (they are both running ancient leveldb). and all the hype about strfry, when it is basically using leveldb *with bonus memory mapped storage*, is so horribly yawn to me i can't stand it
since 2016 there has been the WiscKey paper, which demonstrated that splitting key/value stores into two separate logs drastically reduces the disk writes required, and lets you engineer databases to make more use of the key fields, which are preferentially kept in memory and flushed infrequently - this led to dgraph, which as far as i know is the best performing graph database
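a toy sketch of the core idea - nothing like the paper's actual implementation: keys live in a small in-memory index, values go to an append-only log, so value data gets written sequentially exactly once:

```go
package main

import (
	"fmt"
	"io"
	"log"
	"os"
)

// ptr locates a value inside the append-only value log
type ptr struct{ off, len int64 }

type store struct {
	vlog  *os.File       // append-only value log on disk
	index map[string]ptr // small key index in memory (an LSM tree in WiscKey proper)
}

func open(path string) (*store, error) {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR|os.O_APPEND, 0o644)
	if err != nil {
		return nil, err
	}
	return &store{vlog: f, index: make(map[string]ptr)}, nil
}

// put appends the value once, sequentially, and records only a small pointer per key
func (s *store) put(key string, val []byte) error {
	off, err := s.vlog.Seek(0, io.SeekEnd)
	if err != nil {
		return err
	}
	if _, err := s.vlog.Write(val); err != nil {
		return err
	}
	s.index[key] = ptr{off, int64(len(val))}
	return nil
}

// get does a single random read into the value log via the in-memory pointer
func (s *store) get(key string) ([]byte, error) {
	p, ok := s.index[key]
	if !ok {
		return nil, fmt.Errorf("key not found")
	}
	buf := make([]byte, p.len)
	_, err := s.vlog.ReadAt(buf, p.off)
	return buf, err
}

func main() {
	s, err := open("/tmp/vlog-demo")
	if err != nil {
		log.Fatal(err)
	}
	defer s.vlog.Close()
	if err := s.put("tx:1", []byte("some big value")); err != nil {
		log.Fatal(err)
	}
	v, _ := s.get("tx:1")
	fmt.Println(string(v))
}
```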
i did mean to build a badger driver for btcd back in 2019 but i never got around to it... but btcd's performance is so abysmal i would never use it in production, and it's the default backend for LND - which to me just reinforces your point: most devs have little to no understanding of the hardware their code runs on
I don't but my dad is a network engineer and my husband is an electrical engineer and used to be a sys admin. That's why we have all these old electronics. 😂 Probably why I'm so interested in relays.
I told him I need a terabyte for a full BTC node and he was like

for full nodes i highly recommend mini pc's
they don't need to be very performant - even 4GB of memory is enough, just the disk needs to be decently fast and big, since most of the data processing happens in one thread by necessity of how the chain structure works
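for reference, a bitcoin.conf sketch for a small box like that - numbers are illustrative, not a recommendation:

```
# bitcoin.conf for a 4-8GB mini pc
dbcache=1024   # UTXO cache in MiB; bigger speeds up initial sync (default is 450)
par=2          # script verification threads, keep low on small CPUs
```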
it's been a while since it was remotely practical to use a HDD, though a loud screaming 10k RPM 3.5" might almost do it
but for basics, just a 2.5" SATA SSD and a second one to dump the chain onto once in a while, so if it has a failure like nostr:nprofile1qythwumn8ghj7un9d3shjtnwdaehgu3wvfskuep0qythwumn8ghj7un9d3shjtnswf5k6ctv9ehx2ap0qy2hwumn8ghj7un9d3shjtnyv9kh2uewd9hj7qgwwaehxw309ahx7uewd3hkctcpzdmhxue69uhhqatjwpkx2urpvuhx2ue0qy2hwumn8ghj7mn0wd68ytn00p68ytnyv4mz7qgawaehxw309ahx7um5wghxy6t5vdhkjmn9wgh8xmmrd9skctcqyqpk2v724perw62x6nj0m6jvrgzyrmdr3j3dnk2p0wekqpkt42l4s3v0jrr is talking about, you don't have to wait forever for it to come back online
We have a NAS, but he doesn't want me messing with it. 🙄
anything less than a 10Gbit network connection to the disk is not gonna be much fun, 5Gbit is adequate
My cluster gets by with a 4Gbit LACP bond across 3 nodes; I haven't really noticed any performance degradation for what I do. I guess the caches make the most of it.
people say cln doesn't support pruned nodes, but that could just be boogle not letting us search for stuff from the last 5 years, because it seems like both cln and lnd support pruned nodes. a lightning head would know more. i can vouch for lnd at least, and litd. ⚡
The only way I got cln to work was with a full node. I think there is a way to prune it after a while, but if I already need 600GB of space to get it started, I guess I'll just keep it. When cln starts up it requests blocks from genesis forward, and if they're not there it fails and logs an error, something like "missing block data, cln doesn't support pruned nodes"
My old laptop only had 2GB of RAM. 😂 Wimpier than a π
You need a 2TB lol
Leave me alone to grieve. 😭

nope
You can run a pruned node with less.
A full node is already 1TB with electrum.
A full node with fulcrum needs 2TB going forward.
is fulcrum something CATman is talking about?
Fulcrum is an electrum server that does rapid indexing, which is great for deep wallets.
can't be very rapid if it uses 1400GB of space
It doesn't. A full node with electrum is currently 1TB. A full node with fulcrum is over 1TB now. So a 2TB SSD is a good idea for long-term use (especially if you want a faster lookup like fulcrum), so you don't have to do the initial block download again.
i just use xrdp and directly use the bitcoin core GUI, and the only thing i imagine happening next might be running CLN on it
but the chain grows like 100MB/day, so 10 days = 1GB, and with 350GB left that's 3500 days... ok, hah, my node will probably be good until the hardware fails - and yes, i have address and tx indexes enabled; it uses 680GB
my mini pc was 140 euros and has 8GB memory, so it's overkill hardware really - i just envisioned maybe running LN and one or two other things on it; i had imageproxy running on it for a while