I think we agree on more than we disagree. Have you seen any scenarios of Lightning fund loss that weren't the result of giving a software layer built on top of Lightning permission to perform some form of automated fund management?


Discussion

Yes.

I have seen people get rugged by their hosting company.

I have seen data get corrupted and static backups fail.

I have seen force closes cost thousands of dollars.

Not to mention all the devs building on top whose own software has bugs.

I'm sure I'll see more too.

Thanks. Do you have general guidelines on node operation? It seems like some combination of misplaced trust, bad code in the application ecosystem, and human error are the major problems. Force closures, however, seem to stem from interoperability and robustness issues in the major node implementations.

In general I would say keep small amounts on there. I wouldn't walk around the street with 150k in my pocket, and I would treat LN the same.

Don't host in the cloud, always update as soon as patches are released, and only connect with reputable peers.

And don't be too emotionally attached to lightning, it's not clear that it's a winning idea. It has many problems and mostly only works well with custodians or very technical users.

It is *highly* technical and I have found it leaves a lot of problems for the operator to resolve. Even when a safety feature exists, just getting it running can be administratively tricky. I *wanted* remote pgsql backups on CLN, but the initial sync kept failing about a minute in. Turns out systemd was like "process hasn't responded in a minute, better just stomp it and restart".
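For anyone who hits the same wall: that behavior is consistent with systemd's startup timeout or watchdog killing a service that takes a long time to become ready. A minimal sketch of a drop-in override that relaxes those limits (the unit name `lightningd.service` and the path are assumptions; adjust to your own setup):

```ini
# /etc/systemd/system/lightningd.service.d/override.conf
# Hypothetical drop-in for a CLN unit; names and values are examples.
[Service]
# Allow a long initial sync instead of killing the process at startup
TimeoutStartSec=infinity
# Disable the runtime watchdog if one was configured in the base unit
WatchdogSec=0
```

Then `systemctl daemon-reload` and restart the service so the override takes effect.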