Well, if you are running Alby Wallet Connect on your own node, prepare to restart it frequently. It has a bug that hangs it due to DB lock issues. If there are people who have time on their hands and the skills to look into it, please do! 🐶🐾🫡

#[0] FYI

I'll try to debug once I am back from the fiat mines, otherwise I can't zap anyone right now because of that. 🐶🐾😭😭😭

Discussion

can you describe what you mean by the DB lock issue?

It's under heavy development and I am happy to check and prioritize this.

I am trying to get logs now, give me 5 min please! 🐶🐾🫡

Here are the logs! 🐶🐾🫡

2023/05/16 00:49:49 /build/service.go:103 database is locked (5) (SQLITE_BUSY)

[0.355ms] [rows:0] SELECT * FROM nostr_events WHERE nostr_id = "4859dc6f74f0b43b47e4a1b8b2d04a4593b81a01461b891afc60bfc7f3aae67c"

2023/05/16 00:49:49 /build/service.go:112 database is locked (5) (SQLITE_BUSY)

[0.348ms] [rows:0] SELECT * FROM apps WHERE apps.nostr_pubkey = "71143dc6a4053f80eb86b86266bc986a7977f38612e9ed7e59f592033647d93e" ORDER BY apps.id LIMIT 1

2023/05/16 00:49:49 /build/service.go:70 database is locked (5) (SQLITE_BUSY)

[0.383ms] [rows:0] SELECT * FROM nostr_events WHERE nostr_id = "4859dc6f74f0b43b47e4a1b8b2d04a4593b81a01461b891afc60bfc7f3aae67c" ORDER BY nostr_events.id LIMIT 1

{"level":"error","msg":"database is locked (5) (SQLITE_BUSY)","time":"2023-05-16T00:49:49Z"}

{"eventId":"0a0344730654cd7d837c207166a206735871a89ec017306ac0a4078ba4e390bc","eventKind":23194,"level":"info","msg":"Processing Event","time":"2023-05-16T00:49:50Z"}

2023/05/16 00:49:50 /build/service.go:103 database is locked (5) (SQLITE_BUSY)

[0.362ms] [rows:0] SELECT * FROM nostr_events WHERE nostr_id = "0a0344730654cd7d837c207166a206735871a89ec017306ac0a4078ba4e390bc"

2023/05/16 00:49:50 /build/service.go:112 database is locked (5) (SQLITE_BUSY)

[0.416ms] [rows:0] SELECT * FROM apps WHERE apps.nostr_pubkey = "71143dc6a4053f80eb86b86266bc986a7977f38612e9ed7e59f592033647d93e" ORDER BY apps.id LIMIT 1

2023/05/16 00:49:50 /build/service.go:70 database is locked (5) (SQLITE_BUSY)

[0.334ms] [rows:0] SELECT * FROM nostr_events WHERE nostr_id = "0a0344730654cd7d837c207166a206735871a89ec017306ac0a4078ba4e390bc" ORDER BY nostr_events.id LIMIT 1

{"level":"error","msg":"database is locked (5) (SQLITE_BUSY)","time":"2023-05-16T00:49:50Z"}

that's super helpful. I see you use SQLite. Is there a chance you can run on PostgreSQL?

I am using the Umbrel store app, so that is not really possible for this install. I will make it standalone after I am back from the mines. 🐶🐾🫡

could you share this on a GH issue?

I think we might need to process payment requests sequentially in that case - which should be fine on personal nodes.
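Yeah, even something as blunt as a single mutex (or one worker goroutine) in front of the DB and payment path would probably do it. Rough sketch with made-up names, not the actual NWC code:

package main

import (
	"fmt"
	"sync"
)

// one lock guarding everything that touches the SQLite file
var dbMu sync.Mutex

func handleEvent(id string) {
	dbMu.Lock()
	defer dbMu.Unlock()
	// look up nostr_events / apps, pay the invoice, store the result...
	fmt.Println("processed", id)
}

func main() {
	var wg sync.WaitGroup
	for _, id := range []string{"zap-1", "zap-2", "zap-3"} {
		wg.Add(1)
		go func(id string) {
			defer wg.Done()
			handleEvent(id) // concurrent requests now queue up instead of racing the DB
		}(id)
	}
	wg.Wait()
}

On a personal node the extra latency from serializing should not be noticeable. 🐶🐾🫡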

I'll probably even submit a patch, but I cannot do it now, only 8 hours later. Sorry! 🐶🐾🫡🫂

😁

Ok, had a minute to do it!

๐Ÿถ๐Ÿพ๐Ÿซก https://github.com/getAlby/nostr-wallet-connect/issues/76

hehe, on which planet are you? Your 8 hours pass quickly :D

thanks for the issue!

PR submitted, but I haven't tested it yet. 🐶🐾🤣😂🫡🫂

https://github.com/getAlby/nostr-wallet-connect/pull/77

Seems like a race condition. It gets into that state when I schedule a lot of zaps rapidly! 🐶🐾🫡

This is partly why I stopped using my own node for direct zaps for now and am sticking to non-custodial. Too many glitches in the matrix still.

Someone has to work out the kinks! 🐶🐾😂🫂😭

Hmm… you're saying that's what's been causing my node to go down every few days?

Maybe, check the logs for the wallet connect! 🐶🐾🫡

This explains A LOT!

I already submitted a PR that fixes it. So wait until it's merged, a new image is built, and the Umbrel app is updated. Or build your own image locally and swap it into docker-compose.yml. 🐶🐾🫡
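Rough idea of the local swap until then (the image name below is a placeholder, check the actual service and image names in your Umbrel install's docker-compose.yml):

docker build -t nostr-wallet-connect:local .

then point the app's service at it in docker-compose.yml, e.g. image: nostr-wallet-connect:local, and restart the app. 🐶🐾🫡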