nostr:npub1gcxzte5zlkncx26j68ez60fzkvtkm9e0vrwdcvsjakxf9mu9qewqlfnj5z logcat is just sitting there going wild with this same message over and over.. i think wot.utxo.one is down, but I guess this is someone's (or multiple people's) relay list where they just put ALL the wot relays in there? lmao.. but also, yeah, why so much looping over this?

```
2025-08-02 10:51:13.324 9045-9067 NormalizedRelayUrl app_process64 W Rejected Error wss://wot.utxo.one
wss://wot.utxo.one
wss://nostrelites.org
wss://wot.nostr.party
wss://wot.sovbit.host
wss://wot.girino.org
wss://relay.lnau.net
wss://wot.siamstr.com
wss://relay.lexingtonbitcoin.org
wss://wot.azzamo.net
wss://wot.swarmstr.com
wss://zap.watch
wss://satsage.xyz
wss://wons.calva.dev
wss://wot.zacoos.com
wss://wot.shaving.kiwi
wss://wot.tealeaf.dev
wss://wot.nostr.net
wss://relay.goodmorningbitcoin.com
wss://wot.sudocarlos.com
```

Discussion

Yeah wot.utxo.one is down. There's a lot of oversized relay lists out there too 😅

oh, i turned off the force tor for outbox. and it does seem to calm down a bit after a while. 342/909 relays 😁

There is one post that has this entire list as a single relay hint, which is wrong.
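
For reference, a relay hint is a single URL in the tag's third slot (assuming NIP-10 tag layout); a rough sketch of the difference, with placeholder ids and URLs:

```kotlin
// Sketch only, assuming NIP-10 tag layout; the event id and URLs are placeholders.
val parentId = "abc123..." // hypothetical parent event id (hex)

// correct: one relay URL in the hint slot
val goodTag = listOf("e", parentId, "wss://relay.example.com", "reply")

// wrong: an entire relay list crammed into the single hint slot
val badTag = listOf("e", parentId, "wss://wot.utxo.one wss://nostrelites.org wss://wot.nostr.party ...")
```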

does this seem like a bug? when I do a normal close on amethyst, and then open again, tons of these rate limits on the resolver. (graphene, tor features off).

this is cool! will keep testing.

```
2025-08-02 11:09:18.459 13624-16100 Relay rela...bunker.com app_process64 W OnFailure null null Unable to resolve host "relay.nsecbunker.com": No address associated with hostname null
2025-08-02 11:09:18.460 13624-19995 Relay relay.pleb.to app_process64 W OnFailure null null Unable to resolve host "relay.pleb.to": No address associated with hostname null
2025-08-02 11:09:18.461 13624-20533 Relay wot.sovbit.host app_process64 D OnOpen (ping: 422ms)
2025-08-02 11:09:18.461 13624-18075 SHA256Pool app_process64 W Pool running low in available digests
2025-08-02 11:09:18.462 13624-16059 Relay aegis.relayted.de app_process64 D OnOpen (ping: 371ms)
2025-08-02 11:09:18.463 847-22642 resolv netd E Query from 10187 denied due to limit: 256
2025-08-02 11:09:18.463 847-22642 resolv netd E GetAddrInfoHandler::run: from UID 10187, max concurrent queries reached
2025-08-02 11:09:18.464 847-22646 resolv netd E Query from 10187 denied due to limit: 256
2025-08-02 11:09:18.464 847-22646 resolv netd E GetAddrInfoHandler::run: from UID 10187, max concurrent queries reached
2025-08-02 11:09:18.464 847-22652 resolv netd E Query from 10187 denied due to limit: 256
2025-08-02 11:09:18.464 847-22652 resolv netd E GetAddrInfoHandler::run: from UID 10187, max concurrent queries reached
```

Some of this is normal: because the close is a hard close, it cancels the whole queue of things to process. The pool one seems new to me 🤔
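
The resolv lines are Android's netd refusing DNS lookups past the per-UID cap of 256 concurrent queries shown in the log, which dialing ~900 relays at once will hit. A rough sketch (not Amethyst's actual code) of bounding the connection fan-out with a coroutine semaphore:

```kotlin
import kotlinx.coroutines.coroutineScope
import kotlinx.coroutines.launch
import kotlinx.coroutines.sync.Semaphore
import kotlinx.coroutines.sync.withPermit

// Hypothetical sketch: cap how many relay connections (and therefore DNS lookups)
// are in flight at once, well below netd's per-UID limit of 256 from the log above.
private val connectPermits = Semaphore(permits = 32)

suspend fun connectAll(relayUrls: List<String>, connect: suspend (String) -> Unit) =
    coroutineScope {
        relayUrls.forEach { url ->
            launch {
                connectPermits.withPermit {
                    // only 32 connect() calls run concurrently; the rest wait here
                    connect(url)
                }
            }
        }
    }
```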

did some decent testing today. my overall feeling is, it's not loading my feed as much as it did when I first opened it. when i double-check against one of my home relays, i see posts and replies from my follows that i don't see in amethyst.

though, sometimes it does seem to load more context, e.g. when viewing a specific thread.

i guess, yeah, it's just too much connecting, e.g. the resolver etc. is just not gonna work (900 relays need to be boiled down to more like 10-20)..
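
One common way that boiling-down is done (a sketch only, assuming each follow's write relays are known, e.g. from their kind-10002 lists; this is not Amethyst's actual selection code): greedily pick the relays that cover the most not-yet-covered follows and stop at a small cap.

```kotlin
// Hypothetical sketch of greedy relay selection: given each follow's declared
// write relays, pick a small set that still covers as many follows as possible.
fun pickRelays(
    writeRelaysByFollow: Map<String, Set<String>>, // pubkey -> write relay URLs
    maxRelays: Int = 20
): Set<String> {
    val uncovered = writeRelaysByFollow.keys.toMutableSet()
    val chosen = mutableSetOf<String>()
    while (uncovered.isNotEmpty() && chosen.size < maxRelays) {
        // relay that covers the most still-uncovered follows
        val best = writeRelaysByFollow
            .filterKeys { it in uncovered }
            .values.flatten()
            .groupingBy { it }
            .eachCount()
            .maxByOrNull { it.value }
            ?.key ?: break
        chosen += best
        uncovered.removeAll { follow -> best in writeRelaysByFollow[follow].orEmpty() }
    }
    return chosen
}
```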

i assume you already know all this, i just wanted to try to help by reading the logs, trying things out, and giving feedback.

🙏

Most posts are not going to the right places. So it is likely that if a client doesn't send its reply to the parent message's outbox/inbox correctly and just sends it to the logged-in user's relays, then it doesn't show up in any outbox. We will keep testing.
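
For context, the routing the outbox model expects looks roughly like this (a sketch assuming NIP-65 read/write lists; `readRelaysOf`/`writeRelaysOf` are made-up helpers): the reply goes to the parent author's read relays plus the replier's own write relays, not just to the logged-in user's configured relays.

```kotlin
// Hypothetical sketch of NIP-65-style reply routing. readRelaysOf()/writeRelaysOf()
// stand in for lookups against each user's relay list (kind 10002).
fun relaysForReply(
    parentAuthor: String,
    me: String,
    readRelaysOf: (String) -> Set<String>,
    writeRelaysOf: (String) -> Set<String>
): Set<String> =
    readRelaysOf(parentAuthor) + // the parent author's inbox, so they see the reply
        writeRelaysOf(me)        // my own outbox, so my followers can find it
```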

i'm seeing tons of notes now that, prior to outbox, weren't there. very cool!

Looks suspiciously like the Haven sample Blastr & Import list (with a few extra relays).

https://github.com/bitvora/haven/blob/master/relays_blastr.example.json

And yes, Amethyst tends to retry writing failing events in a loop until it eventually gets rate limited. For folks blasting their notes all over the place, I can see this becoming an amplification loop, even though Haven itself doesn't retry when blasting notes. I already trimmed down Haven's sample list a bit, but I guess we should reduce it to only 2 or 3 relays, since a lot of people will just take the sample list as-is (CC: nostr:nprofile1qqsw9n8heusyq0el9f99tveg7r0rhcu9tznatuekxt764m78ymqu36cpr3mhxue69uhhyetvv9ujucnfw33k76twwpshy6ewvdhk6tcpzdmhxue69uhhwmm59e6hg7r09ehkuef0qy2hwumn8ghj7un9d3shjtn4w3ux7tn0dejj7ne6u4e). On Amethyst's side, ideally it should have exponential backoff with a max number of retries and then give up.
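
A rough sketch of that suggested retry policy (not Amethyst's actual implementation): exponential backoff with jitter and a hard cap on attempts, after which it gives up.

```kotlin
import kotlinx.coroutines.delay

// Hypothetical sketch of the retry policy suggested above: exponential backoff
// with jitter and a hard cap on attempts, then give up instead of looping forever.
suspend fun <T> retryWithBackoff(
    maxAttempts: Int = 5,
    initialDelayMs: Long = 1_000,
    maxDelayMs: Long = 60_000,
    block: suspend () -> T
): T? {
    var delayMs = initialDelayMs
    repeat(maxAttempts) { attempt ->
        try {
            return block()
        } catch (e: Exception) {
            if (attempt == maxAttempts - 1) return null // out of retries, give up
        }
        delay(delayMs + (0..delayMs / 2).random()) // jitter avoids synchronized retries
        delayMs = (delayMs * 2).coerceAtMost(maxDelayMs)
    }
    return null
}
```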