Yeah, for this spider stuff (fetching events for whitelisted users on the relay) and for bulk import, there are some serious challenges in keeping memory from blowing up.
Discussion
One thing that comes to mind is TigerBeetle, which chose Zig for deterministic, explicit memory control (no GC surprises), but their use case, a financial database, is much more predictable. For your relay's open-ended datasets, careful pipelining is probably the main solution, regardless of the language.
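To make "careful pipelining" concrete, here is a minimal Go sketch (the Event type, the flush sink, and the channel/batch sizes are hypothetical placeholders, not from any particular relay codebase): a bounded channel gives the producer backpressure, and the consumer flushes fixed-size batches, so peak memory stays roughly constant no matter how many events stream through.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
)

// Event is a hypothetical stand-in for a relay event (e.g. a Nostr event).
type Event struct {
	Raw string
}

// flush is a placeholder for the real sink, e.g. a batched database insert.
func flush(batch []Event) {
	fmt.Printf("flushed %d events\n", len(batch))
}

func main() {
	// Bounded channel: once 1024 events are in flight, the producer blocks,
	// so the reader can never race ahead of the writer and pile up memory.
	events := make(chan Event, 1024)

	// Producer: stream events one at a time instead of loading the whole
	// dump (or the whole spider result set) into memory first.
	go func() {
		defer close(events)
		scanner := bufio.NewScanner(os.Stdin)
		for scanner.Scan() {
			events <- Event{Raw: scanner.Text()}
		}
		if err := scanner.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "read error:", err)
		}
	}()

	// Consumer: accumulate a fixed-size batch, flush it, reuse the slice.
	const batchSize = 500
	batch := make([]Event, 0, batchSize)
	for ev := range events {
		batch = append(batch, ev)
		if len(batch) == batchSize {
			flush(batch)
			batch = batch[:0] // reuse the backing array, no reallocation
		}
	}
	if len(batch) > 0 {
		flush(batch)
	}
}
```

The same shape should work for both the spider and bulk import: swap stdin for the fetch loop or the dump reader, and flush for a batched write to your store.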
Are you experiencing real GC pain points or just the challenges of processing large data streams?