What kind of specs is it eating? I self-host, but I'm curious what the scaling looks like on your end.
AI moderation and the relay are the two components that generate the most CPU usage in the infrastructure (16 GB RAM, 8 vCores, 5 million events, 400,000 hosted files, ~4,000 Nostr addresses).
Does it support GPU acceleration for the AI tasks, and can that be pointed at a separate endpoint, or does it need to run locally?
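Not sure how the project actually wires this up, but what I have in mind is something like the sketch below: the relay host stays CPU-only and forwards content to a GPU box over HTTP. The endpoint URL, request body, and response shape here are all made up for illustration, not the project's real API.

```typescript
// Hypothetical sketch: offload AI moderation to a separate GPU-backed
// endpoint instead of running it on the relay host itself.
// MODERATION_URL, the request body, and ModerationResult are assumptions.
interface ModerationResult {
  allowed: boolean;
  labels: string[];
}

async function moderateRemotely(content: string): Promise<ModerationResult> {
  // Points at the machine that actually has the GPU.
  const url = process.env.MODERATION_URL ?? "http://gpu-box.local:8080/moderate";
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ content }),
  });
  if (!res.ok) {
    throw new Error(`moderation endpoint returned ${res.status}`);
  }
  return (await res.json()) as ModerationResult;
}
```

If that kind of split is supported, the relay's own sizing could stay modest and only the moderation box would need beefier hardware.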
Those specs are not too outrageous.