There was a big AI track last year; obviously it’s a hot topic. Are you talking about decentralized compute for training, for actually executing LLMs, or both?
Both. And with regard to Nostr, decentralized infra for the relays. I experimented with a proof of concept: https://github.com/osamja/gpu-nostr-relay/tree/production-ready-cuda
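For context, here is a minimal sketch of the relay side of that idea: the basic NIP-01 message loop a Nostr relay runs (EVENT / REQ / CLOSE over a WebSocket). This is not code from the linked repo; it assumes the third-party `websockets` package (a recent version where the connection handler takes a single argument), keeps events in memory, and skips signature verification entirely, which is the part a GPU-accelerated relay would presumably batch onto CUDA.

```python
# Minimal in-memory Nostr relay sketch (NIP-01 subset).
# Illustration only; not the linked repo's implementation.
import asyncio
import json

import websockets

EVENTS = []          # naive in-memory event store
SUBSCRIPTIONS = {}   # sub_id -> (websocket, filters)


def matches(event, filters):
    """Very loose filter check: only honours 'kinds' and 'authors'."""
    for f in filters:
        if "kinds" in f and event.get("kind") not in f["kinds"]:
            continue
        if "authors" in f and event.get("pubkey") not in f["authors"]:
            continue
        return True
    return not filters  # no filters means match everything


async def handle(ws):
    async for raw in ws:
        msg = json.loads(raw)
        kind = msg[0]

        if kind == "EVENT":
            event = msg[1]
            # A GPU relay would batch signature checks here;
            # this sketch just accepts every event unverified.
            EVENTS.append(event)
            await ws.send(json.dumps(["OK", event.get("id", ""), True, ""]))
            # Fan the event out to live subscriptions.
            for sub_id, (client, filters) in list(SUBSCRIPTIONS.items()):
                if matches(event, filters):
                    await client.send(json.dumps(["EVENT", sub_id, event]))

        elif kind == "REQ":
            sub_id, filters = msg[1], msg[2:]
            SUBSCRIPTIONS[sub_id] = (ws, filters)
            for event in EVENTS:
                if matches(event, filters):
                    await ws.send(json.dumps(["EVENT", sub_id, event]))
            await ws.send(json.dumps(["EOSE", sub_id]))

        elif kind == "CLOSE":
            SUBSCRIPTIONS.pop(msg[1], None)


async def main():
    async with websockets.serve(handle, "localhost", 7777):
        await asyncio.Future()  # run until cancelled


if __name__ == "__main__":
    asyncio.run(main())
```

Any Nostr client pointed at ws://localhost:7777 can then publish and subscribe against it, which is enough to experiment with where the compute-heavy steps (verification, filtering at scale) would sit.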