I would hook up our platform with a platoon of orchestrated agent functions and switch out the current LLM calls we're using, then slowly onboard our fiat clients to that stream. First, though, I'd ask you how you make it anonymous and which LLMs we can use ^^
Discussion
Anonymous is easy, it's just a random token associated with the request. In the Nostr AI assistant app I'm working on, I plan on making it the pubkey of lmzap_nsec = sha256(user_nsec + "lmzap")
So that it’s deterministic from your nostr identity.
Then it would just be a matter of improving the backends, adding Claude, Gemini, etc.
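Adding backends could be as simple as a small routing layer, something like this (all names here are hypothetical, just to show the shape of it):

```python
from typing import Protocol

class Backend(Protocol):
    """Minimal interface each LLM provider would implement."""
    def complete(self, prompt: str) -> str: ...

class ClaudeBackend:
    def complete(self, prompt: str) -> str:
        raise NotImplementedError("call the Anthropic API here")

class GeminiBackend:
    def complete(self, prompt: str) -> str:
        raise NotImplementedError("call the Gemini API here")

# Registry of available backends; adding a provider is just another entry.
BACKENDS: dict[str, Backend] = {
    "claude": ClaudeBackend(),
    "gemini": GeminiBackend(),
}

def route(model: str, prompt: str) -> str:
    """Dispatch a request to whichever backend the user picked."""
    return BACKENDS[model].complete(prompt)
```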