Anyone know of anyone else working on AI Agent token-by-token text response streaming in a nostr-native way?
We’re building nostr:npub130mznv74rxs032peqym6g3wqavh472623mt3z5w73xq9r6qqdufs7ql29s, a decentralised AI gateway paid for with Cashu/Lightning on a per-request basis.
I’ve also been tinkering with nostr:npub1dvmcpmefwtnn6dctsj3728n64xhrf06p9yude77echmrkgs5zmyqw33jdm and have a Routstr CVM here: https://www.contextvm.org/s/62c2b22b295e211dc354cae7f14b3fba22ade9e9bcc037cead16782493380b9b
This is a Nostr-native way of interacting with AI. Ideally it should be integrated with Goose or similar CLI agents, giving you a Nostr-native AI agent that runs and pays for itself on a per-request basis.
Our ContextVM doesn’t stream text yet; there’s a fix for that I’m working on.
Thanks I'll take a look!
Hmu when you wanna do text streams over nostr. I've been getting back to the git-workflows-over-nostr thing and I want to stream logs. There's probably great overlap.
Yes pls!! Will hit you up. We’re doing a couple releases this week. Soon after that.
How are you doing this? I'm curious. I've already done some testing on streams over CVM, and it works out of the box using MCP notifications or resource subscriptions, with the possibility of bidirectional communication between the server and client.
I did one attempt with nostr:nprofile1qqs2qzx779ted7af5rt04vzw3l2hpzfgtk0a2pw6t2plaz4d2734vngpz4mhxue69uhk2er9dchxummnw3ezumrpdejqz8rhwden5te0dpshvetw9ejxzmnrdah8wctev3jhvtnrdaks6xq4qp when we did the DVM-based version. We batched the output, sending a new event every 5 seconds or every X lines, whichever came first.
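The batching approach described above can be sketched roughly like this. This is a hypothetical illustration, not the DVM implementation: `publish` stands in for whatever signs and sends the nostr event, and the interval and line-count thresholds are just example values.

```python
import time

class OutputBatcher:
    """Buffer streamed lines and flush them as one batch (one event)
    either every `interval` seconds or every `max_lines` lines,
    whichever comes first. `publish` is a stand-in for signing and
    sending a nostr event containing the batched output."""

    def __init__(self, publish, interval=5.0, max_lines=20):
        self.publish = publish
        self.interval = interval
        self.max_lines = max_lines
        self.buffer = []
        self.last_flush = time.monotonic()

    def add_line(self, line):
        self.buffer.append(line)
        interval_due = time.monotonic() - self.last_flush >= self.interval
        if interval_due or len(self.buffer) >= self.max_lines:
            self.flush()

    def flush(self):
        # Emit whatever has accumulated, then reset the timer.
        if self.buffer:
            self.publish("\n".join(self.buffer))
            self.buffer = []
        self.last_flush = time.monotonic()
```

A final `flush()` after the stop condition makes sure the tail of the output isn't lost.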
I think some configurability here would be useful. If you really need extreme throughput, it probably shouldn't go in nostr events at all; use some kind of specialised server instead.
We're looking at some kind of stream-now-post-later approach to at least keep the cryptographic audit trail intact. Have you come across anyone doing something along those lines?
I don't understand what that approach consists of. Can you share more details?
I already do the post-later thing by uploading logs to Blossom and signing a completion message that refers to the blob.
Gotcha. What about converting each response, after the stop condition, to a kind 30023 markdown event, signing it, and posting it to a relay? Anything like that out there in the wild?
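A minimal sketch of that idea, assuming the finished response fits in one event: wrap the accumulated markdown as an unsigned NIP-23 long-form event (kind 30023). The `d`-tag scheme and title handling here are illustrative choices, and the `id`/`sig` fields would be filled in by whatever nostr signing library is already in use.

```python
import time

def to_longform_event(pubkey_hex: str, markdown: str, title: str) -> dict:
    """Build an unsigned kind 30023 (NIP-23 long-form) event from the
    completed streamed output. Signing (id/sig) is deliberately omitted."""
    now = int(time.time())
    return {
        "kind": 30023,
        "pubkey": pubkey_hex,
        "created_at": now,
        "tags": [
            ["d", f"response-{now}"],  # identifier for the replaceable event
            ["title", title],
        ],
        "content": markdown,  # the full markdown body of the response
    }
```

Publishing this after the stop condition gives relays a single signed, durable record of the whole response.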
In my case it won't fit in a single event. But yes, if it fits, you can do that.
I'm not aware of any existing solutions that do this
The approach we could take with CVM is to leverage MCP's built-in notification system for asynchronous, long-running jobs, which I believe fits this use case. MCP also has the concept of resource subscription, allowing you to subscribe to changes in a specific resource exposed by the server. For example, you can subscribe to 'hello.txt', and if the document changes you receive a notification indicating that the resource has changed. For this specific case, though, I think regular notifications would suffice.
Here's how it would work: if the CI runner is behind a CVM server, the git client or any authenticated user can call a tool named 'run_ci' (for example). Inside the JSON-RPC object representing this call there is a 'progressToken' value set by the client, which the server then uses for correlation. Once the tool is called, the server streams back progress notifications embedding the same 'progressToken' the client defined. This is how correlation works in MCP. The server sends all the notifications generated during the process and ends with a final response, which is the runner's result.
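The correlation flow can be sketched with the raw JSON-RPC shapes. The tool name 'run_ci' and the token value are illustrative; the `_meta.progressToken` field on the request and the `notifications/progress` method follow the MCP spec's progress mechanism.

```python
# Client -> server: tool call carrying a client-chosen progress token.
call = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "run_ci",                         # hypothetical tool name
        "arguments": {"ref": "main"},
        "_meta": {"progressToken": "ci-42"},      # set by the client
    },
}

def progress_notification(token, progress, message):
    """Server -> client: streamed while the job runs, echoing the
    client's token so the client can correlate it with the call."""
    return {
        "jsonrpc": "2.0",
        "method": "notifications/progress",
        "params": {
            "progressToken": token,
            "progress": progress,
            "message": message,
        },
    }

note = progress_notification(
    call["params"]["_meta"]["progressToken"], 0.5, "tests running"
)
```

The final result then arrives as a normal JSON-RPC response to `id: 1`, after the last notification.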
We have been experimenting with this stream-based approach, and it already works out of the box in CVM. It is also bidirectional, allowing the client and server to exchange messages within the context of a tool call execution. This effectively enables reconciliation protocols with 'need' and 'want' states, similar to what Git or even negentropy does under the hood.