Possibly yea.

Additionally, there are ways to predict the cost of a query from its input. If those methods are accurate, they could make the required buffer amount more dynamic: basic questions like the one you asked above might demand a much lower buffer than more advanced ones.
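To make the idea concrete, here's a minimal sketch of a dynamic buffer sized from a rough cost estimate of the incoming prompt. Everything here is an assumption for illustration: the ~4-characters-per-token heuristic, the pricing constants, and the safety factor are placeholders, not any provider's actual numbers.

```python
# Sketch: size a per-query buffer from a rough cost estimate, instead of
# requiring one fixed top-up amount for every request.

def estimate_query_cost_sats(prompt: str,
                             price_per_1k_tokens_sats: float = 10.0,
                             expected_output_ratio: float = 2.0) -> float:
    """Very rough estimate: chars/4 ~= input tokens; output is assumed
    to be a multiple of the input. All constants are illustrative."""
    input_tokens = max(1, len(prompt) // 4)
    total_tokens = input_tokens * (1 + expected_output_ratio)
    return total_tokens / 1000 * price_per_1k_tokens_sats

def required_buffer_sats(prompt: str, safety_factor: float = 3.0) -> float:
    """Buffer = estimated cost times a safety margin, so a short question
    locks up far less than a long, complex one."""
    return estimate_query_cost_sats(prompt) * safety_factor

print(required_buffer_sats("What is 2+2?"))
print(required_buffer_sats("Summarize this document: " + "x" * 8000))
```

The point is just that the buffer scales with the query instead of being a flat number; a real version would use the model's tokenizer and live pricing.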

But I didn't look too deeply into that stuff. I just wanted to ship AI products, and figured the top-up method was "good enough" for the time being.

I may revisit it in the near future though. Excited to see what routstr might come up with, and always happy to ideate on collabing with routstr in some way.


Discussion

I joined the routstr team recently.

I've worked on such a solution myself:

https://github.com/9qeklajc/ecash-402-client

where you pay ecash per request.

This method is, as you mentioned, for sure not perfect, but it's a decision the provider makes, accepting the risk of losses.
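For readers unfamiliar with the pay-per-request idea, here's a hedged sketch: the client attaches a single-use ecash token to each API call, and the provider redeems it before answering. The header name `X-Cashu` and the token string below are illustrative assumptions, not the actual ecash-402-client protocol.

```python
import json

def build_paid_request(payload: dict, ecash_token: str) -> dict:
    """Wrap an OpenAI-style request body with a per-request payment header.
    Header name and token format are assumptions for illustration."""
    return {
        "headers": {
            "Content-Type": "application/json",
            "X-Cashu": ecash_token,  # single-use token spent on this call
        },
        "body": json.dumps(payload),
    }

req = build_paid_request(
    {"model": "some-model", "messages": [{"role": "user", "content": "hi"}]},
    ecash_token="cashu-token-placeholder",  # hypothetical token value
)
```

The provider-side counterpart would verify and redeem the token with the mint before forwarding the request to the model; if redemption fails, it returns a 402.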

I also wrote up some ideas on the topic, about a smart client and dynamic provider:

https://github.com/ecash-402/ecash-402-specs

Feedback is welcome.

Hmm, I wonder if PPQ could adopt this once you polish it up a bit? The choice between topping up and paying per query would still be up to your users, but it seems like this option does have some possible advantages.

It would also be really cool to see an official doc written up on the potential advantages of this model vs. topping up, and then have it shown in the readme of this repo.

Yeah, I'm currently cleaning up and improving some stuff, but basically the code is for a client that the user runs locally and hooks up to any AI-driven software that uses the OpenAI API protocol (e.g. Goose, Roo Code, Cline, etc.).
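The local-client idea can be sketched roughly like this: since those tools speak the OpenAI API, pointing their base URL at a local proxy lets the proxy handle payment before forwarding. The names below (`UPSTREAM`, the stub response, the omitted payment step) are illustrative assumptions, not the ecash-402-client implementation.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

UPSTREAM = "https://provider.example/v1"  # hypothetical provider endpoint

class OpenAIProxy(BaseHTTPRequestHandler):
    """Minimal local proxy stub: tools POST OpenAI-style requests here."""

    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        # 1. attach/redeem an ecash token here (payment step, omitted)
        # 2. forward `body` to UPSTREAM + self.path, stream the reply back
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b'{"note": "forwarding stub"}')

    def log_message(self, *args):
        pass  # keep the sketch quiet

# To run: HTTPServer(("127.0.0.1", 8080), OpenAIProxy).serve_forever()
# then set the tool's OpenAI base URL to http://127.0.0.1:8080/v1
```

The payment and forwarding steps are the interesting parts and are deliberately left as comments here; the sketch only shows where they sit in the request path.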

I'll soon release the client with more docs and features.

PPQ or other providers would only need to implement the interface based on the provider-client spec, which, as you mentioned, isn't well documented yet (working on it 😉).