Replying to PayPerQ

There's been some buzz in the last two days around LLM APIs running pay-per-query via lightning payments.

As the creator of an AI service that prioritizes lightning, I wanted to share my experience and also learn a bit from the audience on this matter.

The ultimate dream we all have in the LN community is for each and every query (inference) to be paid for with the requisite amount of satoshis. That way, the user never has to keep a balance with the service and suffer the host of inconveniences that come with that.

When I originally built PPQ, I tried to implement exactly this feature. But when I actually got to doing this, I realized it was pretty hard:

First, generative AI queries are unpredictable in their cost. When a user sends a request, the cost of that request is generally not known until the output has finished streaming.
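To make that concrete, here is a minimal sketch (Python, with made-up per-token prices; the chunk stream stands in for whatever upstream LLM API is used) of why the true cost only exists once the last chunk has arrived:

```python
from typing import Iterable, Tuple

# Illustrative per-token prices, not any provider's real rates.
PRICE_PER_INPUT_TOKEN_USD = 3 / 1_000_000    # $3 per 1M input tokens
PRICE_PER_OUTPUT_TOKEN_USD = 15 / 1_000_000  # $15 per 1M output tokens

def run_query(input_tokens: int, chunk_stream: Iterable[str]) -> Tuple[str, float]:
    """chunk_stream stands in for whatever upstream streaming LLM API is used."""
    output_chunks = []
    output_tokens = 0

    for chunk in chunk_stream:
        output_chunks.append(chunk)
        output_tokens += 1  # assume one token per streamed chunk, for simplicity

    # Only here, after the stream has finished, is the true cost known.
    cost_usd = (input_tokens * PRICE_PER_INPUT_TOKEN_USD
                + output_tokens * PRICE_PER_OUTPUT_TOKEN_USD)
    return "".join(output_chunks), cost_usd
```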

Second, even if one decided on some sort of fixed price per query, the latency of settling a lightning payment costs precious time and reduces the snappiness of the end product. I don't want users to wait an additional second each time for the payment to clear before getting their answer.

To address this, my best idea was to charge an "extra amount" on the user's first query. That way, my service would store a de facto extra balance on behalf of the user. When the user submits subsequent queries, the system can draw down on this "micro balance" instantly, so it doesn't need to wait for the next payment to clear. The micro balance also mitigates cases where the user's output turns out more expensive than expected. So each subsequent query always draws down on that micro balance, and the user's realtime payments are not paying for the query itself; they are paying to top that micro balance back up, over and over again.
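Roughly, that flow looks like this (a Python sketch with an illustrative 3,000-sat cushion; not PPQ's actual code):

```python
BUFFER_TARGET_SATS = 3000  # size of the cushion; purely illustrative

class MicroBalance:
    """Sketch of the first-query-overcharge scheme described above."""

    def __init__(self) -> None:
        self.sats = 0

    def first_query_invoice(self, estimated_cost_sats: int) -> int:
        # Overcharge the first query so the service holds a cushion on the
        # user's behalf; only the cushion is kept as a balance.
        self.sats = BUFFER_TARGET_SATS
        return estimated_cost_sats + BUFFER_TARGET_SATS

    def settle_query(self, actual_cost_sats: int) -> int:
        # The answer has already been streamed against the cushion, so nothing
        # waited on a payment. Now bill a top-up that refills the cushion.
        self.sats -= actual_cost_sats
        top_up_invoice = BUFFER_TARGET_SATS - self.sats
        self.sats = BUFFER_TARGET_SATS  # assume the top-up invoice eventually settles
        return top_up_invoice
```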

However, even this method has some weaknesses. How large should that extra amount on the first query be? In theory, the micro balance needs to be as large as the largest possible cost of a single query. If it isn't, the service makes itself vulnerable to an attack where users consistently write queries that exceed the amount of money in their micro balances. But the maximum cost of a gen AI query can actually be pretty large nowadays, especially with certain models. So the user's first query would always carry a weird "sticker shock" where they are paying $1-2 for their first query. It creates confusion.
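As a back-of-the-envelope illustration with made-up numbers (not any particular model's real pricing):

```python
# Worst-case sizing of the cushion, with purely illustrative numbers: a premium
# model at $60 per 1M output tokens and a 16,000-token maximum output.
price_per_output_token = 60 / 1_000_000   # USD, illustrative
max_output_tokens = 16_000                # illustrative output cap

worst_case_cost = max_output_tokens * price_per_output_token
print(worst_case_cost)  # 0.96 -> roughly $1, before input tokens and any margin
```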

Aside from these problems, the other big problem is that the lightning consumer ecosystem of wallets and exchanges largely does not yet support streaming payments. The only one that does, to my knowledge, is @getAlby with the "budgeted payments" function in their browser extension.

So even if you were to build a service that could theoretically accept payments on a per query basis, the rest of the consumer facing ecosystem is not yet equipped to actually stream these payments.

In the end, I just adopted a boring old "top up your account" scheme where users deposit chunks of money on the website and then draw that balance down slowly over time. While boring, it works just fine for now.
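The whole scheme boils down to something like this (again an illustrative Python sketch; the settlement callback stands in for whatever the lightning backend provides):

```python
class Account:
    """Sketch of the top-up scheme; names are illustrative, not PPQ's code."""

    def __init__(self) -> None:
        self.balance_sats = 0

    def on_deposit_settled(self, amount_sats: int) -> None:
        # Called when the lightning backend reports that a deposit invoice was
        # paid (e.g. via a webhook or an invoice-subscription stream).
        self.balance_sats += amount_sats

    def can_accept_query(self, minimum_sats: int = 100) -> bool:
        # Cheap pre-check before streaming starts; the threshold is illustrative.
        return self.balance_sats >= minimum_sats

    def charge_query(self, cost_sats: int) -> None:
        # Debited only after the response has finished streaming and the true
        # cost is known; the query itself was never blocked on a payment.
        self.balance_sats -= cost_sats
```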

I would like to hear from the community on this issue. Am I missing something? Is there a better way to tackle this? Maybe ecash has a cool solution to this?

nostr:nprofile1qyt8wumn8ghj7etyv4hzumn0wd68ytnvv9hxgtcpzemhxue69uhks6tnwshxummnw3ezumrpdejz7qpq2rv5lskctqxxs2c8rf2zlzc7xx3qpvzs3w4etgemauy9thegr43sugh36r nostr:nprofile1qyxhwumn8ghj7mn0wvhxcmmvqyehwumn8ghj7mnhvvh8qunfd4skctnwv46z7ctewe4xcetfd3khsvrpdsmk5vnsw96rydr3v4jrz73hvyu8xqpqsg6plzptd64u62a878hep2kev88swjh3tw00gjsfl8f237lmu63q8dzj6n nostr:nprofile1qyxhwumn8ghj7mn0wvhxcmmvqydhwumn8ghj7mn0wd68ytnzd96xxmmfdecxcetzwvhxgegqyz9lv2dn65v6p79g8yqn0fz9cr4j7hetf28dwy23m6ycq50gqph3xc9yvfs

Regarding latency: lightning payments have to find a route, which might fail, so they are too slow.

Maybe Cashu will be quicker, because you just need a stream of strings. This has the problem that the mint might rug people, of course.

Maybe what you can do is just run your own mint, which the user charges with a reasonable buffer, and you stream payments from the tokens the user created with you.

This probably solves the speed problem, but not the buffer problem.

To minimize the buffer, maybe some Cashu capability has to be developed so the sender can authorize streams up to a maximum amount, etc.
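A rough sketch of that idea, where the sender authorizes a capped stream and the service redeems tiny tokens chunk by chunk, cutting the stream off at the cap (the wallet and mint calls are hypothetical placeholders, not an existing Cashu NUT or library API):

```python
from typing import Callable, Iterable, Iterator

def stream_paid_response(
    chunks: Iterable[str],
    get_token: Callable[[int], str],      # hypothetical: asks the user's wallet for a tiny ecash token
    redeem_token: Callable[[str], None],  # hypothetical: redeems that token at the mint
    max_sats: int,
    sats_per_chunk: int = 1,
) -> Iterator[str]:
    """Stream output chunk by chunk, redeeming small tokens, never past the cap."""
    spent = 0
    for chunk in chunks:
        if spent + sats_per_chunk > max_sats:
            # The sender's authorized budget is exhausted: stop serving output
            # instead of giving it away unpaid.
            break
        redeem_token(get_token(sats_per_chunk))
        spent += sats_per_chunk
        yield chunk
```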

nostr:nprofile1qqs9pk20ctv9srrg9vr354p03v0rrgsqkpggh2u45va77zz4mu5p6ccpzemhxue69uhk2er9dchxummnw3ezumrpdejz7qgkwaehxw309a5xjum59ehx7um5wghxcctwvshszrnhwden5te0dehhxtnvdakz7qrxnfk maybe you have better insights
