How can you use nostr:nprofile1qqsyu43zk46umw6dthksj0jgc6xd8aeylt25w9p0psds4mp09v4qktspz4mhxue69uhkummnw3ezummcw3ezuer9wchsz9thwden5te0wfjkccte9ejxzmt4wvhxjme0qythwumn8ghj7un9d3shjtnwdaehgu3wvfskuep0r5427r to privately access top AI models and pay per query using #cashu right now 👇🏻

1. Go to chat.routstr.com and log in with Nostr

2. Add my base URL: https://privateprovider.xyz

3. Pay a Lightning invoice to load a balance onto your nsec

4. Select from 16 top models

5. Ask the models your questions or vibe code
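The steps above use the web UI, but the same flow could be scripted if the proxy exposes an OpenAI-compatible chat endpoint (an assumption on my part; the endpoint path, the Bearer-token convention for the cashu token, and the model name below are all illustrative, not confirmed by this post):

```python
import json

# Base URL from step 2 of the post.
BASE_URL = "https://privateprovider.xyz"

def build_chat_request(cashu_token: str, model: str, question: str) -> dict:
    """Assemble the pieces of one pay-per-query chat call.

    Assumption: the proxy speaks the OpenAI chat-completions wire format
    and accepts a cashu token in the Authorization header. Returned as a
    dict so it can be handed to any HTTP client.
    """
    return {
        "url": f"{BASE_URL}/v1/chat/completions",
        "headers": {
            "Authorization": f"Bearer {cashu_token}",  # cashu token as the "API key"
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": question}],
        }),
    }

req = build_chat_request("cashuA...", "some-model", "What is Nostr?")
```

From here any HTTP client can POST `req["body"]` to `req["url"]` with `req["headers"]`.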

🚫 no email

🚫 no phone number

🚫 no fiat credit cards

🚫 no subscriptions

🚫 no kyc

🚫 no logs

Use AI in a privacy- and human-freedom-oriented way to build the future you wish to see!


Discussion

Thank you nostr:nprofile1qqsfvpc4r0g66gsxeqjhqlm2tqadntk3943k06kkym4jfg5ns7fe4tspzamhxue69uhhyetvv9ujumn0wd68ytnzv9hxgtct8xrw9 for the fat zap ⚡⚡⚡

Much gratitude to you for building this.

How are the sats charged? I mean, does it charge based on the number of questions you ask it?

It is per question.

A model's "cost" is based on the number of "tokens" it burns to "reason". Most AI providers charge a $/million-tokens rate for input and another for output, and those per-token prices differ between models: more advanced models cost more. This pricing is often obscured by AI platforms behind monthly subscriptions covering an unknown number of queries.
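The per-million-token arithmetic looks like this (the prices in the example are made-up illustrative numbers, not any real model's rate card):

```python
def query_cost_usd(input_tokens: int, output_tokens: int,
                   usd_per_mtok_in: float, usd_per_mtok_out: float) -> float:
    """Dollar cost of one query given per-million-token input/output prices."""
    return (input_tokens / 1_000_000) * usd_per_mtok_in \
         + (output_tokens / 1_000_000) * usd_per_mtok_out

# Illustrative: 1,500 input tokens at $3/Mtok plus 800 output tokens at $15/Mtok.
cost = query_cost_usd(1_500, 800, 3.00, 15.00)
# 0.0045 + 0.012 = 0.0165 USD for this one query
```

The same function with a pricier model's rates shows why advanced models cost several times more per identical question.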

I did my best to convert the costs I saw into sats per query for users. Payments work like this: you load a wallet via Lightning or cashu on chat.routstr.com. A cashu token is sent to my proxy with each query; the proxy passes the query on to the upstream AI APIs, refunds the difference between the estimated and actual cost, and returns your chat response.
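The estimate-then-refund settlement described above can be sketched as follows. This is illustrative logic only, assuming the proxy can never take more than the token you sent; the real proxy's accounting is not shown in the post:

```python
def settle_query(estimated_sats: int, actual_sats: int) -> dict:
    """Settle one pay-per-query request.

    The user pre-pays `estimated_sats` as a cashu token; once the real
    upstream cost `actual_sats` is known, the proxy keeps only what the
    query cost (capped at what was sent) and refunds the rest.
    """
    charged = min(actual_sats, estimated_sats)  # can't spend more than was sent
    refund = estimated_sats - charged
    return {"charged": charged, "refund": refund}

settle_query(100, 37)   # sent 100 sats, query cost 37 -> 63 sats refunded
```

If the actual cost exceeds the estimate, this sketch simply refunds nothing; how the real proxy handles a shortfall is not described in the post.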

Every question is an individual cashu spend. Some queries cost 1 sat while others can run 5,000+ sats depending on the model and the task. The same holds for fiat AI platforms that charge per query, where bigger computations or long conversations produce heavier total outputs.

I will be playing heavily with this over the week and tweaking things into line, but I need it live to see how it behaves as a proxy on both ends.