could you share your prompt?

sometimes when things loop like this, it's best to start a new chat and start over.


Discussion

The exact prompt was: lightning payments are not working. After a user pays the 21sats, the note scheduler page should come up for them to use.

I should not have to start all over after a product/service I pay for gets stuck in a money-consuming loop. I didn't have to restart chats in DeepSeek yesterday, even with 1800+ lines of code in replies....

The product or service just shouldn't get stuck in money consumption loops...

The AI thinking shouldn't be a charge.

thanks for the prompt.

you can use deepseek with shakespeare if you feel it's better than sonnet.

if it's spending tokens, it's a charge. that's not designated by us, that's designated by the ai provider.

do you have a fully working version of this app that you built with deepseek?

I did my original note scheduler 100% free with DeepSeek and ChatGPT. All was working until something changed for LN over the course of 5 months, and now nothing I've made using it works.

Like my RPG game with in-game items you could buy for sats.

Everything works, except for using LN to buy said items.

looking back at your screenshot, this seems odd to me because Claude isn't a thinking model. this looks to me like it was GLM 4.6. we've seen this issue with GLM before; it's why we stopped using it as our own branded model and it's why i've stopped using it entirely. are you sure you didn't originally try GLM because it was cheaper and then switch to Claude down below in the model selector? and did you try Claude after you switched to the more expensive model?

It could have been, I'm not 100% sure. It was my 2nd or 3rd project I tried. Others after this have been about the same but with "viewed" instead of "thinking".

I just went to Shakespeare.diy to see what model is used as my default.

you can change that and it just remembers the last one you selected.

i could be wrong, but i'm wondering if what i'm guessing happened is what happened, based on what i'm seeing on screen. it just can't happen with Claude since it's not a thinking model, and i've seen this output before with GLM.

either way, i'll take this into consideration because we do want shakespeare to be the best it can be for non-devs. it's why we built it.

Could the MCP part, (having the Ai go over nostr docs basically) be the cause of excessive token usage passed onto customers?

i don't believe so. it's the system prompt. it's massive. it's what teaches AI how to build nostr properly, amongst other things.

we actually are working on skills, which will make the overall system prompt much leaner, which will in turn make building applications cheaper overall, since you'd only load a skill (additional prompt, code, context, etc.) when you needed it.
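to illustrate the idea (this is just a rough sketch, not shakespeare's actual implementation — the skill names and their text here are made up), a skills system keeps the always-sent base prompt small and appends extra context only when the task calls for it:

```python
# Hypothetical sketch of on-demand "skills": a lean base system prompt,
# plus optional skill prompts attached only when a task needs them.

BASE_PROMPT = "You are a coding assistant."  # small prompt sent on every request

# Hypothetical skill registry: skill name -> extra prompt/context loaded on demand.
SKILLS = {
    "nostr": "Nostr skill: follow NIP-01 event structure when building nostr apps.",
    "lightning": "Lightning skill: generate invoices and verify payment before unlocking features.",
}

def build_system_prompt(task: str) -> str:
    """Attach only the skills whose names appear in the task description."""
    parts = [BASE_PROMPT]
    for name, skill_prompt in SKILLS.items():
        if name in task.lower():
            parts.append(skill_prompt)
    return "\n\n".join(parts)

# A Lightning-related task pulls in only the Lightning skill, so most
# requests spend tokens on a much smaller system prompt.
prompt = build_system_prompt("Add a Lightning paywall to my note scheduler")
```

the point is that the per-request token cost scales with the skills a task actually uses, instead of every request paying for one massive prompt.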