The context is very large because there are a lot of components in the MKStack template needed to make a working Nostr-enabled application. You could always manually nuke the CONTEXT.md file, but you're going to get wildly different and potentially bad results. It's optimized to work well as-is.
nostr:npub18ams6ewn5aj2n3wt2qawzglx9mr4nzksxhvrdc4gzrecw7n5tvjqctp424
How do you collapse the context of Dork so it does not use so many tokens?
Discussion
But the context grows gradually, as I see on ppq. So that means it does not load the whole context file from the beginning?
Oh. I haven't used nostr:npub16g4umvwj2pduqc8kt2rv6heq2vhvtulyrsr2a20d4suldwnkl4hquekv4h so I am not aware of any issues. I'll try and test tomorrow. Goodnight for now.
Also, there seems to be an issue with both Goose and MKStack (is Stacks based on Goose?) whereby prompt caching doesn't work properly when using our API key.
We would love to solve this issue because it seems to be costing users more than double what they would pay via OpenRouter.
It is on our to-do list to look into this, but perhaps you nostr:npub18ams6ewn5aj2n3wt2qawzglx9mr4nzksxhvrdc4gzrecw7n5tvjqctp424 can also look into why this might be happening.
That is painful to learn 😂
Could you elaborate a bit on "prompt cache"?
Prompt caching is something that can be done with Anthropic models: the large, static prefix of a prompt (like a big context file) gets cached on Anthropic's side, so subsequent requests that reuse the same prefix are billed at a much lower rate for those tokens.
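For reference, here's a minimal sketch of how a cache breakpoint is requested with the Anthropic TypeScript SDK. The model name and the largeContextDocument variable are placeholders for illustration, not anything from MKStack itself:

```typescript
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

// Hypothetical: pretend this holds the big, rarely-changing context
// (e.g. the contents of CONTEXT.md from the MKStack template).
const largeContextDocument = "...";

const response = await client.messages.create({
  model: "claude-3-5-sonnet-latest", // placeholder model name
  max_tokens: 1024,
  system: [
    {
      type: "text",
      text: largeContextDocument,
      // Marks this block as a cache breakpoint: everything up to here
      // is cached and billed at the discounted cached-read rate on
      // later requests that share the same prefix.
      cache_control: { type: "ephemeral" },
    },
  ],
  messages: [{ role: "user", content: "Summarize the project setup." }],
});

// usage reports cache_creation_input_tokens / cache_read_input_tokens,
// which is one way to verify whether caching is actually kicking in.
console.log(response.usage);
```

If the cache_read_input_tokens count stays at zero across requests, the prefix isn't being reused, which would explain the doubled costs mentioned above.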
The Stacks agent, Dork, is based on Vercel's AI-SDK. Maybe this is something that nostr:npub1q3sle0kvfsehgsuexttt3ugjd8xdklxfwwkh559wxckmzddywnws6cd26p or another one of our developers can take a look at when they have some time. Thanks for bringing this to my attention!
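If Dork routes everything through the AI-SDK, one possibility worth checking is that no cache breakpoint is being set on the big system prompt. A rough sketch of what that looks like with the @ai-sdk/anthropic provider (assuming the per-message providerOptions shape from the AI-SDK docs; untested against Dork itself):

```typescript
import { anthropic } from "@ai-sdk/anthropic";
import { generateText } from "ai";

// Hypothetical stand-in for the large MKStack context file.
const largeContextDocument = "...";

const { text } = await generateText({
  model: anthropic("claude-3-5-sonnet-latest"), // placeholder model name
  messages: [
    {
      role: "system",
      content: largeContextDocument,
      // Ask the Anthropic provider to set a cache breakpoint on this
      // message, so the big static prefix is cached between requests.
      providerOptions: {
        anthropic: { cacheControl: { type: "ephemeral" } },
      },
    },
    { role: "user", content: "Add a profile page to the app." },
  ],
});

console.log(text);
```

Without that providerOptions hint, the full context would be re-sent and re-billed at the normal input rate on every turn, which matches the cost behavior described above.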