Thanks. I ran into the AI issue again today where it fills the context with filler like "it's a mistake" and so on. nostr:nprofile1qyxhwumn8ghj7mn0wvhxcmmvqy28wumn8ghj7un9d3shjtnyv9kh2uewd9hsz9thwden5te0v4jx2m3wdehhxarj9ekxzmnyqyv8wumn8ghj7un9d3shjtnndehhyapwwdhkx6tpdsqzqdqz2nspr27696p9sh9lae8ervlsw4y6d3rglcqfhulvwej69ccmce99dx you had something similar, right? Watch this video ;-)

My key takeaway from the talk:

Stay out of the “dumb zone” of the context window.

- Use only a portion of the maximum tokens for active reasoning, instead of stuffing the window full of logs, JSON, and random history.

- Compact frequently: summarize earlier steps into a short, sharp markdown document and start new runs from that.

- Let subagents read and search the codebase in depth, and have them return only the relevant files, functions, and insights, so the main agent can work with a small context (rough sketch of both ideas after this list).
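Purely as an illustration (not from the talk), here is a minimal Python sketch of the compaction and subagent-digest ideas. The names `call_model`, `CompactingContext`, and `subagent_digest` are hypothetical placeholders; `call_model` stands in for whatever LLM client you actually use.

```python
from pathlib import Path

def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token of English text.
    return len(text) // 4

def call_model(prompt: str) -> str:
    # Placeholder for your real LLM call (API client, local model, ...).
    raise NotImplementedError

class CompactingContext:
    """Keep the working context small: once a soft token budget is exceeded,
    summarize old steps into a short markdown note and continue from that."""

    def __init__(self, soft_budget_tokens: int = 20_000):
        self.soft_budget = soft_budget_tokens
        self.summary_md = ""            # compacted history as markdown
        self.recent_steps: list[str] = []

    def add_step(self, step: str) -> None:
        self.recent_steps.append(step)
        if estimate_tokens(self.prompt()) > self.soft_budget:
            self.compact()

    def compact(self) -> None:
        # Summarize everything so far into a short, sharp markdown doc
        # and start the next run from that summary alone.
        self.summary_md = call_model(
            "Summarize the work below into a short markdown note "
            "(key decisions, open questions, relevant files):\n\n"
            + self.prompt()
        )
        self.recent_steps.clear()

    def prompt(self) -> str:
        return self.summary_md + "\n\n" + "\n".join(self.recent_steps)

def subagent_digest(question: str, file_paths: list[str]) -> str:
    # Hypothetical subagent: reads the files in depth, but hands back only
    # the relevant files, functions, and insights, keeping the main agent small.
    corpus = "\n\n".join(Path(p).read_text(encoding="utf-8") for p in file_paths)
    return call_model(
        f"Question: {question}\n\n"
        "From the code below, return ONLY the relevant file names, function "
        "signatures, and a few sentences of insight:\n\n" + corpus
    )
```

The main agent would only ever see `prompt()` and the digests, never the raw logs or full files, which is the whole point of staying out of the dumb zone.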
