Useful #AI/LLM techniques:

1. Periodically check how much of the context window is in use, so you get a warning before an overflow wipes the conversation (a rough token-count sketch follows this list).

2. Ask for an export of the context as a single prompt you can feed into a fresh session to recreate the conversation, then save that export to your own notes so you never lose progress (see the export sketch after this list).

3. Build your own CLI/interactive commands for programs that need heavy crunching or logic, and jam on the sub-logic in separate conversations so you don't overflow the primary context (see the shell sketch after this list).
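A rough sketch of #1, assuming an OpenAI-style message history and tiktoken for a local token count; the 200k window, the cl100k_base encoding, and the 80% warning threshold are placeholders for whatever model you're actually on:

```python
import tiktoken

CONTEXT_LIMIT = 200_000   # placeholder: your model's context window
WARN_AT = 0.80            # warn when ~80% of the window is used

def context_usage(messages):
    """Estimate how full the context window is for a chat-style history."""
    enc = tiktoken.get_encoding("cl100k_base")  # approximation; pick your model's encoding
    used = sum(len(enc.encode(m["content"])) for m in messages)
    ratio = used / CONTEXT_LIMIT
    if ratio >= WARN_AT:
        print(f"WARNING: ~{used} tokens ({ratio:.0%}) -- export before it overflows")
    return used, ratio
```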
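For #2, in practice you just ask the model to produce the export; this sketch only shows the shape of the artifact worth saving: one prompt that can be pasted into a fresh session, written to a hypothetical llm-notes folder:

```python
from datetime import date
from pathlib import Path

def export_as_prompt(messages, notes_dir="llm-notes"):
    """Serialize a chat history into a prompt that can recreate it in a new session."""
    lines = [
        "You are resuming a previous conversation. Here is everything so far;",
        "load it as context and continue where we left off.",
        "",
    ]
    lines += [f"[{m['role']}] {m['content']}" for m in messages]
    prompt = "\n".join(lines)

    out = Path(notes_dir)
    out.mkdir(exist_ok=True)
    (out / f"session-{date.today()}.md").write_text(prompt, encoding="utf-8")
    return prompt
```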
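For #3, a minimal sketch using Python's cmd module: the heavy crunching happens in your own interactive shell, and only the short summary gets pasted back into the conversation (the stats command is just a made-up example):

```python
import cmd
import statistics

class CrunchShell(cmd.Cmd):
    """Interactive commands that do the number-crunching outside the LLM's context."""
    prompt = "(crunch) "

    def do_stats(self, line):
        """stats 1 2 3 ...  -> one-line summary to paste back into the conversation."""
        nums = [float(x) for x in line.split()]
        if not nums:
            print("usage: stats <numbers...>")
            return
        print(f"n={len(nums)} mean={statistics.mean(nums):.3f} "
              f"stdev={statistics.pstdev(nums):.3f}")

    def do_quit(self, _):
        """Exit the shell."""
        return True

if __name__ == "__main__":
    CrunchShell().cmdloop()
```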


Discussion

The ability to export and import your program between conversations (new, independent memory spaces) and even between LLMs is an elite skill.

#AI #LLMs

nostr:nevent1qqs0v3hzry4r2prapugt7lkhvf4mpqq469pnqxrhzexrxle27m5szegzyq6ksa0l6u5mqmhtfswh5u9p7agqghgxwa6dy8q04lly4u4lj63wsqcyqqqqqqg99hcwm

Steve Yegge’s Beads has allowed me to tackle increasingly complex PRs

Thanks nostr:npub19a86gzxctwtz68l8zld2u9y2fjvyyj4juyx8m5geylssrmfj27eqs22ckt

Steve Yegge's Beads?

It’s software, not masonry

😂