Useful #AI/LLM techniques:
1. Periodically check context-window usage so you get warned before an overflow wipes the conversation.
2. Ask for an export of the context as a prompt that can recreate the conversation, then save it to your own notes so progress isn't lost.
3. Build your own CLI/interactive commands for data programs that need heavy crunching or logic. Jam on sub-logic in separate conversations so the primary context doesn't overflow.
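Techniques 1 and 2 can be sketched in a few lines. This is a minimal sketch, assuming the conversation is held locally as a list of role/content dicts; the 4-characters-per-token ratio and the 128k limit are rough assumptions, not exact tokenizer or model values.

```python
from pathlib import Path

CONTEXT_LIMIT_TOKENS = 128_000  # hypothetical model limit
WARN_RATIO = 0.8                # warn at 80% usage

def estimate_tokens(messages):
    """Rough token estimate: ~4 characters per token for English text."""
    return sum(len(m["content"]) for m in messages) // 4

def check_context(messages):
    """Technique 1: warn before the context window overflows."""
    used = estimate_tokens(messages)
    if used > CONTEXT_LIMIT_TOKENS * WARN_RATIO:
        print(f"Warning: ~{used} tokens used of {CONTEXT_LIMIT_TOKENS}; "
              "export the conversation before it gets truncated.")
    return used

def export_as_prompt(messages, path):
    """Technique 2: save the conversation as a prompt that can recreate it."""
    lines = ["Recreate the following conversation state and continue from it:", ""]
    for m in messages:
        lines.append(f"[{m['role']}]\n{m['content']}\n")
    Path(path).write_text("\n".join(lines), encoding="utf-8")

convo = [
    {"role": "user", "content": "Draft a CLI spec for the data cruncher."},
    {"role": "assistant", "content": "Here is a first pass at the spec..."},
]
check_context(convo)
export_as_prompt(convo, "context-export.md")
```

The exported file doubles as the note from technique 2: paste it into a fresh conversation to pick up where the old one left off.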
