The memories markdown file contains prompts describing how to structure memory, not the memories themselves; those accumulate through ongoing, consistent interactions.
Here are a couple of small snippets to explain:
---------
# Mike Memory Index
Portable long-term context for use with local or cloud LLMs
## Purpose
This directory contains a curated, portable dataset representing
“Mike” as a stable identity, thinker, and ongoing set of projects.
It is designed to be:
- model-agnostic,
- storage-agnostic (flat files first),
- selectively retrievable,
- safe to inject into constrained context windows.
This is not a chat log.
It is an explicit externalisation of long-term alignment and context.
---
## Files and roles
### `mike_profile.md`
**What it is:**
- Stable identity and background
- High-level biography
- Core orientation and long-lived facts
**Use when:**
- Initialising a new model or session
- Establishing baseline assumptions
- The model needs to “know who Mike is”
**Change frequency:** Rare
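To make the role concrete, a `mike_profile.md` following this scheme might open like the stub below. It is purely illustrative: the headings and bullet contents are assumptions about a reasonable layout, not Mike's actual profile.

```markdown
# Mike — Profile

## Identity
- Stable facts only: name, location, languages

## Background
- One-paragraph biography

## Orientation
- Long-lived values, working style, recurring themes
```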
-------------
## Retrieval guidance (for LLM wrappers)
- Do **not** inject everything by default.
- Prefer **selective retrieval** based on:
  - user intent,
  - topic overlap,
  - current conversation state.
Typical pattern:
1. Always include `mike_preferences.md` (small, high leverage).
2. Include `mike_profile.md` when identity matters.
3. Include `mike_state.md` for continuity in active work.
4. Pull specific sections from other files as needed.
Target injected memory:
- 300–800 tokens total per turn.
- Less is better than more.