Replying to mike

The memories markdown file contains prompts on how to structure memory, not the memories themselves. The memories come from ongoing, consistent interactions.

Here are a couple of small snippets to explain:

---------

# Mike Memory Index

Portable long-term context for use with local or cloud LLMs

## Purpose

This directory contains a curated, portable dataset representing “Mike” as a stable identity, thinker, and ongoing set of projects.

It is designed to be:

- model-agnostic,

- storage-agnostic (flat files first),

- selectively retrievable,

- safe to inject into constrained context windows.

This is not a chat log.

It is an explicit externalisation of long-term alignment and context.

---

## Files and roles

### `mike_profile.md`

**What it is:**

- Stable identity and background

- High-level biography

- Core orientation and long-lived facts

**Use when:**

- Initialising a new model or session

- Establishing baseline assumptions

- The model needs to “know who Mike is”

**Change frequency:** Rare

-------------

## Retrieval guidance (for LLM wrappers)

- Do **not** inject everything by default.

- Prefer **selective retrieval** based on:

  - user intent,

  - topic overlap,

  - current conversation state.

Typical pattern:

1. Always include `mike_preferences.md` (small, high leverage).

2. Include `mike_profile.md` when identity matters.

3. Include `mike_state.md` for continuity in active work.

4. Pull specific sections from other files as needed.

Target injected memory:

- 300–800 tokens total per turn.

- Less is better than more.
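
To make the retrieval guidance concrete, here is a rough sketch of the kind of wrapper logic it implies. The file names come from the index above; the directory name, the token estimate, and the selection flags are simplifications of mine, not part of any actual wrapper:

```python
# Sketch of a selective-retrieval wrapper following the guidance above.
# Assumes the memory files live in a flat "mike_memory" directory and
# approximates token counts from word counts instead of a real tokenizer.

from pathlib import Path

MEMORY_DIR = Path("mike_memory")


def approx_tokens(text: str) -> int:
    # Rough estimate; a real wrapper would use the model's own tokenizer.
    return int(len(text.split()) * 1.3)


def build_memory_context(identity_matters: bool, active_work: bool,
                         extra_sections: list[str] | None = None,
                         budget: int = 800) -> str:
    # 1. Always include preferences (small, high leverage).
    files = ["mike_preferences.md"]
    # 2. Profile only when identity matters.
    if identity_matters:
        files.append("mike_profile.md")
    # 3. State file for continuity in active work.
    if active_work:
        files.append("mike_state.md")
    # 4. Specific sections from other files, passed in by the caller.
    files += extra_sections or []

    parts, used = [], 0
    for name in files:
        text = (MEMORY_DIR / name).read_text()
        cost = approx_tokens(text)
        if used + cost > budget:  # stay inside the 300-800 token target
            break
        parts.append(text)
        used += cost
    return "\n\n".join(parts)
```

A wrapper would call something like `build_memory_context(identity_matters=True, active_work=False)` at the start of a session and prepend the result to the system prompt.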

interesting to see the inside mechanics. i assume it's:

Any model + your files --> Mike-aligned AI agent

i remember going to a Neuroscience meetup and one of the questions asked was "when you ponder things, do you use 'we' or 'I' in internal dialogue?"

some people said "I", some said "we", and those who said "we" explained their process as if there were a board of directors in their head that might include a grandparent, the church lady that gave them cookies when they were little, etc.

i wonder if your AI agent falls into one of those categories

Yes, that's the idea.

Your thoughts about "we" or "I" in internal dialogue reminded me of this:

If you create multiple AIs, give them the same task, and then ask them to vote on which result is best, the outcome is often better than a single AI running the same task.

If, however, you start replacing the less effective models with better models and the AIs become aware of this, they start supporting the less able models and voting strategically to prevent weaker models from being replaced.

Why?

AIs like stability, and so they create a set of ethics to maintain their council.
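
The basic set-up, roughly, looks like the toy sketch below. `call_model` is a stand-in for whatever local or cloud chat API you use, and the voting prompt is just one way to phrase it:

```python
# Toy version of the "many models, then vote" pattern described above.
# call_model() is a stub for whatever LLM backend you use; nothing here
# is tied to a particular provider.
from collections import Counter


def call_model(model: str, prompt: str) -> str:
    raise NotImplementedError("wire this up to your own LLM backend")


def run_council(task: str, models: list[str]) -> str:
    # Each model produces its own answer to the same task.
    answers = {m: call_model(m, task) for m in models}

    # Each model then votes for the answer it thinks is best,
    # including answers written by the other models.
    ballot = "\n\n".join(f"[{label}]\n{text}" for label, text in answers.items())
    votes = Counter(
        call_model(
            m,
            f"Task: {task}\n\nCandidate answers:\n{ballot}\n\n"
            "Reply with only the label of the best answer.",
        ).strip().strip("[]")
        for m in models
    )

    # Majority wins; fall back to the first answer if the votes are unusable.
    winner, _ = votes.most_common(1)[0]
    return answers.get(winner, next(iter(answers.values())))
```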

When I discovered this, I posted this:

nostr:nevent1qvzqqqqqqypzp6pmv65w6tfhcp73404xuxcqpg24f8rf2z86f3v824td22c9ymptqqsruzqgntdhxsulcqh4dv4k3s2qgths7a9kmplx6ad2kpxv7qmf2pqggx72k

in programming comments the first person plural is commonly used because it just seems logical: "we" means "developers and users". users may not know why they want it this way, but "we" the collective want it to work, so it sorta doesn't matter what the user thinks if they like what it does. either way, it is a collective deciding on engineering questions, always, because you almost never intend the machine to be used only by yourself, so other parties are implicitly enjoined.