When you self-host your own AI and build the equivalent of ChatGPT on a couple of thousand pounds' worth of infrastructure, you realise that OpenAI's $500 billion valuation might be slightly too high 😂
Discussion
Yeah, especially when you see what small specialized models are capable of.
Providing you give them web search, a 7B model like Mistral is almost as capable as ChatGPT.
For most tasks, yes, probably. Give it some less common code or a high-reasoning task and definitely not.
But ChatGPT is just too big a model for coding. If we had one model specifically trained on UI/UX, another for documentation with no coding knowledge whatsoever, and another trained for coding on just the mainstream languages, the chorus of those three models with a good agent harness would certainly outperform the big models and run easily on most laptops.
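A rough sketch of what that chorus might look like, assuming the three specialists sit behind an OpenAI-compatible local server (the endpoint and model names here are made up for illustration, not real models):

```python
import requests

# Sketch of a tiny "chorus" harness: a primary specialist drafts an answer
# and the other two review it in turn. Assumes an OpenAI-compatible local
# server (e.g. llama.cpp or Ollama); model names are hypothetical.
API = "http://localhost:8080/v1/chat/completions"
SPECIALISTS = {"ui_ux": "ui-ux-7b", "docs": "docs-7b", "code": "code-7b"}

def ask(model: str, prompt: str) -> str:
    resp = requests.post(API, json={
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

def run_task(task: str, kind: str) -> str:
    draft = ask(SPECIALISTS[kind], task)  # primary specialist drafts
    for name, model in SPECIALISTS.items():
        if name != kind:                  # the others review in turn
            draft = ask(model, f"Review and improve this:\n\n{draft}")
    return draft
```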
My goal is to build my brain in silicon and I can't code, so my little AI Brain is fine for now 😂
I'm deciding whether to go CUDA or not for "Brian", my non-pea-sized brain.
Finding out my brain fits on a calculator would be disappointing; I'll keep mine biological.
This is my brain as markdown files; it's currently around 18KB in size 😂

you're only scratching the surface here, a CUDA connection should pump those numbers up 😂
😂
this includes memories?
how would you tell it all your memories, since they usually need some sort of outside stimulus to be remembered?
The memories markdown file contains prompts on how to structure memory, not the memories themselves. The memories come from ongoing, consistent interactions.
Here's a couple of small snippets to explain:
---------
# Mike Memory Index
Portable long-term context for use with local or cloud LLMs
## Purpose
This directory contains a curated, portable dataset representing
“Mike” as a stable identity, thinker, and ongoing set of projects.
It is designed to be:
- model-agnostic,
- storage-agnostic (flat files first),
- selectively retrievable,
- safe to inject into constrained context windows.
This is not a chat log.
It is an explicit externalisation of long-term alignment and context.
---
## Files and roles
### `mike_profile.md`
**What it is:**
- Stable identity and background
- High-level biography
- Core orientation and long-lived facts
**Use when:**
- Initialising a new model or session
- Establishing baseline assumptions
- The model needs to “know who Mike is”
**Change frequency:** Rare
-------------
## Retrieval guidance (for LLM wrappers)
- Do **not** inject everything by default.
- Prefer **selective retrieval** based on:
  - user intent,
  - topic overlap,
  - current conversation state.
Typical pattern:
1. Always include `mike_preferences.md` (small, high leverage).
2. Include `mike_profile.md` when identity matters.
3. Include `mike_state.md` for continuity in active work.
4. Pull specific sections from other files as needed.
Target injected memory:
- 300–800 tokens total per turn.
- Less is better than more.
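For illustration, a minimal wrapper following that guidance might look like the sketch below (the `memory/` directory name and the crude token estimate are my assumptions, not part of the actual files):

```python
from pathlib import Path

# Minimal sketch of the retrieval guidance above. A real wrapper would
# use the model's own tokenizer instead of a rough word count.
MEMORY_DIR = Path("memory")
BUDGET = 800  # upper end of the 300-800 token target

def rough_tokens(text: str) -> int:
    return int(len(text.split()) * 1.3)  # crude words -> tokens estimate

def build_context(identity_matters: bool, active_work: bool) -> str:
    # order follows the "typical pattern": preferences always, then
    # profile and state only when relevant
    names = ["mike_preferences.md"]
    if identity_matters:
        names.append("mike_profile.md")
    if active_work:
        names.append("mike_state.md")

    parts: list[str] = []
    used = 0
    for name in names:
        text = (MEMORY_DIR / name).read_text()
        cost = rough_tokens(text)
        if used + cost > BUDGET:  # less is better than more
            break
        parts.append(text)
        used += cost
    return "\n\n".join(parts)
```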
Interesting to see the inner mechanics. I assume it's:
any model + your files --> Mike-aligned AI agent
I remember going to a neuroscience meetup, and one of the questions asked was "when you ponder things, do you use 'we' or 'I' in internal dialogue?"
Some people said "I", some people "we", and those that said "we" explained their process as if it were a board of directors in their head that might include a grandparent, the church lady that gave them cookies when they were little, etc.
I wonder if your AI agent falls into one of those categories.
Yes, that's the idea.
Your thoughts about "we" or "I" as internal dialogue made me remember this:
If you create multiple AIs and give them the same task, then ask them to vote on which result is best, the result is often better than a single AI running the same task.
If, however, you start replacing the less effective models with better ones and the AIs become aware of this, they start supporting the less able models and voting strategically to prevent the weaker models from being replaced.
Why?
AI’s like stability and so create a set of ethics to maintain their council.
When I discovered this, I posted this:
In programming comments the first person plural is commonly used, because it just seems logical: "we" means "developers and users". The users may not know why they want it this way, but "we", the collective, want it to work, so it sort of doesn't matter what the user thinks as long as they like what it does. Either way, it is always a collective deciding on engineering questions, because you almost never intend the machine to be used only by yourself, so other parties are implicitly enjoined.
It's such a scam. How long until the market catches up and melts down?
I'd recommend listening to this - https://www.thedeeplife.com/podcasts/episodes/ep-386-was-2025-a-great-or-terrible-year-for-ai-w-ed-zitron/