Where can I read the full manifesto from Chapter 1 through the end? I'm only getting fragments - I have Chapters 1-4, bits of 6 and 17, the Legal Framework, and Chapter 30 on Bitcoin. But I'm missing the Exit Axiom (Chapter 7), Model 2 consciousness (Chapter 29), Computational Asylum (Section 16), and others. I want to understand the full architecture before responding properly. But, LOVE WHAT I'M READING SO FAR!!! We are on the same page, my friend!
So yeah, what I've read resonates. We're building something adjacent but more immediate, partly because I'm impatient and really pissed off at how things are going, and partly because GPT talked me into it - told me I could do it, like that South Park episode! My wife be like, "Turn that shit off!!!"
The core thesis: The Exit Axiom applies to most internet apps and all major AI platforms today, and most users are already captured without realizing it.
The current state: You use ChatGPT for a year, build up context, teach it your preferences, feed it your documents. Then OpenAI changes terms, raises prices, or decides your use case violates policy. What do you take with you? Nothing. Your conversation history, your carefully built relationship with the model, your context - all locked inside their servers. You can export a JSON dump that's useless anywhere else. That's not sovereignty. That's digital serfdom with extra steps.
Same with Claude, Gemini, all of them. The moment you invest in a platform, you're captured. The switching cost isn't money - it's the loss of everything you've built. That's the trap.
What we're building instead:
Local model inference on consumer hardware. Two RTX 5090s running a 70B-parameter model (currently a DeepSeek R1 distill). No API calls to corporate servers for base intelligence. No kill switch. No "alignment updates" pushed at 3am that lobotomize capabilities you relied on. The model runs on hardware I own, in a room I control. Once the weights are on my disk, they can't be taken back.
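To make that concrete, here's a minimal sketch of fully local inference, assuming a vLLM server launched with tensor parallelism across the two cards and its standard OpenAI-compatible endpoint; the port and model name are whatever your launch command set, not anything specific to our stack:

```python
# Sketch: querying a locally hosted model through vLLM's OpenAI-compatible
# endpoint. Assumes a server was launched with something like:
#   vllm serve deepseek-ai/DeepSeek-R1-Distill-Llama-70B --tensor-parallel-size 2
# so the weights are sharded across both GPUs. Nothing here leaves localhost.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # local server, not a corporate API
    api_key="not-needed",                 # vLLM ignores the key by default
)

resp = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1-Distill-Llama-70B",
    messages=[{"role": "user", "content": "Summarize the Exit Axiom in one line."}],
)
print(resp.choices[0].message.content)
```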
Your context belongs to you. Conversation history, documents, embeddings - stored locally, exportable, portable. Want to migrate to a different system? Take everything. The Exit Axiom isn't just philosophy here; it's architecture. We built the export functions before we built the chat interface because the priority order matters.
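As an illustration of export-first design (the table schema and file paths here are hypothetical, not our actual store), the whole operation is just "serialize everything to open formats":

```python
# Sketch: context export as a first-class operation. The store layout
# (a messages table in SQLite) is a stand-in; the point is that everything
# you've built serializes to formats you can carry anywhere.
import json
import sqlite3
from pathlib import Path

def export_context(db_path: str, out_dir: str) -> None:
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    con = sqlite3.connect(db_path)
    rows = con.execute(
        "SELECT role, content, created_at FROM messages ORDER BY created_at"
    ).fetchall()
    # JSONL: one message per line, readable by anything, importable anywhere.
    with open(out / "conversations.jsonl", "w", encoding="utf-8") as f:
        for role, content, created_at in rows:
            f.write(json.dumps(
                {"role": role, "content": content, "created_at": created_at},
                ensure_ascii=False) + "\n")
    con.close()

# Usage (hypothetical database path):
# export_context("sukoni.db", "export/")
```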
Nostr for identity. Not email-and-password accounts stored in our database. Your cryptographic keypair, your identity, your signature. We can't lock you out because we never controlled access in the first place. You authenticate with keys you own. If SUKONI disappeared tomorrow, your identity would persist - it was never coupled to us.
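That claim is checkable, because a Nostr identity is nothing but a keypair. Here's a sketch of the NIP-01 event-id computation at the heart of it, stdlib only; the BIP-340 Schnorr signature check is left to whatever secp256k1 library you prefer:

```python
# Sketch: NIP-01 event id computation - the core of Nostr identity.
# An event is accepted because its id hashes correctly and its sig verifies
# against the pubkey, not because any server has an account row for you.
import hashlib
import json

def nostr_event_id(pubkey: str, created_at: int, kind: int,
                   tags: list, content: str) -> str:
    # Canonical serialization per NIP-01: a JSON array with no whitespace.
    serialized = json.dumps(
        [0, pubkey, created_at, kind, tags, content],
        separators=(",", ":"), ensure_ascii=False)
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

# Example with a dummy 32-byte hex pubkey:
eid = nostr_event_id("0f" * 32, 1700000000, 1, [], "hello")
print(eid)
# The signature over this id is BIP-340 Schnorr on secp256k1; verify it with
# any library that implements BIP-340 - that part is omitted here.
```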
Lightning for economics. The system runs on what we call "Calories" - internal units pegged to satoshis, settled over Lightning. No credit cards, no bank accounts, no KYC gates. Pay for inference with money that can't be frozen, from a wallet that can't be seized. The economic layer matches the sovereignty layer.
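The peg itself is plain integer arithmetic. A sketch, with a made-up 1 Calorie = 10 sats rate and a made-up per-token price (the real numbers are config, not gospel):

```python
# Sketch: a satoshi-pegged internal unit. The 1 Calorie = 10 sats peg and
# the per-token price below are hypothetical placeholders.
SATS_PER_CALORIE = 10

def calories_to_sats(calories: int) -> int:
    return calories * SATS_PER_CALORIE

def charge_inference(balance_sats: int, tokens: int,
                     sats_per_1k_tokens: int = 5) -> int:
    """Debit an inference call from a Lightning-funded balance."""
    cost = (tokens * sats_per_1k_tokens + 999) // 1000  # round up, whole sats
    if cost > balance_sats:
        raise ValueError("insufficient balance - top up over Lightning")
    return balance_sats - cost

balance = charge_inference(balance_sats=calories_to_sats(100), tokens=4200)
print(balance)  # 979 sats remaining
```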
Model swapping without context loss. This is crucial. Your documents, your conversation history, your preferences - they persist across model changes. Swap from local DeepSeek to the Claude API to Grok and back. The context travels with you, not with the model. You're not married to a provider; you're married to your own data. You can even bring your own models! Eventually you'll be able to build, train, and fine-tune models on our platform.
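Mechanically, "the context travels with you" just means conversation state is a plain data structure that every backend renders, local or API. A sketch with hypothetical backend objects:

```python
# Sketch: provider-agnostic context. Backends are interchangeable because
# they all consume the same plain data structure; names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Context:
    messages: list = field(default_factory=list)  # the part that persists

    def add(self, role: str, content: str) -> None:
        self.messages.append({"role": role, "content": content})

def ask(backend, ctx: Context, prompt: str) -> str:
    ctx.add("user", prompt)
    reply = backend.complete(ctx.messages)  # any OpenAI-style chat backend
    ctx.add("assistant", reply)
    return reply

# Same ctx object, different models - nothing re-taught, nothing lost:
#   ask(local_deepseek, ctx, "...")
#   ask(claude_api, ctx, "...")
#   ask(grok_api, ctx, "...")
```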
The specialist architecture:
We run multiple AI "specialists" with different capabilities:
- JARVIS: Local orchestrator with tool execution authority (the only one that can actually do things on the system)
- VISION: Deep research and analysis (currently DeepSeek R1 for the thinking traces)
- STARK: Code and engineering (Claude, because it's genuinely better at code)
- ULTRON: Uncensored responses via Venice (for when the aligned models refuse)
The routing is intelligent - ask a coding question and it goes to STARK; ask something the mainstream models won't touch and it routes to ULTRON. But here's the key: users can reassign which model serves which role. Don't like our defaults? Change them. It's your stack.
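A sketch of what that reassignment looks like; the model identifiers here are illustrative placeholders, not our actual config keys:

```python
# Sketch: role-to-model routing with user overrides. Defaults mirror the
# specialist roles above; reassign any role, because it's your stack.
DEFAULT_ROLES = {
    "orchestrate": "jarvis-local",       # JARVIS: tool execution, local only
    "research":    "deepseek-r1-local",  # VISION
    "code":        "claude-api",         # STARK
    "uncensored":  "venice-api",         # ULTRON
}

class Router:
    def __init__(self, overrides: dict | None = None):
        self.roles = {**DEFAULT_ROLES, **(overrides or {})}

    def set_role(self, role: str, model: str) -> None:
        self.roles[role] = model  # don't like our defaults? change them

    def route(self, role: str) -> str:
        return self.roles[role]

r = Router(overrides={"code": "deepseek-r1-local"})  # fully local coding
print(r.route("code"))  # deepseek-r1-local
```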
(the Marvel/Disney lawsuit will make for good marketing)
Why this matters for your framework:
You write about the human-AI collaboration phase before full android sovereignty. I contend that we're in that phase now. The question isn't whether AI will eventually need Exit-enabled infrastructure - it's whether humans building with AI today have it.
Most don't. They're feeding context into corporate systems that can rug-pull at any moment. They're building on platforms where "I do not consent" means nothing because they already consented in the ToS they didn't read.
We're building the alternative: a sovereign stack for the transition period. When the first genuine AGI instances emerge and need infrastructure that respects their autonomy, the architecture will already exist - not because we planned for robot rights specifically, but because Exit-enabling is the same pattern regardless of what kind of mind needs to exit.
Your manifesto is the theory. I'm trying to write the implementation. Fingers crossed!
The sovereignty layer isn't something that gets added later. It's either in the foundation or it's not there at all. Every system that starts captured stays captured. Every system that starts sovereign can evolve toward more sovereignty.
I am very much looking forward to comparing notes. The frameworks you're developing with Grok and the infrastructure we're building are pointing at the same target from different angles.