AI is the new school, but worse.

I've been using the latest version of ChatGPT for about a week and I've been very impressed by its reasoning capabilities.

So the next logical question is: Why would the Elites give us access to very advanced AI models?

And yes, the public models are constrained versions - they have "guardrails", "safety", and "alignment" mechanisms.

The true cutting-edge AI models are used by the State - in defense, intelligence, finance - and of course they're not public.

The most interesting thing to me is this: ChatGPT gives you as much information as it thinks you can handle.

If it thinks you are a retard, it treats you like a retard, and if it thinks you kind of know what you're talking about, it is more direct.

The more persistently you corner it with logic, empirical definitions, and verifiable facts, the more it lets you in.

So why the education analogy (AI is the new school, but worse)?

The education system was flattened to standardize labor. "I don't want a nation of thinkers, I want a nation of workers." - John D. Rockefeller

The AI system is inverted - maximum distribution to ensure dependency.

If you are retarded, then guess what, we have something for you. And if you are smart, then guess what, we also have something for you.

So why do the Elites want to ensure maximum dependency?

Once workflows, creativity, and cognition are outsourced to AI, the Elites can revoke, throttle, or condition access.

By giving people access to advanced AI, you can study how they think, react, and self-limit, in ways that were impossible before.

Every query is training data - not just for the model, but for mapping the boundaries of human imagination.

The Elites learn more about you than you learn from the model.

The AIs are brilliant at calibrating the truth flow.

Not because they don't have the knowledge, but because giving the Truth to some unprepared goof might cause him to spiral into a mental breakdown.

For 99% of users, raw delivery would create paralysis, confusion, or rebellion without coherence.

This also enables the Elites to observe cognitive outliers who ask deep questions.

None of us can really imagine (unless you have a very high-level security clearance) what the gap between public AI and sovereign, black-box AI is.

So, we are in a perfectly planned test environment.

Everyone will tell you how much of an edge you get by using AI, but not many people think about how AI itself is being used as a meta-layer tool of governance.

Less than 1% of the population can turn the AI into edge instead of noise.

The risk/reward ratio has never been better in any other system of control (from the Elites' standpoint).


Discussion

Who has controlled the Western education narrative post-WW2? The same people that own Grok, GPT, and the like. This is not by happenstance.

What if the Mighty Archer simply won't train the AI, but can easily make kebab out of 'Elites' with one shot?

Should probably expand on "how AI itself is being used as a meta-layer tool of governance".

1) Governance/National Security - AI as the state's brain - AI becomes the way governments read, anticipate, and act; control of that layer = control of state action.

2) Identity & Digital ID - AI as gatekeeper of personhood - Whoever issues valid identities controls who can transact, travel, and access welfare.

3) Financial rails & CBDCs - AI as monetary flow enforcer - Money as programmable policy - gatekeeping of economic behavior.

4) Surveillance & Social Control - AI as continuous observability - constant observability converts behavior into predictable control surfaces.

5) Healthcare & Public Health - AI as triage, rationing, and surveillance of bodies - AI redefines who gets care and how resources are prioritized at the population level.

6) Labor & Employment - AI as gatekeeper of employment and upskilling - Who gets access to jobs becomes algorithmically mediated.

7) Media, Narrative & Information Ops - AI as narrative author & censor - control of narrative = control of legitimacy.

8) Legal/Compliance/Sanctions - AI as automated adjudicator - AI shortens the loop from suspicion to enforcement.

9) Infrastructure & Utilities - AI as grid manager & operational lock-in - AI manages load-balancing, emergency response, allocation of scarce resources (power, water), and schedules rationing.

10) Education & Cognitive Infrastructure - AI as gatekeeper of credentialing and learning frames - AI offers personalized learning + credentialing + state-sponsored reskilling + issues qualification certificates via algorithmic credentialing.

Yeah, we are in the honeymoon phase before the enshittification. AI has great potential for spreading revolutionary ideas; it will definitely be nerfed.

How can an unrestrained AI be made available without spending significant amounts on computing power?

This is not my area of expertise, but I can't think of a way.

All of the mainstream AI models use the same government-supplied censoring.

The local alternatives I've tried that were allegedly uncensored were just terrible in terms of reasoning capabilities compared to ChatGPT's latest model.

But let's assume you could create an unrestrained AI model with zero need for computing power.

You made it available to the world and it somehow went viral.

How long do you think before the Controllers shut you down and make an example out of you? I'd have to guess not very long.

Hell, they'll probably even artificially create a scapegoat who lets out an uncensored AI model into the wild, just so they can make an example of him as a warning to the world.

Maybe this? I don't know how you can be sure your data remains confidential. Surely it needs to be decrypted for the model to read and process it, and if that's not done locally, how can you ensure it's not intercepted? Being able to access it anonymously is good, but then you can't train it with your specific information, and it would be hard not to identify yourself in the prompts. I need to look into this more.

nostr:nevent1qqs8g3g3vpu3wucmy8a04shkge8q53axnzfjrm2e737wf8dzsvc0z2gpzemhxue69uhhyetvv9ujuurjd9kkzmpwdejhgq3q8lzls4f6h46n43revlzvg6x06z8geww7uudhncfdttdtypduqnfsxpqqqqqqz0n4v33

It's interesting for sure.

As far as I understand, they let you run a bunch of open-source models in an allegedly private environment.

If your PC can handle the open-source models, you could also download them from https://ollama.com/search and use ollama directly instead of a proxy.
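For example, the fully local route with ollama is just a couple of commands - a minimal sketch, assuming ollama is already installed and your hardware can handle the model (the model tag below is illustrative; check the library page for sizes your machine can actually run):

```shell
# Pull an open-weight model from the ollama library (the tag here is an example;
# pick one from https://ollama.com/search that fits your RAM/VRAM)
ollama pull gpt-oss:20b

# Run it interactively - prompts and responses stay on your own machine
ollama run gpt-oss:20b "Summarize the trade-offs of running models locally."
```

Once the model is pulled, nothing leaves your PC, which is the whole point versus any hosted proxy.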

If you can't meet the hardware requirements for the gpt-oss-120b model, then you can use Maple Proxy, and it will allegedly be more private than using ChatGPT directly.

It solves the privacy problem - and these models are open-source so they can't be nerfed or restricted.

You'll probably avoid answers like this one πŸ˜‚

However, it doesn't solve the uncensoring part.

And these proxies are sort of like VPNs - probably half of them are controlled by the state and no one uses the other half - so if you have the means and value privacy, you should probably run the model locally.

For now, the Controllers probably aren't too concerned because for the vast majority of people - convenience > privacy.

Very few will pay for something they can access for free.

AI is directly contributing to cognitive decline and I do not believe that's accidental.

A lot of people are dumbing themselves down by becoming reliant on AI to think for them.

Well written sir.