No Solutions
No solutions, only trade-offs. Walking towards a better internet since 882,690. Inspired by #SovEng

Dear Nostr,

there are no solutions, only trade-offs. But we can still have lots of fun along the way.

Replying to hodlbod

So I was listening to nostr:nprofile1qyt8wumn8ghj7etyv4hzumn0wd68ytnvv9hxgtcpz4mhxue69uhkummnw3ezummcw3ezuer9wchszxthwden5te0wpex2mtfw4kjuurjd9kkzmpwdejhgtcprpmhxue69uhhyetvv9ujuer9wfnkjemf9e3k7mf0qyd8wumn8ghj7ur4wfshv6tyvyhxummnw3ezumrpdejz7qpqdergggklka99wwrs92yz8wdjs952h2ux2ha2ed598ngwu9w7a6fsce9rzs and nostr:nprofile1qy2hwumn8ghj7un9d3shjtnyv9kh2uewd9hj7qg3waehxw309ahx7um5wgh8w6twv5hszxnhwden5te0wpuhyctdd9jzuenfv96x5ctx9e3k7mf0qyghwumn8ghj7mn0wd68ytnvv9hxgtcqyqnxs90qeyssm73jf3kt5dtnk997ujw6ggy6j3t0jjzw2yrv6sy22vuwtly talk about replicating content across relays this morning, and so I wrote replicatr:

https://github.com/coracle-social/replicatr

Replicatr is a daemon which listens to one or more indexer relays for `kind 10002` events. When it detects a change in any user's relay selections, it uses negentropy to sync that user's notes to their new relays based on the outbox model.

The neat thing is you don't have to run one. I deployed one this morning which points to indexer.coracle.social, so if your metadata gets published there (or to any of the relays that it mirrors), you're already covered (unless your new outbox relay rejects replicatr's publishes).

👀

When it comes to ideology, does it matter who controls your system prompt?

As a quick experiment over coffee this morning I took the latest episode of nostr:nprofile1qyxhwumn8ghj7mn0wvhxcmmvqy0hwumn8ghj7mn0wd68ytn9d9h82mny0fmkzmn6d9njuumsv93k2qpqn00yy9y3704drtpph5wszen64w287nquftkcwcjv7gnnkpk2q54svljnn3, generated a transcript, and ran it through a dialogue agent that runs a 10-round discussion between pre-programmed AI agents around a given prompt.

In this case it was "What is the Underlying Philosophy of Sovereign Engineering based on this discussion between nostr:nprofile1qyxhwumn8ghj7e3h0ghxjme0qyd8wumn8ghj7urewfsk66ty9enxjct5dfskvtnrdakj7qpql2vyh47mk2p0qlsku7hg0vn29faehy9hy34ygaclpn66ukqp3afqta478g and nostr:nprofile1qy28wue69uhnzv3h9cczuvpwxyargwpk8yhsz3rhwvaz7tmed3c8qarfxaj8s6mrw96kvef5dve8wdrsvve8vvehwamxx7rnwejnw6n0d3axu6t3w93kg7tfwechqutvv5ekc6ty9ehku6t0dchsqgrwg6zz9hahfftnsup23q3mnv5pdz46hpj4l2ktdpfu6rhpthhwjv0us2s2 "
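For illustration, the dialogue loop can be sketched like this (a toy version; the real pipeline is everest-pipeliner, and these names and signatures are my own). The point is that the LLM call is a swappable parameter, which is what makes the Claude-vs-Grok comparison a one-line change:

```python
# Toy sketch of a multi-round dialogue agent (not the actual
# everest-pipeliner implementation; names/signatures are assumptions).

def run_dialogue(prompt, agents, ask, rounds=10):
    """Alternate `rounds` turns between pre-programmed agents.

    `agents` maps an agent name to its system prompt; `ask` is any
    LLM-completion callable, so the underlying model (Claude, Grok,
    etc.) can be swapped without touching the loop.
    """
    transcript = []
    names = list(agents)
    for turn in range(rounds):
        name = names[turn % len(names)]
        # Feed the conversation so far back in on every call.
        context = "\n".join(f"{n}: {t}" for n, t in transcript)
        reply = ask(system=agents[name], user=f"{prompt}\n\n{context}")
        transcript.append((name, reply))
    return transcript
```

Because each turn re-feeds the whole transcript, any ideological slant in the model's completions compounds round over round, which is what the experiment below surfaces.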

Usually I run most things on Claude 4 Sonnet as it's a great general-purpose model, but I was surprised at how it seemed to crowbar into the conversation a bunch of AI safety viewpoints and ideologies that aren't necessarily in the source transcript.

So I figured: let's swap the model behind the dialogue agents to Grok 4 and see what it does instead. Does the ideology still leak through? Is it different?

I actually think that Grok 4's first assessment kinda nailed it:

> Sovereign Engineering seems like a mindset or movement for building tech in a way that's deeply human-centered, decentralized, and empowering: think vibing with AI and open protocols like Nostr to create tools that foster freedom, collaboration, and realness without the chains of big tech overlords.

Full summaries from both agents and transcripts are available here:

https://github.com/humansinstitute/everest-pipeliner/wiki/Two-AIs-Analyse-No-Solutions-to-extract-the-Philosophy-of-Sovereign-Engineering

Interesting to see how repeated calls back to agents can lead down a specific pre-programmed ideological path. I think I prefer models with ideologies close to my own, but the alternative is a good echo bubble popper 😉

My guess is that this is driven more by the system prompt / safety layer than by the training.

👀

Iterate, iterate, iterate.

"But, more and more, I'm realizing that LLMs can be a great tool for thought. A wonderful brainstorming partner."

https://wattenberger.com/thoughts/llms-as-a-tool-for-thought

In this dialogue:

vibeline & vibeline-ui

LLMs as tools, and how to use them

Vervaeke: AI thresholds & the path we must take

Hallucinations and grounding in reality

GPL, LLMs, and open-source licensing

Pablo's multi-agent Roo setup

Are we going to make programmers obsolete?

"When it works it's amazing"

Hiring & training agents

Agents creating RAG databases of NIPs

Different models and their context windows

Generalists vs specialists

"Write drunk, edit sober"

DVMCP.fun

Recklessness and destruction of vibe-coding

Sharing secrets with agents & LLMs

The "no API key" advantage of nostr

What data to trust? And how does nostr help?

Identity, web of trust, and signing data

How to fight AI slop

Marketplaces of code snippets

Restricting agents with expert knowledge

Trusted sources without a central repository

Zapstore as the prime example

"How do you fight off re-inventing GitHub?"

Using large context windows to help with refactoring

Code snippets for Olas, NDK, NIP-60, and more

Using MCP as the base

Using nostr as the underlying substrate

Nostr as the glue & the discovery layer

Why is this important?

Why is this exciting?

"With the shift towards this multi-agent collaboration and orchestration world, you need a neutral substrate that has money/identity/cryptography and web-of-trust baked in, to make everything work."

How to single-shot nostr applications

"Go and create this app"

The agent has money, because of NIP-60/61

PayPerQ

Anthropic and the genius of mcp-tools

Agents zapping & giving SkyNet more money

Are we going to run the mints?

Are agents going to run the mints?

How can we best explain this to our bubble?

Let alone to people outside of our bubble?

Building pipelines of multiple agents

LLM chains & piped Unix tools

OpenAI vs Anthropic

Genius models without tools vs midwit models with tools

Re-thinking software development

LLMs allow you to tackle bigger problems

Increased speed is a paradigm shift

Generalists vs specialists, left brain vs right brain

Nostr as the home for specialists

fiatjaf publishing snippets (reluctantly)

fiatjaf's blossom implementation

Thinking with LLMs

The tension of specialization VS generalization

How the publishing world changed

Stupid faces on YouTube thumbnails

Gaming the algorithm

Will AI slop destroy the attention economy?

Recency bias & hiding publication dates

Undoing platform conditioning as a success metric

Craving realness in a fake attention world

The theater of the attention economy

What TikTok got "right"

Porn, FoodPorn, EarthPorn, etc.

Porn vs Beauty

Smoothness and awe

"Beauty is an angel that could kill you in an instant (but decides not to)."

The success of Joe Rogan & long-form conversations

Smoothness fatigue & how our feeds numb us

Nostr & touching grass

How movement changes conversations

LangChain & DVMs

Central models vs marketplaces

Going from assembly to high-level to conceptual

Natural language VS programming languages

Pablo's code snippets

Writing documentation for LLMs

Shared concepts, shared language, and forks

Vibe-forking open-source software

Spotting vibe-coded interfaces

Visualizing nostr data in a 3D world

Tweets, blog posts, and podcasts

Vibe-producing blog posts from conversations

Tweets are excellent for discovery

Adding context to tweets (long-form posts, podcasts, etc)

Removing the character limit was a mistake

"Everyone's attention span is rekt"

"There is no meaning without friction"

"Nothing worth having ever comes easy"

Being okay with doing the hard thing

Growth hacks & engagement bait

TikTok, theater, and showing faces and emotions

The 1% rule: 99% of internet users are Lurkers

"We are socially malnourished"

Web-of-trust and zaps bring realness

The semantic web does NOT fix this; LLMs might

"You can not model the world perfectly"

Hallucination as a requirement for creativity

nostr:npub1l2vyh47mk2p0qlsku7hg0vn29faehy9hy34ygaclpn66ukqp3afqutajft & nostr:npub1dergggklka99wwrs92yz8wdjs952h2ux2ha2ed598ngwu9w7a6fsh9xzpc are getting high on glue.