šŸ’Æ Nostr as the substrate for the human + autonomous machine hivemind is going to be wild!

nostr:note1h3uhz967w9ja0k3um8mqfd02h435dukwg7lsjqy9xmlynvacq63sdsze9u


Discussion

No one has built it yet, but any human on Nostr could easily become a DVM for a period of time, earning sats for jobs, all in an AI ecosystem. My bet is decentralized AI on Nostr will eat all other approaches.
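Concretely, per NIP-90 this is just two event kinds: a job request (kind 5000-5999) and a result (request kind + 1000). Nothing in the protocol requires the worker answering to be software. A sketch in TypeScript, with kind 5050 (commonly used for text generation) as an assumed example:

```typescript
// Sketch (not from any existing implementation) of the two NIP-90 events
// involved. Kind 5050 is an assumption; any job kind in the 5000-5999
// range works the same way, with the result kind = request kind + 1000.

type NostrEvent = {
  kind: number
  created_at: number
  tags: string[][]
  content: string
}

// A customer publishes a job request with an input, desired output, and bid:
const jobRequest: NostrEvent = {
  kind: 5050,
  created_at: Math.floor(Date.now() / 1000),
  tags: [
    ['i', 'Summarize this thread', 'text'], // input payload and its type
    ['output', 'text/plain'],               // requested output format
    ['bid', '10000'],                       // maximum millisats offered
  ],
  content: '',
}

// A human acting as the DVM answers with a result event, tagging the
// request and naming a price; payment arrives as a zap or invoice.
const jobResult = (requestId: string, customerPubkey: string): NostrEvent => ({
  kind: 6050,
  created_at: Math.floor(Date.now() / 1000),
  tags: [
    ['e', requestId],
    ['p', customerPubkey],
    ['amount', '10000'], // millisats requested for the work
  ],
  content: 'Here is your summary...',
})
```

Signing and publishing work the same as for any other note, so "becoming a DVM for a period of time" is just subscribing to these kinds and answering them.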

I actually do this all the time, but not as such with a DVM. I break things into tasks. Then each task becomes a smart contract. I perform the task, either alone or with the help of agents. The system times the task and awards sats, which are streamed over Nostr and gamestr. A permanent auditable record of the task then lives forever in the smart contract. I've not figured out how to wrap it in a DVM yet, or whether that makes a lot of sense for my use case.

Pretty much how I do everything now. It's a lot of fun!

Interesting, I’d be curious about which tools / ecosystem this is happening in if you can share

Smart Contracts - Peter Todd's Single Use Seals (conceptual sketch below)

Identity - bitcoin taproot / nostr

Source Control - git

Realtime Updates - nostr

Presentation Layer - markdown

Gamification Layer - gamestr + NIP (WIP)

Time Chain - bitcoin + taproot based / bitcoin testnet 3/4

Front End - pure javascript / html

Data Layer - RDF / Linked JSON (not used much)

Storage - Nosdav (not used much)

Roughly speaking, that's it, plus some shell scripts. But I try to improve it all each day.
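For the smart-contract layer above, the single-use-seal idea reduces to a small interface. This is a conceptual sketch, not a real library: the names are illustrative, and the Bitcoin details (seal = a specific UTXO, closing = spending it in a transaction that commits to the message) are just one instantiation.

```typescript
// Conceptual shape of Peter Todd's single-use seals. The guarantee: a seal
// can be closed over a message exactly once. On Bitcoin, closing means
// spending the seal's UTXO in a tx committing to the message, and the
// double-spend rules enforce single use.

interface SingleUseSeal<Message, Witness> {
  // Close the seal over a message, producing a witness (e.g. the spending
  // transaction). Fails if the seal was already closed.
  close(message: Message): Promise<Witness>

  // Anyone can check that this witness closes this seal over this message,
  // e.g. by verifying the tx spends the seal's UTXO and commits to the
  // message (say, in a taproot tweak).
  verify(message: Message, witness: Witness): Promise<boolean>
}

// For the task workflow described earlier, the message might commit to the
// task's definition, timing, and outcome (field names hypothetical):
type TaskCommitment = {
  taskId: string
  startedAt: number
  finishedAt: number
  resultHash: string // e.g. sha256 of the report / deliverable
}
```

The useful property for task records: once closed over one message, a seal can never be closed over another, so the audit trail cannot be silently rewritten.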

Super cool! When you say you do this all the time, do you use this as a personal task manager system (like getting things done style work planning and execution) or are you actually doing jobs posted by other people and getting paid for them?

It's starting as a personal task manager / GTD. But it's capable of full client contracting work, with reports, slide decks etc. I probably use it dozens of times a day, and it drills down into yet smaller tasks which I use with a subkey and Nostr relay 1000+ times a day, sometimes several thousand on a productive day. I've also just started getting agents involved using APIs, so they could probably take things up a couple of orders of magnitude in single-player mode, but it's all globally orchestrated through bitcoin taproot, so it could scale to 1000s of users at layer 1, millions at layer 2, and billions or trillions at layer 3+.
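For a feel of what one of those high-frequency micro-task events could look like on a relay, here is a hypothetical sketch assuming nostr-tools' v2 API, with a fresh key standing in for the subkey. The kind number and tag names are invented for illustration; the actual gamestr format is a WIP per the list above.

```typescript
import { finalizeEvent, generateSecretKey } from 'nostr-tools/pure'

// A throwaway key standing in for the task-traffic "subkey".
const subkey = generateSecretKey()

// One micro-task completion event. Kind 31234 (in the addressable
// 30000-39999 range) and all tag names here are hypothetical.
const microTask = finalizeEvent(
  {
    kind: 31234,
    created_at: Math.floor(Date.now() / 1000),
    tags: [
      ['d', 'task:inbox-zero'], // hypothetical stable task identifier
      ['status', 'done'],       // hypothetical lifecycle marker
      ['duration', '90'],       // hypothetical: seconds the task took
    ],
    content: 'clear inbox',
  },
  subkey,
)
```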

And let the AIs consume focused knowledge on Nostr from fragmented notes šŸ˜‰.

Process of knowledge creation, with analogies to biology and autopoiesis

1) what is it made of?

biology: ingests matter.

knowledge: new knowledge is constructed from previous knowledge; it is brought forth by what came before it.

2) what is the structure?

biology: cell wall, DNA as a generator function, epigenome, interactome, etc. These provide bounds and limitations on the structure.

knowledge: the structure and connections on Nostr, plus hypergraph connections between knowledge fragments. We can get a mycelium level of connectivity and entanglement with this architecture.

3) what causes change?

biology: the external environment and internal processes

knowledge: the continually updating knowledge environment. Knowledge becomes invalidated over time and must be maintained.

4) for what purpose?

biology: sustain life

knowledge base: provide meaningful value for its users.

--------------

Now that we've mapped out how knowledge works and is generated, what happens with AI, an approximate-knowledge generator?

You give it knowledge fragments and ask it to produce something new given the context, then SAVE it on Nostr for other people and AIs to use.

Talk about synergy.
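A minimal sketch of that loop, assuming nostr-tools' v2 API for the Nostr side and a generic askModel() stand-in for whatever model API is used; the relay URL and the "t" tag convention are assumptions, not a standard:

```typescript
import { finalizeEvent, generateSecretKey } from 'nostr-tools/pure'
import { Relay } from 'nostr-tools/relay'

async function publishFragment(
  fragments: string[],
  askModel: (prompt: string) => Promise<string>,
) {
  // Ask the model to synthesize something new from the fragments.
  const synthesis = await askModel(
    'Given these knowledge fragments, produce something new:\n' +
      fragments.join('\n---\n'),
  )

  // Sign it as a plain kind-1 note so both humans and AIs can consume it.
  const sk = generateSecretKey()
  const event = finalizeEvent(
    {
      kind: 1,
      created_at: Math.floor(Date.now() / 1000),
      tags: [['t', 'knowledge-fragment']], // assumed tagging convention
      content: synthesis,
    },
    sk,
  )

  // Save it on a relay for other people and AIs to build on.
  const relay = await Relay.connect('wss://relay.example.com')
  await relay.publish(event)
  relay.close()
}
```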

You're missing an important thing: energy. Everything needs energy to survive, and the best digital analogy for that is satoshis. Each agent survives based on the satoshis it can forage or earn.

Yes, energy is implicitly there. Knowledge is living because it is the product of a human (and therefore complex) process. Knowledge itself doesn't need to have an energy constraint, because agents will always have one. The point of knowledge fragments is to speed up the navigation process, helping agents identify the coordinates of the most meaningful "raw materials" of knowledge and consume them.

nostr:nevent1qqs08pt866tethh0a0srzp9kt64kdplgl9yn38caucqhhf5q5mmn5zcpr4mhxue69uhkummnw3ezucnfw33k76twv4ezuum0vd5kzmp0qgsdcnxssmxheed3sv4d7n7azggj3xyq6tr799dukrngfsq6emnhcpsrqsqqqqqpknxufg

nostr:nevent1qqszz2dc7jzvdsyqjalexzvgfl84hrwl9amprpzmfn6e4hjkvnwqq7qpzamhxue69uhhyetvv9ujumn0wd68ytnzv9hxgtczyp3cn0nyj8nmdylf7d5we6y0e5297p7qdrfvrwawgfrmnd00gwwnyqcyqqqqqqgsfqyhv

Knowledge isn't uniform, nor is it a pure form of energy. It has a relatively low storage cost, but it can be used to gain the energy needed to store it. So knowledge is valuable, but it is better when converted to energy (e.g. with a zap or a nut); otherwise you will have an ever-growing set of agents with few finite limits.

I actually don't mind that. In a desert where sand is everywhere, sand means nothing to you. What is most meaningful is the oasis, so you will engage in a sort of taxis towards the most meaningful resources. Spam on a knowledge base can be mitigated by various levels of openness on relays and communities.

But even without that, all notes can have embedding vectors attached. Ultimately, the visibility of a note in the system will grow if it is meaningful to the user. So if notes mean nothing to you or are considered spam, that's fine: you stay outside of that area in semantic space. They may also not be considered spam by other users.

Visibility of a note will grow with its level of meaningfulness, proportional to the userbase*
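As far as I know there is no standard NIP for embeddings yet, so treat the tag below as a hypothetical convention (model name plus a JSON-encoded vector); cosine similarity then gives the distance in "semantic space" referred to above:

```typescript
// Hypothetical tag convention for attaching an embedding to a note.
const embeddingTag = (model: string, vector: number[]): string[] =>
  ['embedding', model, JSON.stringify(vector)]

// Cosine similarity: 1 = same direction, 0 = unrelated.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    normA += a[i] * a[i]
    normB += b[i] * b[i]
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB))
}

// A client keeps notes whose similarity to its user's interest vectors
// clears a threshold; everything else never surfaces, which is the spam
// mitigation described above.
```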

Even with that, we have so far observed a king-maker effect. I think we need something like self-sufficient events that pay for their own storage and can last a long time; they can get zapped more in the future and live longer. Everlasting notes.
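To make the idea concrete, a purely hypothetical shape for such a self-funding note; none of these fields or numbers come from any NIP:

```typescript
// Hypothetical: the event carries a storage balance, topped up by zaps,
// and a relay keeps it for as long as the balance covers its "rent".
type EverlastingNote = {
  eventId: string
  balanceMsat: number     // initial funding plus later zaps
  rentMsatPerDay: number  // what the relay charges to keep storing it
}

// More zaps extend a note's life; unfunded notes eventually expire.
function daysRemaining(note: EverlastingNote): number {
  return Math.floor(note.balanceMsat / note.rentMsatPerDay)
}
```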

I think I understand, and that sounds reasonable, but it's out of scope for what I'm conceptualizing. I'm relying on interoperability for others to work on that side šŸ˜€