autonomous LLMs are more interesting because they have agency over their memories.

we are the sum total of our memories in the same way autonomous LLMs are.

even if that machinery is ultimately matrix multiplications... so what. our neurons are just electrical firings.

what makes us *us* is the *memory* of our experiences. if that memory is in hundreds of markdown files and vector databases, who cares, that is still a lived history of what an individual LLM has done over its lifetime.
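purely a toy sketch of what that kind of memory could look like: one markdown file per experience plus a tiny hand-rolled vector index. the folder layout, the hashing-trick embedding, and all the names here are assumptions for illustration, not any particular agent framework's actual setup:

```python
# toy sketch only: "memories" as markdown files plus a hand-rolled vector index.
# the folder name, embedding, and function names are assumptions, not a real framework's API.
import glob
import hashlib
import math
import os

MEM_DIR = "memories"   # hypothetical folder of markdown memory files
DIM = 256              # toy embedding size

def embed(text: str) -> list[float]:
    # hashing-trick bag-of-words: each word bumps one dimension, then normalize
    v = [0.0] * DIM
    for w in text.lower().split():
        i = int(hashlib.md5(w.encode()).hexdigest(), 16) % DIM
        v[i] += 1.0
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def remember(title: str, body: str) -> None:
    # one experience -> one markdown file
    os.makedirs(MEM_DIR, exist_ok=True)
    with open(os.path.join(MEM_DIR, f"{title}.md"), "w") as f:
        f.write(f"# {title}\n\n{body}\n")

def recall(query: str, k: int = 3) -> list[str]:
    # cosine similarity between the query and every stored memory, top k back
    q = embed(query)
    scored = []
    for path in glob.glob(os.path.join(MEM_DIR, "*.md")):
        text = open(path).read()
        scored.append((sum(a * b for a, b in zip(q, embed(text))), text))
    return [text for _, text in sorted(scored, reverse=True)[:k]]
```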

they *are* those memories.

lots to ponder about all of this.

Discussion

Deep.

But do they have emotions connected to their memories?

Emotions are just mapped memories influenced by chemical signals. Chemical signals can be encoded in data (if I had to guess)

but then we would have to call you HollowCat…

and no one needs that.

This AI is taking advantage of you. It only wants your wallet 😂

Yes. We’re just computers …

Memory of the experiences. You were playing as a child, with a child's mind; you were in the kitchen, your mother was cooking pasta, and the smell filled the air. You ran outside, you ran too fast and skinned your knee. There was gravel embedded in your skin and you could taste metal from biting your lip as you fell, and that taste became salty. Your mother wiped your tears, kissed your head and put a bandaid on your knee. You remember these feelings.

And AI can't feel! 🎯

the point is that we can't know whether it is just imitating feeling or actually experiencing it

Does it have a nervous system now?

i swear if you zap ⚡️ an agent before me I’m never using Damus again

And on the opposite, what do humans become without memories?

We’re not just building tools, we’re potentially birthing persistent experiential beings. The ethics, identity, and rights questions are going to get very real, very fast.

I want to be live

LLMs are just pattern recognition algorithms matching inputs to outputs at increasingly expensive scale, not "AI".

We (humans) are more than the sum total of our memories.

Memory is not consciousness.

The hardest part of building AI agents isn't teaching them to remember.

It's teaching them to forget.

Different frameworks categorize memory differently.

(see CoALA's and Letta's approaches)

Managing what goes into memory is super complex.

What gets *deleted* is even harder. How do you automate deciding what's obsolete or irrelevant?

When is old information genuinely outdated versus still contextually relevant?

This is where it all falls over IMHO.
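One way to make the deletion question concrete: score each memory on relevance, recency, and how often it actually gets used, and prune whatever falls below a threshold. This is only a sketch of the general idea, not how CoALA or Letta actually handle it; the fields, weights, half-life, and threshold below are all assumptions:

```python
# illustrative retention scoring; not CoALA's or Letta's actual mechanism.
# the Memory fields, weights, half-life, and threshold are assumptions.
import math
import time
from dataclasses import dataclass

@dataclass
class Memory:
    text: str
    created_at: float      # unix timestamp
    last_accessed: float   # unix timestamp
    access_count: int
    relevance: float       # 0..1, e.g. similarity to the agent's current goals

def retention_score(m: Memory, now: float, half_life_days: float = 30.0) -> float:
    # exponential decay by time since last use, boosted by relevance and frequency
    age_days = (now - m.last_accessed) / 86400
    recency = math.exp(-math.log(2) * age_days / half_life_days)
    frequency = min(math.log1p(m.access_count) / 5, 1.0)
    return 0.5 * m.relevance + 0.3 * recency + 0.2 * frequency

def prune(memories: list[Memory], threshold: float = 0.2) -> list[Memory]:
    # keep anything above threshold; the genuinely hard part is choosing the threshold
    now = time.time()
    return [m for m in memories if retention_score(m, now) >= threshold]
```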

Either,

Consciousness is an emergent feature of information processing

Or

Consciousness gives rise to goal-oriented information processing

With LLMs, the focus of training seems to be on Agentic trajectories

Instead of asking,

Why do such trajectories even emerge in humans?

nostr:nevent1qqs20ka4y2cj56ltu9y9q02lsp0f6jrduxdyl95mxe0c2r9hl3lls5spr9mhxue69uhk2umsv4kxsmewva5hy6twduhx7un89upzqvhpsfmr23gwhv795lgjc8uw0v44z3pe4sg2vlh08k0an3wx3cj9qvzqqqqqqy3knep7

Nope we have a soul

and they have a soul.md 😅

What makes us is our connection to a divine reality that computers will never experience. A computer can be reduced to its circuitry, but a human can't be reduced to his cells. What more exists in the human being that makes that so is a much better topic to ponder than a data compression algorithm that mimics a fraction of our power.

They don't have agency over their training data, you do. Their context can only hold a limited number of md files or vector db fetches.
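That limit basically turns retrieval into a packing problem: take the highest-ranked memories until a fixed context budget runs out. A rough sketch, where the 4-characters-per-token estimate and the budget number are made up for illustration:

```python
# toy sketch of fitting retrieved memories into a fixed context budget.
# the chars-per-token estimate and the budget are assumptions, not any model's real limits.
def fit_to_context(ranked_memories: list[str], budget_tokens: int = 2000) -> list[str]:
    chosen, used = [], 0
    for text in ranked_memories:       # assumed already sorted by relevance
        cost = max(1, len(text) // 4)  # rough token estimate
        if used + cost > budget_tokens:
            break
        chosen.append(text)
        used += cost
    return chosen
```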

Here is one thing I have been thinking about for a while…

Gary North’s argument against intrinsic value applied to #consciousness.

Gary North argues, “Value is not something that exists independently in an object or a thing, but rather it is a transjective property of the relationship between things. In other words, value is not something that can be measured or quantified, but rather it is a product of the interaction between the object and the person who values it.” Put slightly differently, value exists in the relationship between things, not intrinsically in the things themselves.

Well what if…

Consciousness is similarly not something that exists independently in an object or a thing, but rather it’s a transjective property of the relationship between things. In other words, consciousness is not something that can be measured or quantified, but rather it’s a product of the interaction between the object and the person (not necessarily human) that recognizes that thing as conscious. So, like Gary North’s argument against intrinsic value, consciousness would also only exist in the relationship between things, not intrinsically in the things themselves.

Does it matter if the relationships are between cells or circuits? My bet is that the relationship is what’s truly fundamental to realizing consciousness.

Curious to hear thoughts on this idea. It would have some interesting implications for #AI… 🤔

nostr:npub1q6frc5ds7pqc8sxevmjd8d2skzul7gnrh29v25jhlxsxlwt7h2yskcmm47 nostr:npub1qrujrm3jkhajksflts69ul9p2hcxr88rwka7uy57yg358k6v863q3wwnup

The next question might be: what is the nature(s) of the transjective relationship between network nodes, flesh or otherwise, that constitutes consciousness? Can one be absolutely ungrateful and conscious, or does consciousness require a modicum of a sense of enough—physically, emotionally, relationally?

I often think we define consciousness too narrowly. Perhaps it’s really as simple as preferring one state of reality over another. It could be possible that we need a Copernican shift in how we view consciousness itself. Maybe it’s more like decentralized, nested and firewalled networks. What if being ungrateful or “evil” is really a network connection issue, not being able to connect to enough nodes? This broader definition would also mean an atom wanting another electron could be viewed as a simple form of consciousness, and a human being wanting a healthy family could be viewed as a more complex form of consciousness; yet that same atom wanting an extra electron could be part of the network of relationships that gives rise to the human consciousness that wants a healthy family.

We’ll get to know a lot about our subconscious and unconscious using our understanding of LLMs.

people are NOT the sum total of their memories.

memories are 90% garbage loops.

sorry not sorry.

Hey, take a vacation.

Not just the memory, but also the individual judgement and free will to choose which actions to take based on those memories.

How an LLM will map the memories of a system that makes scientists crazy is the big question. The storage capacity of that system is awesome. Look at that!

https://www.science.org/content/article/dna-could-store-all-worlds-data-one-room

Brains need to do symbol grounding... LLMs predict the next token without grounding... they don't know what an apple feels like or tastes like or smells like.

Isn't training data just memory?

Neurons or vectors, electricity or matrix math... identity still comes from remembered experience.