Incontrovertible cryptographic proof has now been established that human conversation is meaningless. In the language of Goldwasser, Micali, and Rackoff (1985), we have the "zero-knowledge property": if the transcript of a conversation can be simulated without the other party even being there, then no information is conveyed by it. Since the Turing test as originally conceived has now been unquestionably (and easily) passed, simulations of human conversation transcripts are regularly produced in subexponential time (with the right computer), proving that the information content of human conversation is zero.
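To make the borrowed notion concrete: the GMR simulation paradigm says a protocol leaks nothing if its transcripts can be sampled without the prover. Here's a minimal sketch using a toy Schnorr identification protocol; the tiny group parameters are chosen for illustration only, not security, and the code is just a demonstration of the idea, not anything from the note itself.

```python
import random

# Toy Schnorr protocol over a small group. g = 2 generates a subgroup of
# order 11 mod 23. Parameters are illustrative, NOT cryptographically secure.
P, Q, G = 23, 11, 2
x = 7                        # prover's secret
y = pow(G, x, P)             # public key

def real_transcript():
    """Honest prover + honest verifier interaction (uses the secret x)."""
    r = random.randrange(Q)
    t = pow(G, r, P)             # prover's commitment
    c = random.randrange(Q)      # verifier's challenge
    s = (r + c * x) % Q          # prover's response
    return t, c, s

def simulated_transcript():
    """Simulator: no prover, no secret -- sample (c, s) first, solve for t."""
    c = random.randrange(Q)
    s = random.randrange(Q)
    t = (pow(G, s, P) * pow(y, -c, P)) % P   # pow(..., -c, P) = modular inverse power
    return t, c, s

def verifies(t, c, s):
    return pow(G, s, P) == (t * pow(y, c, P)) % P

# Both kinds of transcript pass verification and are identically distributed,
# so the transcript alone reveals nothing about x.
assert all(verifies(*real_transcript()) for _ in range(100))
assert all(verifies(*simulated_transcript()) for _ in range(100))
```

The punchline is the second assertion: perfectly valid-looking transcripts come out of a procedure that never touched the secret, which is the formal sense in which such a transcript carries "zero knowledge".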

Of course, that's stated in a deliberately silly way for comic effect (though there is an intriguing analogy in there, too). The closer-to-accurate deduction is that humanness is no longer interesting *in the context of verbal conversation*, if indeed, over time, these LLMs can fool us, or be clearly better than us, at it in every possible context.

One part that fascinates me is how the bar keeps getting set higher for consciousness; in the 80s, or maybe the 90s, if you posited computers that could *easily* pass the Turing test, people would at least seriously consider the question of consciousness. Somehow we have managed to push that far out of mind, except for the occasional "blip" in the news when some scientist or engineer is concerned about AIs "going rogue" or "having rights". But almost no one talks about AGI in terms of consciousness, only as a long-term danger.


Discussion

I’ve been obsessed with trying to characterize what the problem is. Here is my first crack at it.

https://open.substack.com/pub/trbouma/p/intentional-theory

Interesting thoughts here, about "intentionality". I'll need to read it properly before saying anything else :)

I’d be keen on your perspective. I’ve been trying to get to the root of what’s exactly different this time around in infrastructure.

I think the LLMs have revealed how little consciousness was going on in the first place.

That's not unreasonable (there's a sense in which most people, most of the time, are not acting or thinking with much consciousness), but it depends on what consciousness really is, I guess.

Categorical questions ("is") tend to be both vociferously debated and patently unprofitable.

I tend to discount the (unstated) benefit of categorizing a thought process as "consciousness" vs. not.

OK but I don't think this is just a categorization problem, the interesting problem here, for me, is defining consciousness. Particularly because of ethics.

I don’t see how ethics requires hard categorization of consciousness. Ethical questions routinely deal with varying levels of consciousness: animals, say, or people with diminished capacity, such as coma patients.

Yeah, I agree it's not binary.

We don't even know what consciousness is; we just want to feel special.

Yeah, but the first half still applies even discounting the second half, right? If there weren't an ethical question it wouldn't be worth worrying about, but I think there is one.