Privacy is normal.
Anti-privacy (AKA surveillance) is abnormal.
To all intelligence agents, political operatives, collectivists, and bureaucrats everywhere: that you want to know what I am thinking, doing, and saying, where I'm going, and who I'm associating with does not make it OK for you to know those things.
That you want to know those things makes you a pathological human. That you think you have a right to know them makes you a dangerous human.
New essay on substack: Who is the Enemy?
https://open.substack.com/pub/iamascientist/p/who-is-the-enemy
The Most Dangerous Force in the World in 2023 (in one sentence)
The most dangerous and divisive force in the world right now is the apparently complete lack of awareness on the part of self-styled, developed-world “liberals” (which I put in quotes because the version of “liberalism” or “progressivism” that pertains now has nothing whatsoever to do with the principles, values, and priorities of what was called “liberalism” up to about 2016) that their program of “inclusiveness” and “diversity”, enforced in practice by the most extreme measures of censorship, persecution, marginalization, and even criminalization of any speech and thought that doesn’t conform to their preferred narratives, is ouroborously hypocritical and entirely self-negating.
Agree that revolution is not a permanent solution, and I was not intending to present it as such, though I can see how it might read that way.
Instead, as implied in the quote from Jefferson at the end of the essay, I think it's a periodic necessity.
I also think you could hope that humanity might one day mature to the point where cycles of crapification and cleansing were no longer necessary, but that's not where we are now.
New essay up on Substack:
Corruption is Inevitable; Revolution is Essential
https://iamascientist.substack.com/p/corruption-is-inevitable-revolution
The absurd election of Trump was those angry people growling. Unfortunately, the political left in America interpreted that growling not as a warning related to their own, let's call it "questionable", behavior, but instead as an indictment of the mental health and morality of the numerical majority of Americans.
I'd say that the most important and relevant thing they are ignorant of is the fundamental hypocrisy at the core of nearly all the values they are currently mindlessly championing; Ardern, for example, calling for government regulation of online speech as a means of protecting free speech.
She (using her here as a representative example of the political left in the west) really does not seem to be aware at all of the self-contradictory nature (much less the embedded elitist, superior tone) of this sentence: "As leaders, we are rightly concerned that even the most light-touch approaches to disinformation could be misinterpreted as being hostile to the values of free speech that we value so highly,"
I think this is partly true of some - today, unfortunately, perhaps most - humans. Though to be more precise I’d formulate it like this: many humans today, by habit, function cognitively in a way analogous to LLMs; rather than interacting at the level of understanding, depth, and meaning, they merely regurgitate semantically valid linguistic constructions pieced together from sources they have read or listened to previously.
Such humans (and LLMs) are functioning in language like sophisticated parrots. All manner of clever AND USEFUL (like a tool is useful) results are born of this type of surface rearrangement of language. Unfortunately, this mode of linguistic functioning is bereft of everything meaningfully rich and deep that human minds are capable of. It's exactly NOT 'sentience' (or even anywhere in the same ballpark) in the sense that most people intend and understand that word. And it's not "close" or "getting there", because there is no pathway from rearranging linguistic symbols to sentience; it's a different thing altogether.
Artificial intelligence (today) is perfectly termed; it's precisely not authentic intelligence. It's a facsimile; a clever and useful simulacrum (in the exact sense of Baudrillard). I don't know of any reason that natural ('authentic') intelligence or sentience could not be instantiated in man-made machines. But LLMs are not that and will never be that; they are something altogether different from that.
I like that cartoon.
Here's Mickey's epistemology applied to mRNA covid vaccines by a molecular biologist:
#[0]
sed -i 's/try and/try to/g' Internet
On the scale of what is altogether, differences between humans and bacteria are footnotes.
But on the human scale, the difference between competence and incompetence is GIGANTIC.
100%
It's too bad that human collectives seem to need the pendulum to swing to such painful and destructive extremes over and over, rather than simply recognizing where health is on the trajectory and settling there straightaway.
The authors of that blog post are among the stupidest smart people I've ever experienced.
LOL - I only meant that they buzz around incessantly and annoyingly within the "air" of language, like mosquitoes. In fact, it's a bad analogy, because mosquitoes are self-producers (biological entities) and thus, in my view, far more interesting / miraculous.
This is all exactly correct in my view. LLM AIs are a powerful parlor trick. They are not anything like an intelligence in the biological sense of that term, and they are never going to be.
The primary difference that makes a difference (Bateson) between an LLM and any and every form of biological / natural intelligence is that biological entities are selves and LLMs are not selves. In fact, the defining characteristic of a biological system is that it is organized to produce a self (autopoiesis). That is the line that demarcates the realm of the living from the realm of the non-living (note well: it's not self-reproduction, which is an oxymoron; it's self-production).
Biological entities have agency and intentionality that is the result of their self-producing organization. Effectively, because they can die, they have desire. And desire drives all intention.
LLMs have nothing like that at all. No self, no death, no desire, no intentionality. It's a categorical absurdity to refer to the processing of an LLM with a pronoun ("you"), and it's an act of intellectual vandalism that AI developers have programmed LLM chat systems to produce speech that refers to a 100% non-existent "I".
"Do nothing" is almost always better than "do something."
If you're going to do anything, do the RIGHT thing.
When I wrote that human activity was "causing" global warming, I really meant "contributing to".
https://en.wikipedia.org/wiki/Why_Most_Published_Research_Findings_Are_False
In addition to the main result, Ioannidis lists six corollaries for factors that can influence the reliability of published research.
Research findings in a scientific field are less likely to be true:
the smaller the studies conducted;
the smaller the effect sizes;
the greater the number and the lesser the selection of tested relationships;
the greater the flexibility in designs, definitions, outcomes, and analytical modes;
the greater the financial and other interests and prejudices;
and the hotter the scientific field (with more scientific teams involved).
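These corollaries all fall out of the paper's central quantity: the positive predictive value (PPV), the probability that a claimed positive finding is actually true. Here's a minimal Python sketch of the PPV formulas as given in Ioannidis (2005), where R is the prior odds that the tested relationship is true, alpha and beta are the Type I and Type II error rates, and u is the bias term; the example numbers are my own illustrative assumptions, not data from the paper:

# PPV of a claimed research finding, per Ioannidis (2005)
#   R     = prior odds that the tested relationship is true
#   beta  = Type II error rate (power = 1 - beta; small studies -> low power)
#   alpha = Type I error rate (conventionally 0.05)
#   u     = bias: proportion of analyses that would not otherwise have been
#           positive but get reported as positive anyway
def ppv(R, beta, alpha=0.05, u=0.0):
    true_positives = (1 - beta) * R + u * beta * R
    all_positives = R + alpha - beta * R + u - u * alpha + u * beta * R
    return true_positives / all_positives

print(ppv(R=1.0, beta=0.2))         # well-powered, plausible, unbiased: ~0.94
print(ppv(R=0.1, beta=0.7, u=0.2))  # small, long-shot, modestly biased: ~0.15

Each corollary pushes one of these inputs the wrong way (lower power, lower prior odds, more bias, more relationships tested), and the PPV falls accordingly.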
Now with that in mind, think about climate science. I'm not denying anthropogenic global warming; I'm just saying that most research over the last 10-20 years, in a field where you cannot have controlled studies, probably comes to false conclusions.
Depending on what you mean by "false", I agree. Almost certainly human activity is causing global warming. But I don't think we have any basis whatsoever to understand the implications of that.
People thinking and behaving as though this realm we share maps cleanly to binary (true / false) logic is a real problem in science and metascience (and policy based thereon).
Ioannidis seems to be every bit as guilty of this fallacy as those whose work he is critiquing.
Models are *models* and the map is not the territory.
So to me, the "falsity" in the conclusions is not that the truth is the opposite of whatever is being claimed. It's that the underlying assumption that having conducted the study and run your stats gives you someplace to stand to make firm statements about reality is absurd.
That's exactly not science.

