I really enjoyed this point Long made — which is that ChatGPT users seem particularly affected by the bot’s *apologies*, when it gets something wrong and is told “yeah no that’s wrong”

They think the bot is learning

But it’s not. That’s just a canned reply

Yet it has a powerful psychological effect on human users

3/x


This is something I wrote about a few months back, when that Google engineer was claiming that LaMDA, Google’s large language model, was sentient

When you looked at some of the records of his chats with the bot, what leapt out was that the chatbot’s replies evoked *vulnerability*

Sherry Turkle has written a lot about how this is a classic trick of bot makers and robot creators — a bot that seems needy or fallible seems more human: https://clivethompson.medium.com/one-weird-trick-to-make-humans-think-an-ai-is-sentient-f77fb661e127

4/4