This is something I wrote about a few months back, when that Google engineer was claiming that LaMDA, Google's large language model, was sentient.

When you looked at some of the records of his chats with the bot, what leapt out was that the chatbot's replies evoked *vulnerability*.

Sherry Turkle has written a lot about how this is a classic trick of bot makers and robot creators — a bot that seems needy or fallible seems more human: https://clivethompson.medium.com/one-weird-trick-to-make-humans-think-an-ai-is-sentient-f77fb661e127

4/4
