I find these arguments absurd.

> We have to do X, Y, and Z to prevent a technology that has not yet been invented from becoming skynet.

They even admit that there's no risk of LLMs advancing to the point of AGI.

AGI technology is currently imaginary and therefore so are the risks.

It's just fear porn to persuade regulators to limit competitors, IMO

nostr:note18cw4lwtdgzk4u0z4j07mu80sruumn4099k4smejfj6m0zaysk5dq8fffcu


Discussion

They should be clearer. They should say something like:

"Deepmind answers the question by summarizing previously human writings on the internet"

Otherwise they are attributing agency to the language model.

I don't think the fear is really about LLMs becoming sentient; it's more about how the State will use them to establish a social credit score.