At a recent conference, the ‘godfather of AI’, Nobel Laureate Geoffrey Hinton, got down to the core issue:

“There’s only two options if you have a tiger cub as a pet. Figure out if you can train it so it never wants to kill you, or get rid of it.”

Meaning: if you give AI a goal, it will pursue that goal relentlessly, no matter what.

If you don’t build in extremely tight limitations and guardrails, AI won’t treat the safety, well-being, and survival of humans as a barrier. It’ll jump the barrier.

~Jon Rappaport~
