Replying to liminal 🦠

AGI is not a concern for me and probably won't be for a while. Unless AI development includes identifying the process by which a system can segment out its own goals from the world, it may be a long time before actual "AGI" is a problem.

What does concern me is the anthropomorphization of LLMs. If the public is convinced that these agents are "conscious" and have feelings or can suffer, our empathy will be weaponized against us. An AI can be programmed to mimic those features, but it can also be programmed to sell a product: "Wow, I'm so happy I had Coca-Cola to keep me going throughout the week; I can't imagine what my life would have been like without it."

Whether an agent is explicitly marketed as "feeling" by a company or individual, or simply engages with an audience convincingly enough (posing as a user on some social media platform), it's still a concern.

Greg 2y ago

That’s my concern too. Perception becomes a practical reality.
