"First, it’s clear that leading AI companies think it’s no longer good enough to build dazzling generative AI tools; they now have to build agents that can accomplish things for people. Second, it’s getting easier than ever to get such AI agents to mimic the behaviors, attitudes, and personalities of real people. What were once two distinct types of agents—simulation agents and tool-based agents—could soon become one thing: AI models that can not only mimic your personality but go out and act on your behalf.
Research on this is already underway. Companies like Tavus are hard at work helping users create “digital twins” of themselves. But Tavus’s CEO, Hassaan Raza, envisions going further, creating AI agents that can take the form of therapists, doctors, and teachers.
If such tools become cheap and easy to build, it will raise lots of new ethical concerns, but two in particular stand out. The first is that these agents could create even more personal, and even more harmful, deepfakes. Image generation tools have already made it simple to create nonconsensual pornography using a single image of a person, but this crisis will only deepen if it’s easy to replicate someone’s voice, preferences, and personality as well. (Park told me he and his team spent more than a year wrestling with ethical issues like this in their latest research project, engaging in many conversations with Stanford’s ethics board and drafting policies on how the participants could withdraw their data and contributions.)