Researchers have shown that LLMs can be used to induce false memories [1]. Participants watched a video of a crime and then interacted with an LLM tasked with asking misleading questions designed to manipulate their recollection of events.
I’m sure lawyers are yawning at this, but it’s significant because now it can be done at scale and with precision.
1. “Conversational AI powered by large language models amplifies false memories in witness interviews” — https://arxiv.org/html/2408.04681v1