Scientists Devise Technique to Make AI Models Mimic Specific People
Researchers have devised a technique for training artificial intelligence models to imitate specific people based on just two hours of interviews, creating virtual replicas that can mimic an individual’s values and preferences.
Responses to common social science survey questions given by AI simulations of at least a thousand people closely matched the responses of their human originators, said the researchers, from Stanford University, Northwestern University, the University of Washington and Google DeepMind.
The generative agent architecture combined two-hour interviews of real people with a large language model. The interview questions, developed by sociologists, probed basic details of participants’ childhoods, major life events and attitudes on subjects such as racism and policing.
The researchers fed the study participants’ responses into an AI agent architecture that injected them into the model’s prompt whenever the LLM agent was queried. The approach is made possible by advances in long-context understanding, which allow AI models to handle millions of tokens, compared to just a few thousand a year earlier, letting the model better imitate the person it simulates. Improved memory capacity also allowed the model to handle multi-step decision-making by retaining sequential prompts and responses.
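In outline, that architecture amounts to prepending a participant’s full interview transcript to every query the agent receives. Here is a minimal sketch of the pattern, using the OpenAI chat API purely as a stand-in for a long-context backend; the model choice, prompt wording and function names are illustrative assumptions, not the paper’s implementation:

```python
# Sketch of a simulation agent: the participant's entire interview
# transcript is injected into the prompt before each query. All names
# and the prompt wording are illustrative, not from the study's code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def build_prompt(transcript: str, question: str) -> str:
    """Anchor the model on one individual, then pose the survey question."""
    return (
        "You are simulating the person whose interview appears below.\n"
        "Answer the question exactly as that person would.\n\n"
        f"--- INTERVIEW TRANSCRIPT ---\n{transcript}\n\n"
        f"--- QUESTION ---\n{question}"
    )


def ask_agent(transcript: str, question: str) -> str:
    # A two-hour interview can run to tens of thousands of tokens,
    # which is why long-context models make this approach practical.
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any long-context chat model would do
        messages=[{"role": "user", "content": build_prompt(transcript, question)}],
    )
    return response.choices[0].message.content
```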
“By anchoring on individuals, we can measure accuracy by comparing simulated attitudes and behaviors to the actual attitudes and behaviors,” the researchers said.
The AI agents and the humans they simulated both answered questions from the General Social Survey and the Big Five Personality Inventory, and played economic games such as the Prisoner’s Dilemma. The AI agents’ responses showed an 85% match with those the humans gave.
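For a sense of what an item-level match means, a naive agreement score over categorical survey answers can be computed as below; this toy sketch is not the study’s actual metric:

```python
# Toy agreement score between an agent's survey answers and those of the
# human it simulates. Illustrative only, not the study's actual metric.

def match_rate(human_answers: list[str], agent_answers: list[str]) -> float:
    """Fraction of survey items on which agent and human answered identically."""
    if len(human_answers) != len(agent_answers):
        raise ValueError("answer lists must be the same length")
    agreements = sum(h == a for h, a in zip(human_answers, agent_answers))
    return agreements / len(human_answers)

# Five categorical GSS-style items, four answered identically -> 80%
human = ["agree", "disagree", "neutral", "agree", "agree"]
agent = ["agree", "disagree", "neutral", "agree", "disagree"]
print(f"match rate: {match_rate(human, agent):.0%}")  # match rate: 80%
```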
Meredith Ringel Morris, director of human-AI interaction research at Google DeepMind and a co-author of the paper, hypothesized in a previous paper that it would become common practice within our lifetime for people to create custom AI agents that interact with loved ones and the broader world after their death. “Indeed, the past year has seen a boom in startups purporting to offer such services,” the paper said.
The latest paper deals with what are known as simulation agents, which are designed to replicate human behavior. They differ from the popular tool-based agents, such as those from OpenAI or Anthropic, that automate and perform specific tasks. The idea is that if AI can model human behavior, simulation agents can assist researchers in studies that would be too expensive, impractical or unethical to conduct with real human participants.
Companies such as Tavus already offer services that promise a “digital twin” of users, but the process is tedious, requiring a large dataset to be fed into the AI model to replicate a person’s personality.
The research does have its limitations. The tests the researchers used are standard instruments in social science research for measuring traits such as happiness, openness to new experiences and neuroticism, but they don’t necessarily capture what makes a person’s personality unique. The researchers also found that the AI agents were poor at replicating their human subjects in behavioral tests such as the dictator game, which measures social preferences such as fairness and altruism by giving one participant complete power over the outcome.
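For context on that last point, the dictator game is simple to state: one player unilaterally divides an endowment with a passive recipient, and the share given away is read as a measure of altruism. A minimal sketch, with the endowment size and scoring purely illustrative:

```python
# Minimal sketch of the dictator game: the "dictator" unilaterally splits
# an endowment; the fraction given away is read as a measure of altruism.
# The endowment size and this scoring are illustrative assumptions.

ENDOWMENT = 100  # e.g., $100 to divide


def altruism_score(amount_given: int, endowment: int = ENDOWMENT) -> float:
    """Fraction of the endowment the dictator hands to the passive recipient."""
    if not 0 <= amount_given <= endowment:
        raise ValueError("allocation must be within the endowment")
    return amount_given / endowment


# A purely self-interested dictator gives nothing; an even split scores 0.5.
print(altruism_score(0))   # 0.0
print(altruism_score(50))  # 0.5
```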