
Researchers are looking to use a heterogeneous group of digital-twin agents that replicate the differences and nuances among actual humans.
Digital twins are gaining traction across many applications, and one of the most compelling is representing humans in market research. Can such an approach work and accurately predict what humans want?
That’s the question posed by Olivier Toubia, professor of marketing at the Columbia Business School, speaking at Columbia University’s recent AI Summit.
AI agents, the latest evolution of AI, show promise but tend to be too homogeneous for market research studies, Toubia said. Even “super agents” are too simple for the task. The preferred approach is a heterogeneous group of digital-twin agents that replicate the differences and nuances among actual humans.
The goal is to conduct market research against a diverse panel of virtual humans “representative of imperfect humans to be able to simulate the behavior and predict how they would behave,” he explained. “So can we then capture this heterogeneity across people?” Capturing that human diversity is essential for applications such as opinion surveys, market research, and creativity, according to Toubia.
Toubia and his team formed a panel of digital twins based on copies of approximately 2,000 real individuals. The human panelists underwent a series of surveys that covered personality assessments, cognitive tests, economic preference evaluations, and behavioral experiments.
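As a rough illustration of the kind of record such a panel might pair with each twin, a persona could bundle demographics with survey-derived traits. The field names below are illustrative assumptions, not the actual schema from Toubia's study:

```python
from dataclasses import dataclass, field

@dataclass
class DigitalTwin:
    """Hypothetical record for one panelist's digital twin.

    Fields are illustrative only; the real study's survey
    instruments and schema are not public at this point.
    """
    twin_id: int
    age: int
    occupation: str
    location: str
    big_five: dict = field(default_factory=dict)          # personality assessment
    cognitive_scores: dict = field(default_factory=dict)  # cognitive tests
    econ_preferences: dict = field(default_factory=dict)  # e.g., risk attitudes

def build_panel(records):
    """Assemble a heterogeneous panel from raw survey records."""
    return [DigitalTwin(**r) for r in records]

panel = build_panel([
    {"twin_id": 1, "age": 34, "occupation": "project manager",
     "location": "Texas", "big_five": {"openness": 0.7},
     "cognitive_scores": {"crt": 2}, "econ_preferences": {"risk": 0.4}},
    {"twin_id": 2, "age": 52, "occupation": "nurse",
     "location": "Ohio", "big_five": {"openness": 0.3},
     "cognitive_scores": {"crt": 1}, "econ_preferences": {"risk": 0.7}},
])
print(len(panel))  # 2
```

The point of keeping the full trait profile, rather than just a one-line persona, is exactly the question Toubia raises next: whether a thin demographic sketch is enough to simulate behavior.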
See also: Another Avenue for Digital Twins: Behavioral Modeling for Banks
More Research on Digital Twins
In the future, Toubia and his co-authors plan to create a common benchmark dataset that researchers and companies can use to test and improve digital twin approaches. “Does it actually work?” he asks. “It may not always work. You can get real wrong predictions, maybe biased in different ways.”
Plus, “is it enough just to create a persona that is, say, Alex Smith, 34, a project manager at a tech firm living in Texas? Is that enough? Or do you need more information about the person to capture how they behave and to be able to simulate their behavior in different contexts?”
Aside from asking companies that use digital twin panels, which will provide differing answers, “there’s no uniform benchmark that we can all use to actually test whether this approach works, or evaluate and improve the performance.”
The benchmark Toubia is creating is based on multiple analyses of the test results for the 2,000 human participants, looking for at least 80% consistency across various factors within the tests. He hopes to have the simulated panel publicly available by the end of April, along with an accompanying app and website.
“That means that we cannot really predict more than 80% of your behavior. So we have that as the baseline, and then we’ll be able to see how well we can predict your behavior based on all this other data from you. And then we’ll be able to create these digital twins of these 2,000 people.”
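The 80% figure can be read as a test-retest ceiling: if a person gives the same answer to a repeated question only 80% of the time, no model of that person can do better. A minimal sketch of computing such a consistency baseline, using invented responses for illustration:

```python
def consistency_rate(first_pass, second_pass):
    """Fraction of questions answered identically on two passes.

    This test-retest agreement caps how accurately any digital
    twin could predict the panelist's responses.
    """
    if len(first_pass) != len(second_pass):
        raise ValueError("response lists must align question-by-question")
    matches = sum(a == b for a, b in zip(first_pass, second_pass))
    return matches / len(first_pass)

# Invented example: one panelist answers 10 questions twice,
# changing their answer on two of them.
t1 = ["yes", "no", "no", "yes", "agree", "b", "yes", "no", "a", "no"]
t2 = ["yes", "no", "yes", "yes", "agree", "b", "yes", "no", "a", "yes"]
print(consistency_rate(t1, t2))  # 0.8
```

A twin whose predictions match a panelist's answers at close to this rate would be performing about as well as the panelist's own repeated self.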