Generative AI feels, in many ways, like it was made to disrupt the marketing industry. It can create images and ad copy in seconds, speeding up creative workflows. It can summarize product reviews for consumers, helping drive purchasing decisions. And one day, chatbots may become a platform for paid product mentions—though major developers have avoided it so far.
These are all significant use cases, enough to disrupt online advertising on their own. But some marketers are pushing the envelope even further, developing tools that are more at home in a sci-fi novel than a marketing office.
“We’re still only scratching the surface,” said Stefano Puntoni, professor of marketing at the Wharton School and co-director of Wharton Human-AI Research.
He described a future where marketers can create chatbots with human-like personalities. These personas are meant to mimic real humans, adding a new dimension to market research, and pushing the boundaries of what AI can do in this sector.
I, marketing
“Hey ChatGPT, which of these ads are you most likely to respond to?”
That’s the kind of question some marketers are asking their chatbots after training them on consumer data. The idea, explained Puntoni, is for the AI to personify a character that matches an audience segment. How might a middle-aged White woman working as an accountant in rural Ohio respond to this kind of messaging? What about a 25-year-old man in Mexico, fresh out of college in a major city?
“It is possible to learn useful things about people by talking not to people, but to machines,” said Puntoni.
He calls these digital personalities “synthetic personas” – an evolution of a marketing tool that has existed for years. Marketing professionals have long dreamed up characters to represent the audience segments they are trying to reach. These personas have interests, personality traits, and specific behaviours – all information marketers can use to craft relevant, targeted messages.
But these personas could never talk back to them. Until now. An AI-powered chatbot can take on a persona and engage with marketers directly, as if it were a real consumer. It uses consumer data to mimic how an audience segment thinks, communicates, and behaves. And its responses to marketing messages can hint at how real consumers might react.
That’s the theory, anyway. And there are a couple of ways to set it up, said Puntoni.
One is to feed the AI data about the audience segment you’re trying to reach, then ask it to create a persona based on this information. The Italian coffee company Lavazza recently started using this method: it had its chatbot process large volumes of consumer data and use it to embody audiences from key international markets. “Adele” is used in France, while “Lucy” represents British and American consumers.
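For readers curious how this first approach works in practice, the workflow can be sketched in a few lines of code. This is a hypothetical illustration only – the `build_persona_prompt` helper and the trait fields are assumptions, not Lavazza’s actual implementation. It shows the core idea: summarized audience data becomes a roleplay instruction that an LLM chatbot is asked to stay in character for.

```python
def build_persona_prompt(name, segment):
    """Turn summarized audience-segment data into a roleplay
    instruction for an LLM chatbot.

    `segment` is a dict of traits distilled from consumer research;
    the exact fields below are illustrative, not a real schema.
    """
    traits = "\n".join(f"- {key}: {value}" for key, value in segment.items())
    return (
        f"You are {name}, a persona representing a consumer segment.\n"
        f"Stay in character and answer as this person would.\n"
        f"Your traits:\n{traits}"
    )

# Illustrative (invented) segment data for a French persona like "Adele"
adele = build_persona_prompt("Adele", {
    "country": "France",
    "age range": "30-45",
    "coffee habits": "drinks espresso daily, values quality over price",
})
print(adele)
```

The resulting string would be sent as the system message of a chatbot session, after which marketers can ask the persona questions like the one that opens this section.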
Another way to build a synthetic persona is to create a “digital twin” of a real, human person, said Puntoni. The AI ingests data about an individual, learns from it, and attempts to mimic that person.
Columbia Business School created a dataset for this exact purpose, surveying over 2,000 people with more than 500 questions each. The data can help create digital counterparts of each survey participant – useful not only for marketers, but potentially a boon for psychology, political science, and countless other fields.
AI-powered synthetic personas certainly sound sci-fi. But we’re far from being able to upload a human consciousness to the cloud. Puntoni calls these personas “digital approximations,” not exact replicas.
But that doesn’t make synthetic personas any less impactful.
“Most decisions are taken without any empirical data,” he said. “Because we don’t have time and money to do market research for everything.”
Synthetic personas can help fill that gap. In the absence of focus groups and in-person interviews, marketers can research their audience through digital personalities.
“Maybe it’s good data, maybe it’s bad data,” he said. “But maybe it’s better than no data.”
Where no AI has gone before
Current AI chatbots are more than powerful enough to take on synthetic personas, said Puntoni. The next frontier, though, is human heuristics.
“When consumers make shopping decisions, they use heuristics to make those decisions,” said Yu Ma, Professor of Marketing at McGill University. “It’s not a rational decision-making process.”
Heuristics are the mental shortcuts humans take to process information quickly. We carry with us a body of knowledge that informs how we interpret what’s in front of us, even if we don’t know all the details. If you live in a rainy climate, you don’t need to check the weather before packing an umbrella – you already know it’s a good idea, just in case.
This is also where human bias comes into play, said Ma. If you’re depending only on your heuristics to make decisions, you’re only processing the world through your own lens. You’re not incorporating new information into your decision-making process.
AI, on the other hand, doesn’t take these kinds of shortcuts. It receives new information, then incorporates it into its decision-making.
This is where synthetic personas lose some of their authenticity, and some of their effectiveness for marketers. Because AI doesn’t rely on heuristics, it can’t fully simulate human decision-making. When a synthetic persona engages with a marketing message, it does so with more information than a human would naturally use – and its response is less realistic as a result.
But that doesn’t mean AI won’t get there, said Puntoni. Many studies are underway trying to replicate these human patterns.
“It’s moving very rapidly,” he said.
This article was written by Eric Dicaire, Managing Editor, McGill Delve.
Featured experts

Yu Ma