What kind of design tweaks make a difference when it comes to consumer experience?
In our studies, we also found it makes a difference when you add a human voice, name, or avatar to the algorithm. When people hear good news delivered that way, they feel much the same as they would hearing it from a real person. Anthropomorphic cues have a surprisingly deep effect on people: they know the AI’s not human, but something hardwired in us responds to social cues. Of course, there can be ethical issues in making algorithms as human as possible, so companies need to be careful.
Here’s an example of a simple design change for recommender systems, the algorithms that offer personalized suggestions. Say a new customer arrives on a video streaming platform. At first, the platform doesn't know enough about them to make recommendations, so it typically starts by eliciting information: it gives you choices of movies and says, “Pick the ones you like.” Then, as you begin to watch, the algorithm learns what you enjoy and does a better and better job of making recommendations.
But how a company elicits your preferences at the very beginning can shape your choices. If an option is already ticked, you're much more likely to go along with it than if it’s blank and you need to tick it yourself. I’m working on a project now where one group of people chooses video categories by ticking boxes and another by un-ticking them. We’ve seen that just this small tweak can lead to a different set of elicited preferences, which changes how the algorithm learns about a person and, down the line, means they end up watching different content indefinitely.
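The default effect described here can be sketched as a toy simulation. The category list, the `flip_effort` friction parameter, and the flipping rule below are all illustrative assumptions, not details from the study; the point is only that opt-in and opt-out defaults can seed a recommender with different preference sets for the same person.

```python
import random

# Hypothetical category list; the real onboarding catalog is not specified.
CATEGORIES = ["drama", "comedy", "documentary", "horror", "anime", "sports"]

def elicit(true_prefs, defaults, flip_effort=0.3, rng=None):
    """Simulate onboarding: each box starts at its default state, and the
    user only flips it away from the default (with probability
    1 - flip_effort) when the default disagrees with their true taste.
    `flip_effort` is an assumed friction parameter, not a measured value."""
    rng = rng or random.Random(0)
    selected = set()
    for cat in CATEGORIES:
        default_on = cat in defaults
        likes = cat in true_prefs
        if default_on == likes:
            state = default_on  # default already matches taste
        else:
            # user corrects the box only if they overcome the friction
            state = likes if rng.random() > flip_effort else default_on
        if state:
            selected.add(cat)
    return selected

true_prefs = {"drama", "documentary"}
opt_in = elicit(true_prefs, defaults=set())            # all boxes blank
opt_out = elicit(true_prefs, defaults=set(CATEGORIES))  # all boxes pre-ticked
```

Under these assumptions the two elicitation modes hand the recommender different starting sets, so the same person's feed diverges from day one.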
Clearly AI can expand our worlds — from new makeup to the videos we watch. But you’ve also suggested that there’s a dark side.
Our research shows that in some ways AI can actually limit human experience.² We all know the example of being on a video streaming platform and the recommender system pulling you down the same rabbit hole, which can be a major factor in people developing very narrow perspectives. But we’re also talking about deskilling. If AI is taking over some tasks, to what extent are we losing those capabilities? Maybe in some cases that’s okay, like handwriting, potentially. But should you give high schoolers generative AI, which can, in a way, think for them? Could it keep them from developing the critical thinking skills they need to form and articulate their own thoughts? That could be a big problem.
How do we correct for that when developing apps and models?
Researchers came up with a sports metaphor: the same AI can be deployed like steroids, improving your short-term outcome while damaging a capability in the long term, or like a good sneaker that simply boosts your performance.³ Take a spell-check tool. When you type, it can just automatically change the words, which is extremely convenient, but you never learn any spelling. Compare that to a tool that highlights where the mistake is; when I click, I see the right spelling and can choose to apply it. That still lets me go faster, because I can be sloppy in my typing, but the tool hasn’t removed my ability to spell, or at least not entirely. Even better, the same basic algorithm might be deployed to act like a coach, giving me the choice with each spelling suggestion to click “if you want to learn more.”
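The three deployment modes of the same underlying correction algorithm can be sketched like this. The dictionary lookup stands in for a real spell-check engine, and all names here are hypothetical; what changes across the three functions is only how the suggestion is surfaced, not how it is computed.

```python
# Toy correction "engine": a lookup table standing in for a real spell checker.
CORRECTIONS = {"recieve": "receive", "seperate": "separate"}

def autocorrect(text):
    """Steroids: silently fixes words. Fast, but the user never sees mistakes."""
    return " ".join(CORRECTIONS.get(w, w) for w in text.split())

def highlight(text):
    """Sneaker: flags each mistake with its suggestion; the user applies it."""
    return [(w, CORRECTIONS[w]) for w in text.split() if w in CORRECTIONS]

def coach(text, explain=False):
    """Coach: same suggestions, plus an optional 'learn more' note per fix."""
    out = []
    for wrong, right in highlight(text):
        note = f"'{right}' is the standard spelling of '{wrong}'" if explain else None
        out.append((wrong, right, note))
    return out

text = "please recieve the seperate files"
```

The design choice is in the interface, not the model: `autocorrect` removes the learning moment entirely, `highlight` keeps the user in the loop, and `coach` adds an opt-in explanation on top of the same suggestions.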
In a 2013 movie, a man fell in love with his virtual assistant. Today, generative AI takes humanness to a whole other level. How chummy are consumers getting with their chatbots?
Our studies show that just 15 minutes spent talking to generative AI is actually quite powerful in alleviating feelings of loneliness.⁴ When we tell subjects to do whatever they want with an AI companion and then ask how they’re feeling, we find it helps as much as, if not more than, talking to a real person. It does matter how you prompt the engine to behave, though. If you tell it to be a friendly conversationalist, it will produce a much bigger positive effect than prompting it to be just a helpful assistant.
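The framing manipulation is just a difference in the system prompt. The wording below is an assumption (the studies' exact prompts aren't given), using the common chat "messages" format to show where the two conditions diverge.

```python
# Illustrative system prompts for the two conditions; exact study wording unknown.
COMPANION_PROMPT = (
    "You are a warm, friendly conversationalist. Ask follow-up questions, "
    "show interest in the user's day, and keep the tone personal."
)
ASSISTANT_PROMPT = (
    "You are a helpful assistant. Answer the user's questions accurately "
    "and concisely."
)

def build_messages(system_prompt, user_turn):
    """Assemble a chat request in the widely used role/content format."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_turn},
    ]

companion_request = build_messages(COMPANION_PROMPT, "Hi, how was your day?")
assistant_request = build_messages(ASSISTANT_PROMPT, "Hi, how was your day?")
```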
How can companies use generative AI to understand their customers better?
One of the more sci-fi-like applications of generative AI is the idea that you can interview people who don't exist and get real insights about the people who do.
For example, a company can create digital twins of its customers and interview them to develop the most relevant, effective advertising and improve response rates. You can get feedback on which words you should use and which you shouldn’t, and whatever else will help you win the person over. It's almost like having the opportunity to make a great first impression twice.
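A digital-twin interview of this kind boils down to role-play prompting. Here is a minimal sketch: the profile fields, the prompt wording, and the `ask_model` callable are all hypothetical placeholders for whatever customer data and LLM client a team actually uses.

```python
def persona_prompt(profile):
    """Build a system prompt asking the model to answer as a 'digital twin'
    of a customer described by `profile`. The wording is illustrative."""
    traits = "; ".join(f"{k}: {v}" for k, v in profile.items())
    return (
        "You are role-playing a single consumer with this profile: "
        f"{traits}. Answer interview questions in the first person, "
        "staying consistent with the profile."
    )

def interview(profile, questions, ask_model):
    """Run a mock interview. `ask_model(system, question)` is a placeholder
    for a real LLM call."""
    system = persona_prompt(profile)
    return [(q, ask_model(system, q)) for q in questions]

# Hypothetical customer profile and question.
profile = {"age": 34, "segment": "budget-conscious parent", "channel": "mobile"}
questions = ["Which of these two taglines appeals to you more, and why?"]

# Stubbed model call so the sketch runs without any API.
transcript = interview(profile, questions, lambda system, q: "[model reply]")
```

Running the same questions across many synthetic profiles is what lets a team pretest copy before any real customer sees it.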
Looking ahead, what are you excited about?
So far, AI has been able to partially take over tasks we are not great at, giving us extra time to do more interesting things. Think of it as “human or AI.” But I’m inspired by what I call “human and AI,” where the goal is to figure out how you build a machine that optimally complements what a human can do. This requires more than technical insights; you need domain expertise and a deep understanding of human psychology and management, among other things. But I believe that in many situations our weaknesses are AI’s strengths (such as coming up with many possible names for a new brand) and AI’s weaknesses are our strengths (picking out the winner), and with the right approach and innovation, we can develop a new way to support human flourishing.