Chatbots are posing as friends, romantic partners, and departed loved ones. Now we can add another to the list: your future self.
MIT Media Lab’s Future You project invited young people, aged 18 to 30, to have a chat with AI simulations of themselves at 60. The sims—which were powered by a personalized chatbot and included an AI-generated image of their older selves—answered questions about their experience, shared memories, and offered lessons learned over the decades.
In a preprint paper, the researchers said participants found the experience emotionally rewarding. It helped them feel more connected to their future selves, think more positively about the future, and feel more motivated to work toward future goals.
“The goal is to promote long-term thinking and behavior change,” MIT Media Lab’s Pat Pataranutaporn told The Guardian. “This could motivate people to make wiser choices in the present that optimize for their long-term wellbeing and life outcomes.”
Chatbots are increasingly gaining a foothold in therapy as a way to reach underserved populations, the researchers wrote in the paper. But they’ve typically been rule-based and narrow—that is, hard-coded to help with a particular condition, such as autism or depression.
Here, the team decided to test generative AI in an area called future-self continuity—or the connection we feel with our future selves. Building and interacting with a concrete image of ourselves a few decades hence has been shown to reduce anxiety and encourage positive behaviors that take our future selves into account, like saving money or studying harder.
Existing exercises to strengthen this connection include letter exchanges with a future self or interacting with a digitally aged avatar in VR. Both have yielded positive results, but the former depends on a person being willing to put in the energy to imagine and enliven their future self, while the latter requires access to a VR headset, which most people don’t have.
This inspired the MIT team to make a more accessible, web-based approach by mashing together the latest in chatbots and AI-generated images.
Participants provided basic personal information, past highs and lows in their lives, and a sketch of their ideal future. The researchers then used this information, along with OpenAI’s GPT-3.5, to make custom chatbots with “synthetic memories.” In an example from the paper, a participant wanted to teach biology. So, the chatbot took on the role of a retired biology professor—complete with anecdotes, proud moments, and advice.
To make the experience more realistic, participants submitted images of themselves that the researchers artificially aged using AI and added as the chatbot’s profile picture.
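The paper doesn’t publish its code, but the core of the setup—turning a participant’s questionnaire answers into a persona prompt with “synthetic memories”—can be sketched in a few lines. This is a hypothetical illustration: the function and field names are invented, and the real system’s prompting is surely more elaborate.

```python
# Hypothetical sketch of assembling a "future self" persona prompt from
# questionnaire answers. All names here are illustrative, not from the paper.

def build_future_self_prompt(profile: dict) -> str:
    """Turn a participant's answers into a system prompt with synthetic memories."""
    return (
        f"You are {profile['name']} at age 60, talking with your younger self, "
        f"who is currently {profile['age']}. You pursued your dream of becoming "
        f"a {profile['ideal_future']} and are now retired.\n"
        "Draw on these memories when you answer:\n"
        f"- A high point in your life: {profile['past_high']}\n"
        f"- A low point in your life: {profile['past_low']}\n"
        "Share anecdotes, proud moments, and lessons learned in a warm, "
        "first-person voice."
    )


if __name__ == "__main__":
    profile = {
        "name": "Alex",
        "age": 24,
        "ideal_future": "biology professor",
        "past_high": "winning a national science fair",
        "past_low": "failing an organic chemistry exam",
    }
    system_prompt = build_future_self_prompt(profile)
    print(system_prompt)
    # The prompt would then seed a chat model call, e.g. with the OpenAI client
    # (requires an API key, so shown here only as a comment):
    # client.chat.completions.create(
    #     model="gpt-3.5-turbo",
    #     messages=[{"role": "system", "content": system_prompt},
    #               {"role": "user", "content": "What do you remember most?"}])
```

Everything the "older self" says then flows from this seeded persona, which is why the chatbot in the example above could reminisce like a retired professor.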
Over three hundred people signed up for the study. Some were in control groups while others were invited to have a conversation with their future-self chatbots for anywhere between 10 and 30 minutes. Right after their chat, the team found participants had lower anxiety and a deeper sense of connection with their future selves—something that has been found to translate to better decision-making, from health to finances.
Chatting with a simulation of yourself from decades in the future is a fascinating idea, but it’s worth noting this is only one relatively small study. And though the short-term results are intriguing, the study didn’t measure how durable those results might be or whether longer or more frequent chats over time might be useful. The researchers say future work should also directly compare their method to other approaches, like letter writing.
It’s not hard to imagine a far more realistic version of all this in the near future. Startups like Synthesia already offer convincing AI-generated avatars, and last year, Channel 1 created strikingly realistic avatars for real news anchors. Meanwhile, OpenAI’s recent demo of GPT-4o shows quick advances in AI voice synthesis, including emotion and natural cadence. It seems plausible one might tie all this together—chatbot, voice, and avatar—along with a detailed backstory to make a super-realistic, personalized future self.
The researchers are quick to point out that such approaches could raise ethical problems if an interaction depicts the future in a way that drives harmful behavior in the present, or endorses negative behaviors outright. This is an issue for AI characters in general—the greater the realism, the greater the likelihood of unhealthy attachments.
Still, they wrote, their results show there is potential for “positive emotional interactions between humans and AI-generated virtual characters, despite their artificiality.”
Given a chat with our own future selves, maybe a few more of us might think twice about that second donut and opt to hit the gym instead.
Image: MIT Media Lab