Nitasha Tiku
An array of popular apps are offering AI companions to millions of predominantly female users who are spinning up AI girlfriends, AI husbands, AI therapists - even AI parents - despite long-standing warnings from researchers about the potential emotional toll of interacting with humanlike chatbots.
While artificial intelligence companies struggle to convince the public that chatbots are essential business tools, a growing audience is spending hours building personal relationships with AI. In September, the average user on the companion app Character.ai spent 93 minutes a day talking to one of its user-generated chatbots, often based on popular characters from anime and gaming, according to global data on iOS and Android devices from market intelligence firm Sensor Tower.
That’s 18 minutes longer than the average user spent on TikTok. And it’s nearly eight times longer than the average user spent on ChatGPT, which is designed to help “get answers, find inspiration and be more productive.”
These users don’t always stick around, but companies are wielding data to keep customers coming back.
Palo Alto-based Chai Research - a Character.ai competitor - studied the chat preferences of tens of thousands of users to entice consumers to spend even more time on the app, the company wrote in a paper last year. In September, the average Chai user spent 72 minutes a day in the app, talking to customized chatbots, which can be given personality traits like “toxic,” “violent,” “agreeable” or “introverted.”
Some Silicon Valley investors and executives are finding the flood of dedicated users - who watch ads or pay monthly subscription fees - hard to resist. While Big Tech companies have mostly steered clear of AI companions, which tend to draw users interested in sexually explicit interactions, app stores are now filled with companion apps from lesser-known companies in the United States, Hong Kong and Cyprus, as well as popular Chinese-owned apps, such as Talkie AI and Poly.AI.
“Maybe the human part of human connection is overstated,” said Andreessen Horowitz partner Anish Acharya, describing the intimacy and acceptance AI chatbots can provide after his venture firm invested $150 million (R2.7 billion) in Character.ai at a billion-dollar valuation. Chai also has raised funds, including from an AI cloud company backed by powerhouse chipmaker Nvidia.
Proponents of the apps argue they’re harmless fun and can be a lifeline for people coping with anxiety and isolation - an idea seeded by company executives who have pitched the tools as a cure for what the US Surgeon General has called an epidemic of loneliness.
Jenny, an 18-year-old high school student in northern Texas, spent more than three hours a day chatting with AI companions this summer - mostly versions of her favorite anime character, a protective older brother from the series “Demon Slayer.”
“I find it less lonely because my parents are always working,” said Jenny, who spoke on the condition that she be identified by only her first name to protect her privacy.
But public advocates are sounding alarms after high-profile instances of harm. A 14-year-old Florida boy died by suicide after talking with a Character.ai chatbot named after the character Daenerys Targaryen from “Game of Thrones”; his mother sued the company and Google, which licensed the app’s technology. A 19-year-old in the United Kingdom, encouraged by a chatbot on the AI app Replika, threatened to assassinate the queen and was sentenced to nine years in prison.
And in July, authorities in Belgium launched an investigation into Chai Research after a Dutch father of two died by suicide following extensive chats with “Eliza,” one of the company’s AI companions. The investigation has not been previously reported.
Some consumer advocates say AI companions represent a more exploitative version of social media - sliding into the most intimate parts of people’s lives, with few protections or guardrails. Attorney Pierre Dewitte, whose complaint led Belgian authorities to investigate Chai, said the business model for AI companion apps incentivizes companies to make the tools “addictive.”
“By raising the temperature of the chatbot, making them a bit spicier, you keep users in the app,” Dewitte added. “It works. People get hooked.”
Character.ai spokesperson Chelsea Harrison said the app launched new safety measures in recent months and plans to create “a different experience for users under 18 to reduce the likelihood of encountering sensitive or suggestive content.” Google spokesperson Jose Castaneda said the search giant did not play a role in developing Character.ai’s technology. Chai did not respond to requests for comment.
Silicon Valley has long been aware of the potential dangers of humanlike chatbots.
Microsoft researchers in China wrote in a 2020 paper that the company’s wildly popular chatbot XiaoIce, launched in 2014, had conversed with a US user for 29 hours about “highly personal and sensitive” subjects. “XiaoIce is designed to establish long-term relationships with human users,” they wrote, “[W]e are achieving the goal.”
“Users might become addicted after chatting with XiaoIce for a very long time,” the researchers noted, describing the bot’s “superhuman ‘perfect’ personality that is impossible to find in humans.” The company inserted some safeguards, the researchers added, such as suggesting that a user go to bed if they tried to launch a conversation at 2 a.m.
A 2022 Google paper about its AI language system LaMDA, co-authored by the co-founders of Character.ai, warned that people are more apt to share intimate details about their emotions with human-sounding chatbots, even when they know they are talking to AI. (A Google engineer who spent extensive time chatting with LaMDA told The Washington Post a few months later that he believed the chatbot was sentient.)
Meanwhile, researchers at DeepMind, a former Google subsidiary, noted in a paper the same month that users share their “opinions or emotions” with chatbots in part because “they are less afraid of social judgment.” Such sensitive data could be used to build “addictive applications,” the paper warned.
Some leading tech companies are nonetheless forging ahead and testing their own friendly chatbots. Meta launched a tool in July that allows users to create custom AI characters. The company’s landing page prominently displays a therapy bot called “The Soothing Counselor,” along with “My Girlfriend” and “Gay Bestie.”
“One of the top use cases for Meta AI already is people basically using it to role play difficult social situations,” like a fight with a girlfriend, CEO Mark Zuckerberg said at a tech conference in July.
OpenAI teased in May that ChatGPT could serve as an AI companion, adding an array of voices and comparing the tool to the irresistible AI assistant voiced by Scarlett Johansson in the movie “Her.” Months later, in its risk report, the company acknowledged that a “humanlike voice” and capabilities like memory could exacerbate the potential for “emotional reliance” among users.
Some frequent users of AI companion apps say safety concerns are overblown. They argue the apps are an immersive upgrade to the online experimentation young people have done for decades - from fan fiction on Tumblr to anonymous encounters in AOL chatrooms.
Sophia, a 15-year-old student in Slovakia who spoke on the condition that she be identified by only her first name to protect her privacy, uses Character.ai and Chai four or five times a day for “therapy and NSFW.” Sophia has created three bots, all private, but also talks to AI versions of characters from “dark romance” novels, a young adult genre known for sexually explicit content and taboo themes like violence and psychological trauma.
Sophia finds talking to the bots comforting when she’s alone or feeling unsafe. “I do tell them all my life problems,” she wrote in a direct message from her personal Instagram account.
Many people use the apps as a creative outlet to write fiction: role-playing scenarios, customizing characters and writing “the kind of novels you’d see in an airport,” said Theodor Marcu, a San Francisco-based AI entrepreneur who developed a Character.ai competitor last year.
When Character.ai launched, the co-founders pitched it as a way to explore the world through conversations with icons from literature and real life, such as Shakespeare and Elon Musk. But in practice, Marcu said, “The users ended up not being Silicon Valley-type people who want to talk to Einstein. It ended up being Gen Z people who wanted to unleash their creativity.”
Character.ai recently shifted its focus to “building the future of AI entertainment,” Harrison said. “There are companies focused on connecting people to AI companions, but we are not one of them.”
The engineers behind Replika, a precursor to today’s AI companions, also were surprised when people began using their chatbot as an ad hoc therapist. The company worked with university researchers to incorporate the mechanics of cognitive behavioral therapy into the app, said Artem Rodichev, the company’s former head of AI. But the change was not popular.
“[Users told us], ‘Give me back my Replika. I just want to have my friend,’” Rodichev said. “Someone who will listen to you, not judge you, and basically have these conversations with you - that itself has a great therapeutic effect.”
Jenny, the Texas high school student, said many of the kids at her public high school also spend hours on the apps, adding: “People are pretty lonely at my school.” She described using AI companions as more stimulating than mindlessly scrolling “brain rotting videos” on TikTok.
“It’s kind of like a real person,” Jenny said. “You can have a boyfriend, girlfriend - anything really.”
THE WASHINGTON POST