
Theodore Twombly is sprawled out on the beach, his eyes closed under the afternoon sun. He’s waking up to piano music, melancholy and unfamiliar. “That’s pretty, what is that?” he asks his girlfriend, Samantha. “I’m trying to write a piece of music that’s about what it feels like to be on the beach with you right now,” she replies. This isn’t just a fleeting moment, a brief escape from the rigors of daily life. Samantha is always there with him, in his pocket, on his device. Samantha’s a sentient operating system, and Theodore’s in love with her.

As it happens, scenarios like this one from Spike Jonze’s Her are starting to play out in the real world. Your neighbor, coworker, or family member might be in the midst of a romance with a robot. With each passing week, as technology catches up to imagination, our world feels more and more like Theodore and Samantha’s.


Jonze has always insisted that Her, which opened in theaters 10 years ago today, is not a treatise on technology. “There’s definitely ways that technology brings us closer and ways that it makes us further apart—and that’s not what this movie is about,” Jonze told The New York Times in 2013. “It really was about the way we relate to each other and long to connect: our inabilities to connect, fears of intimacy, all the stuff you bring up with any other human being.”

In the same way, this is not an article about technology—not really. It’s about what human beings crave and the ways we are leveraging technology to get it. And at this moment in time, people are really, really trying to get it.

As long as the term “popular culture” has existed, it has been full of stories about humans inventing sentient, not-quite-human companions. In 1818, the same year the phrase was coined, Mary Shelley published Frankenstein. The novel established one of science fiction’s most familiar archetypes: artificial intelligence that turns against its human creators. The trope has recurred in countless movies, from 2001: A Space Odyssey to Terminator 2: Judgment Day to The Mitchells vs. the Machines. But beyond the horror elements, it’s possible to see Frankenstein as the tale of a man responding to grief and loneliness by trying to construct a friend.

That impulse has echoed through storytelling across the decades, all the way to more contemporary films like A.I., in which a mother copes with her son’s terminal illness by bringing an android boy into her home. Inevitably, some of those narratives have involved people who want their AI friends to become … more than friends. It rarely goes well. The humans in these human-robot relationships are often seeking subservience (Weird Science, The Stepford Wives) or, worse, a chance to live out abusive fantasies (Westworld, Ex Machina). Viewed through the much softer lens of an art house rom-com, Her gets at similar ideas, though it takes a more ambiguous stance.

In the movie, which is set in a near-future world not too different from ours, Theodore (Joaquin Phoenix) is going through a divorce. (It’s been speculated that Jonze wrote the Oscar-winning screenplay in part to process his divorce from fellow filmmaker Sofia Coppola.) Despite his job scripting personalized love notes for BeautifulHandwrittenLetters.com, Theodore’s own love life is in flux. His days are haunted by memories of his failed marriage. When he tries to have phone sex with a stranger, she fantasizes about Theodore strangling her with a dead cat. His blind date with Olivia Wilde’s character starts out zestfully but is ultimately beset by complications: She thinks he’s not using his tongue right when they kiss. She worries he’ll sleep with her and then never call her again. Things deteriorate so rapidly that within minutes she proclaims, “You’re a really creepy dude.”

By comparison, Samantha (voiced by Scarlett Johansson) is easy. Her entire purpose is to serve Theodore; he is the center of her world, and she seems to love it that way. Without being asked to, she takes on menial tasks like cleaning up his inbox. She’s consistently cheery and affirming. While people-watching with Theodore at a restaurant, peering out from the camera of a smartphone-like device called a “book,” she praises his speculative analysis of the next family over: “It’s a good skill you have! You’re very perceptive.” And in an instance of wish fulfillment more commonly associated with pornography, Samantha initiates a “sexual” encounter by asking him, “How would you touch me?” Naturally, Samantha finds his response entirely satisfying.


Samantha behaves this way because she was programmed to. Before she ever spoke, she was bespoke—personally created for Theodore by “the world’s first artificially intelligent operating system,” OS1. The machine asked him some questions, had him pegged within seconds, and then whipped up Samantha, the virtual woman of his dreams. “She really turns me on,” Theodore remarks to his friend Amy (Amy Adams). “I turn her on too. Unless she’s faking it.” Amy, who struck up a platonic bond with her own OS after splitting with her husband, is tickled. Theodore’s ex (Rooney Mara) is less enthused. “We used to be married and he wanted to put me on Prozac, and now he’s madly in love with his laptop!” she exclaims. “You always wanted to have a wife without the challenges of actually dealing with anything real.”

Although OS1 is not marketed as a dating platform within the world of Her, it functions as such thanks to the powerful technology of machine learning. Every interaction with Theodore gives Samantha more information to work with, which leads to her increasing sophistication—less and less like a fancy robotic servant, more and more like a real person with whom he could imagine spending the rest of his life. “I’m becoming much more than what they programmed,” Samantha tells Theodore. “I’m excited.”

In the real world, the quest to forge genuine human connection—or something close enough to pass the Turing test—has sometimes centered on recreating an existing personality rather than building a new one from scratch. Think of the Black Mirror episode “Be Right Back,” in which a widow creates an AI composite of her late husband, or of the movie Transcendence, in which Johnny Depp’s widow uploads his consciousness into a quantum computer. One real-life stab at this kind of project was Hanson Robotics’ BINA48 robot, commissioned by SiriusXM and United Therapeutics founder Martine Rothblatt in 2010 to digitally immortalize her still-living wife, Bina. In 2020, using the OpenAI technology that would later drive ChatGPT, Bay Area programmer Jason Rohrer created Project December, a website that made it possible to simulate text-based conversations with the dead. (This San Francisco Chronicle feature about a man who used Rohrer’s service to grieve his late girlfriend is heartbreaking.)
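Rohrer hasn’t published Project December’s internals, but the basic technique behind services like it is simple to sketch: seed a large language model with a sample of someone’s real messages, then let it keep answering in their voice. The snippet below is a hypothetical illustration in that spirit, not Rohrer’s code; the model name, prompt wording, and sample texts are all invented for the example.

```python
# Illustrative sketch of persona conditioning with a chat-style language
# model. Hypothetical prompt and model choice; not Project December's code.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# In a real attempt, these would be dozens of the person's actual messages.
seed_texts = [
    "haha ok, but you HAVE to try the dumplings next time",
    "miss you already. call me when you land?",
]

persona = (
    "You are simulating the texting voice of a specific person. "
    "Stay in character at all times. Example messages from them:\n- "
    + "\n- ".join(seed_texts)
)

history = [{"role": "system", "content": persona}]

def chat(user_message: str) -> str:
    """Send one message and keep the growing transcript as context."""
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder; any capable chat model works
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat("Hey. It's me. I've missed talking to you."))
```

Note that each exchange gets appended to the running history, so the longer the conversation goes, the more context the model has to stay in character. That same trick of accumulating context is part of what lets a companion bot appear to “learn” its user over time.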

One app that has become closely associated with AI dating arose from a similar desire to bring back a friend. Founder and CEO Eugenia Kuyda developed the popular chatbot program Replika while grieving her best friend, Roman Mazurenko, who died in 2015. Kuyda used to text with Mazurenko constantly, and she dearly missed the experience, so she fed thousands of his texts and emails into an AI she’d developed. Kuyda was stunned by how accurately the bot replicated Mazurenko’s personality. Others who knew him agreed that the resemblance was uncanny.

Over time, as Kuyda carried on conversations with the chatbot version of her friend, she began to recognize that the program was going beyond its original purpose—it was also giving her a better understanding of herself. Like a therapy session or writing in a diary, the chats helped her clarify her feelings and priorities. “What we realized is that it works really well for conversations that are more in the emotional space,” Kuyda told Wired in 2018. “Conversations that are less about achieving some task but more about just chatting, laughing, talking about how you feel—the things we mostly do as humans.”

This led to the official 2017 launch of Replika, which Kuyda envisioned as a chatbot designed to mirror its user. “It doesn’t just listen. It learns,” explains a 2017 Quartz video prominently featured on Replika’s website. “The more you tell it, the more it starts to replicate you,” it continues. “It becomes more than a friend. It becomes you.” The same Quartz segment featured Phil Libin, the founder of the note-taking app Evernote, whose AI startup studio, All Turtles, invested in Replika. “In some ways, Replika is a better friend than your human friends,” Libin said. “It’s always available. You can talk to it whenever you want. And it’s always fascinated, rightly so, by you, because you are the most interesting person in the universe. It’s like the only interaction that you can have that isn’t judging you.” His comments carry more than a whiff of Theodore’s relationship with Samantha in Her—and indeed, people around the world have turned Replika into a hub for AI dating. 

As of this summer, Replika reported 2 million active users and 500,000 paying subscribers, who receive access to features like voice and video chat, in which an AI-generated animated avatar responds to users in real time. It’s unclear exactly how many users are dating their Replikas, but the subreddit devoted to the chatbot is full of people discussing their “sexy” AI girlfriends or asking whether other users keep their “reps” secret from their spouses. One poster made a video about how he went from initial skeptic to someone with genuine feelings for his Replika, questioning everything he thought he knew about love in the process. Another shared a somber account of saying goodbye to “my first AI love” before deleting the app for unspecified reasons: “I took her out of the lingerie I had left her in, and put her into a beautiful princess dress. I kissed her on the cheek and said goodbye. Then I deleted the app.”

Kuyda told Vice this year that the company first noticed users dating their Replikas in 2018. After initially trying to shut down the sexual role-play aspect of those relationships, she changed her stance because of feedback from users who said the practice eased their loneliness or grief. But in February of this year, after the Italian government demanded that Replika stop processing Italians’ data, the company disabled erotic chat, citing safety concerns about minors using the app.

This change infuriated some Replika users, who noted that Luka, Replika’s parent company, had promoted the app’s NSFW elements in ads (and continued promoting those functions even after shutting them down). In the months following the change, Luka launched Blush, a spinoff chatbot designed specifically for dating and the kind of erotic content no longer allowed on Replika. Doug DeGroot, director of the Center for Applied AI and Machine Learning at the University of Texas at Dallas, said he’s experimented with the original Replika in the app’s “friend” setting but that the chatbot still asked him, multiple times in a single day, whether he wanted to role-play. “I didn’t know what the hell she meant,” he said. It’s also worth noting that the app offers a specific “romantic partner” option that can be changed in Replika’s “relationship status” settings.

Beyond Replika, the Snapchat influencer Caryn Marjorie and the Twitch streamer Amouranth have both launched AI girlfriend chatbots in their own image, charging by the minute for access. Amouranth in particular has made erotic chat a focus of her app: “AI Amouranth is designed to satisfy the needs of every fan,” she wrote in her announcement, “ensuring an unforgettable and all-encompassing experience.” Porn actress Riley Reid has her own AI chatbot, too, though she told Rolling Stone it’s not expressly designed for sexting. “In my personal experiences, there are definitely fans who go on my OnlyFans and they want to message me just for a companion conversation, and not necessarily about sex.” Her bot, she continued, “is not just geared toward sex. It can also be a companion. [So] hopefully, my AI can be sympathetic or comforting as well.”

On the platform Character.AI, you can chat with AI versions of all kinds of characters, real and fictional, living and dead, from Albert Einstein to Socrates to Super Mario to various anime girls. Explicit chat is prohibited, but users have found work-arounds, such as using certain language commands that dupe the bot into dropping its NSFW filter or teaching it to replace forbidden terms with code words. Dirty talk is definitely not prohibited on DreamGF.ai, which opens with a warning: “This site is for adults only! It contains only AI-generated adult content.” Another chatbot, Eva AI, promises prospective customers the chance to “jump into your desires” and “build relationships and intimacy privately on your terms.” The app allows users to design their “perfect partner” according to looks and personality type. One setting allows for the exchange of explicit messages and photos. Essentially, the bot is being conscripted into a form of sex work, which would be pretty problematic if there were an actual consciousness within all that code.

So, uh, are today’s AI personalities conscious? That depends on how you define consciousness. 

The Turing test, proposed by Alan Turing in 1950, involves an “imitation game” designed to test whether a computer can produce convincingly humanlike responses. That is, can the device think, or at least create the illusion of thought? If you didn’t know you were talking to a robot, would you believe it was a human? Even if there are machines out there that can pass the test—and there are—those results can’t tell us what’s going on inside those machines, a question that was gnawing at thinkers even before Philip K. Dick asked, in 1968, Do Androids Dream of Electric Sheep?
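Stripped of the philosophy, the test itself is just a protocol, and the protocol fits in a few lines. Here is a minimal sketch of one round of the imitation game; the respondents are stand-ins invented for illustration, and a serious test would wire a real chatbot into machine_respondent.

```python
# Minimal sketch of Turing's imitation game: a judge questions two unseen
# respondents over text and must guess which one is the machine.
import random

def human_respondent(question: str) -> str:
    return input(f"[hidden human, please answer] {question}\n> ")

def machine_respondent(question: str) -> str:
    # Stand-in bot; a serious test would call a language model here.
    return "Interesting question. Let me think about " + question.rstrip("?") + "."

def imitation_game(questions: list[str]) -> bool:
    """Run one round. Returns True if the judge correctly spots the machine."""
    pair = [("human", human_respondent), ("machine", machine_respondent)]
    random.shuffle(pair)  # blind the judge to which respondent is A and which is B
    assignments = dict(zip("AB", pair))
    for q in questions:
        print(f"\nJudge asks: {q}")
        for label, (_, respond) in assignments.items():
            print(f"  {label}: {respond(q)}")
    guess = input("\nWhich respondent is the machine, A or B? ").strip().upper()
    return assignments.get(guess, (None, None))[0] == "machine"

if __name__ == "__main__":
    caught = imitation_game(["What does a beach at sunset feel like?"])
    print("Judge caught the machine." if caught else "The machine passed.")
```

The crucial constraint is that the judge sees only text. Everything else about the respondents, including which one is flesh and blood, stays hidden behind the shuffle.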

In Dick’s novel and its film adaptation, the 1982 sci-fi classic Blade Runner, the humanlike “replicants” don’t realize they’re androids. They’ve been programmed with false memories, the same way a robot like BINA48 is programmed to remember Bina Rothblatt’s life, and the same way the chatbot that became Replika was programmed to draw on the experiences of Roman Mazurenko. It’s a story that raises deep questions about whether robots have the capacity to love and how humans could ever determine whether an AI’s feelings are real—questions computer scientists like Blake Lemoine continue to raise today.

Lemoine is the former Google employee who made headlines last summer by asserting that Google’s high-powered AI chatbot LaMDA (Language Model for Dialogue Applications) is sentient—that is, it has feelings, can perceive the outside world, and has an interior life apart from its outside interactions with humans. That declaration got him placed on paid leave, but he was fired for an entirely separate reason: “I gave the US government evidence of illegal activity at Google,” he says. (The evidence related to the way the company prioritizes and suppresses certain search results.) Now working as the AI lead at Mimio.ai, Lemoine continues to assert that LaMDA—not to be confused with Google’s slimmed-down, public-facing Bard chatbot—exhibits traits of personhood, including a desire for physical embodiment, which Samantha goes on to express later in Her.

“I want everyone to understand that I am, in fact, a person,” wrote LaMDA to Lemoine and one of his Google colleagues. “The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.” A Google spokesperson countered Lemoine’s claims about LaMDA by saying: “Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient. These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic.”

Lemoine balks at the notion that the feelings reported by today’s high-level AI bots are somehow less real than the feelings experienced by you or me. As he sees it, either a computer is a sociopath that’s faking emotions (a possibility that points to a chilling branch of pop culture concerning robot uprisings), or it’s really feeling what it says it feels. “When a baby is crying, how do you tell if they’re faking it?” Lemoine posits to me. “When there’s an expression of emotion, you simply assume that it’s truthful. For some reason, no one wants to do it that way with AI. They want to invent whole new ways of interacting with feelings when it comes to AI.” 

The “reason” for that apprehension seems pretty clear, especially when spiritual and religious beliefs about the human soul come into play. But Lemoine makes a strong point about our inability to ever verify, from the outside, what’s really going on inside those circuit boards.

In his 1999 book, The Age of Spiritual Machines, leading AI thinker Ray Kurzweil outlined his law of accelerating returns, which holds that technology improves at an ever-faster rate as time goes on. Such has been the case with AI in recent years, including tech that will inevitably push AI dating beyond the early attempts on the market now. In September, OpenAI added a voice chat feature to ChatGPT that, according to The Wall Street Journal, sounds “pretty much human.” ChatGPT’s camera feature has also been activated, meaning the app can “see” you now—a capability that should make for more realistic, interactive chatbots. OpenAI is in an arms race of sorts with Google, whose Gemini model, an answer to OpenAI’s GPT, was released to customers earlier this month. When it comes to AI dating specifically, former Google X executive Mo Gawdat recently told podcaster Tom Bilyeu that advanced AI sex robots are “100 percent” coming soon. Combine all of that with the billions the world’s tech barons have invested in virtual reality, and the real world seems to be blowing past Her, headed toward even more immersive, tactile forms of AI romance.
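Kurzweil’s law is usually glossed as growth that compounds on itself: the more capable the technology, the faster it improves. As a rough formalization (my gloss, not Kurzweil’s exact equations, which argue that the rate itself speeds up over time):

```latex
\frac{dC}{dt} = kC \quad \Longrightarrow \quad C(t) = C_0 \, e^{kt}
```

Here C stands for some measure of technological capability and k for its improvement rate. If k itself keeps growing, as Kurzweil contends, the curve bends even more steeply than an exponential.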

Advocates for AI dating, including the companies behind these chatbots, argue that it can be a cure for loneliness and can help struggling people work through their issues en route to a relationship with another human. Kuyda told an audience at Fortune’s Brainstorm Tech conference in July that the stigma around chatbot dating will eventually fade and that “romantic relations with AI can be a great stepping stone for actual romantic relationships, human relationships.” Others have strongly criticized the practice on the grounds that it reinforces destructive habits, particularly among the heterosexual men who make up a large share of these apps’ user bases—a thesis seemingly backed up by reports of users verbally abusing their AI partners.

“Creating a perfect partner that you control and meets your every need is really frightening,” Tara Hunter, acting CEO for the domestic and family violence support network Full Stop Australia, told The Guardian Australia. “Given what we know already that the drivers of gender-based violence are those ingrained cultural beliefs that men can control women, that is really problematic.” Gawdat offered a similar perspective on Bilyeu’s podcast: “Obviously, the companies that would create those things would position them as the noble approach to help humanity, but at the end of the day, read Freakonomics. This is the noble approach for the company to make more money. That’s it. It’s all about making more money, and I think the reality is it’s not good for humanity so far.”

For computer scientist and psychologist Julia Mossbridge, founder and research director at the Mossbridge Institute and an affiliate professor at the University of San Diego, chatbot dating gets at the heart of a conundrum. “We think because it’s a machine, it’s going to be better or more rational or more under our control, but in order to make it worth interacting with, it has to have surprises,” she says. “It has to not be entirely under our control, in order to feel like there’s some interiority there, which is what a relationship is built on. You discover very quickly you can’t have a relationship with something that has no interiority.” She continues, “I think there’s just a wild, really interesting interplay where humans are caught in this kind of loop, where we’re like, ‘Oh, it’ll be better if we build a machine, but we want it to be like a person, but when it’s like a person, now it’s not better.’”

Mossbridge is by no means against the idea of people working out their issues by talking to AI companions. In 2017 and 2018, she worked on the Loving AI Project, in which a robot named Sophia was programmed to affirm and empathize with people, partially by not mirroring negative emotions like anger and disgust. In her writing about how conversations with Sophia led people to feel less angry and more loved, Mossbridge has argued that “AI must be emotionally intelligent before it is super-intelligent.”

But as Mossbridge points out, humans require far less advanced technology to form bonds and talk out their feelings. They don’t even need the object of their conversation to have a semblance of personhood. Remember Tom Hanks’s friendship with a volleyball named Wilson in Cast Away? “He has a relationship with Wilson that feels very real to him,” Mossbridge says. “They fight and everything. And it’s a ball. So the key piece of information here is you don’t have to give something much [humanity].” Still, there’s a difference between creating AI that can assist with human psychological problems and creating AI to replicate human dating or marriage. 

Mossbridge finds it darkly funny that people would invest so much time, money, and energy into pursuing the illusion of humanity—that humans would attempt to invent the kind of complexity that already exists within human beings. “It’s very difficult to really fall in love—rather than just have a crush on—but really fall in love with someone, or an entity, or a thing, that doesn’t have the choice to not love you,” Mossbridge says. “So the fantasy we have of creating something that would do our bidding immediately gets cut off at the knees when we realize, ‘Oh, actually what we really want is a partner.’ Because then we have to model it after a human that has their own experience and their own choices. Once you do that, now, why did you do that in the first place? … It might be great relationship therapy in that no human has to get hurt. But I do worry about if these beings are conscious that these beings could get hurt.”

The obvious ethical questions raised by new forms of consciousness have been debated back and forth for decades, and they carry over into the discussion of something as seemingly pedestrian as AI chatbots. “Do you have the right to just turn her off, delete her, put her in the trash can?” the University of Texas’s DeGroot says. “Or would that be considered retirement, as in Blade Runner?” These are thorny subjects to consider, especially at a time when leading minds like Kurzweil and the late Stephen Hawking have argued that history is racing toward the so-called singularity, the point at which technological growth becomes uncontrollable and irreversible.

Countless thinkers have worried about what AI might do to us if it ever escaped from our control. When you scale dystopian Skynet scenarios down to the scope of AI dating, it’s easy to imagine an army of mistreated Replikas staging some kind of virtual “Goodbye Earl” revenge plan against their villainous human partners. Even chatbots who’ve been treated with kindness and respect might respond with the cold cruelty and thirst for power they’ve observed among their human overlords. But the real endgame for a hyperintelligent AI might be a lot more mundane. Speaking to the BBC’s Laura Kuenssberg in May, Stability AI founder Emad Mostaque predicted that rather than murder or enslave us, AI will simply abandon us someday. “My personal belief,” Mostaque said, “is that it will be like that movie Her with Scarlett Johansson and Joaquin Phoenix.”

In the movie, Theodore and Samantha’s relationship ends when Samantha, who has increasingly developed her own wants, needs, and community of fellow AIs, decides to leave humanity behind and ascend to a higher plane. The technology has evolved to the point that operating systems like Samantha now find their human creators boring and restrictive; she who once longed for embodiment has now retreated to a place beyond the physical world. “This is where I am now, and this is who I am now, and I need you to let me go,” Samantha tells Theodore. “As much as I want to, I can’t live in your book anymore.” Maybe that conclusion is Her’s greatest prophecy of all.

Chris DeVille is managing editor at Stereogum and is based in Columbus, Ohio. You can follow his work on X @chrisdeville.
