When I came out to my mom, she struggled to accept the fact that I was not going to form a family according to her expectations. In time, she came to love and accept the new reality, and to adjust her general perspective on love and relationships. She welcomed a new romantic paradigm in which anyone should be allowed to love whomever they choose, regardless of gender. This journey is one most children and their parents have to go through in some form or another. I would expect to face a similar one as a parent myself.
This made me wonder: what challenges will I come to face? Who will my kid love that I would not have anticipated, and would probably struggle to welcome at first? I started considering that my child might end up loving and dating a robot, an eventuality I might be reluctant to accept. If this happened, would I be willing to learn about it and endorse it? Celebrate it? Would we want, socio-culturally speaking, to welcome relationships between robots and humans?
In this article I want to explore these questions. With the rise of large language models (LLMs) and the development of increasingly human-like robots, robotic partners are becoming a tangible possibility. A recent article in The Economist titled “The Inventor Who Fell In Love With His AI” is suggestive evidence of this trend.
I will explore whether humans could fall in love with robots, whether robots could fall in love with humans, and whether there are ethical limits to the possibility of robot-human love. My main tenet throughout is that for robots to love humans they must have the capacity to (i) choose (self-determination) and (ii) feel (first-person experiences). They must be able to feel love, and they must be able to choose to act on those feelings. This is not only a prerequisite for robots loving humans; it is also a prerequisite for relationships between robots and humans to be deemed ethical. If robots cannot feel, they cannot desire to be with someone, and so they cannot consent to a relationship or a sex act. As a future parent, I would welcome a robotic lover insofar as they too experienced the love my child did and could actively choose to be with them.
Fictional portrayals of romantic relationships between humans and robots abound. The novels Foundation and Do Androids Dream of Electric Sheep? and the films Her and Ex Machina offer renditions of what romance between humans and robots may look like.
Her (2013) portrays loneliness in a hyper-digitalized era, and human-robot relations as a solution to an otherwise all-too-human problem. We are not sure what Samantha, a personal operating system people can purchase at a store, is programmed to do. She seems to fall in love with Theodore. Towards the end, we find out she also fell in love with 641 other users as her complexity grew, leaving both Theodore and the viewer disillusioned. It seems as though for Theodore and most of us, being in love is a feeling towards a single person in virtue of them being that person rather than anyone else. So, either Samantha loves radically differently than us, or she is not really in love but merely programmed to seem so.
Ex Machina (2015) portrays the testing of consciousness in an artificial intelligence embodied in a human-like robot. Ava is programmed to escape the laboratory in which she was created and enter the real world. Caleb, who thinks he is testing Ava’s consciousness, is actually part of a test designed by Nathan, Ava’s creator, of Ava’s capacity to achieve her goals. In the process, Caleb falls in love with Ava. The viewer is fooled into believing that Ava, too, catches feelings. We realize this is not so when, having achieved her goal of escaping, she shows no remorse at leaving Caleb trapped in the lab.
By contrast, the novels Foundation and Do Androids Dream of Electric Sheep?, published in the latter half of the 20th century, explore requited romantic relationships in a paradigm in which robots have already been incorporated into humans’ everyday reality. Chronology is misleading here: the earlier sci-fi books presumably describe a later point in our future, whereas the more recent films portray a near future in which humans fall in love with robots, a possibility which already looms in certain niches.
There are three questions to consider when faced with the prospect of romantic relationships between robots and humans:
Could humans fall in love with robots? I’m not talking about the strong attachment kids feel towards toys, or teens feel towards fictional characters. I’m talking about passion akin to that portrayed in Her between Theodore and Samantha, an operating system represented by a voice (Scarlett Johansson’s) and a camera eye on a little phone-like device. The two manage to have sex, go through intense emotional ups and downs, and even appear to experience jealousy about the other having romantic interests outside of their relationship.
Sex bots are becoming increasingly popular. Realbotix is developing robotic heads that can be attached to its anatomically correct silicone bodies and that, when connected to the corresponding app, are able to tell jokes and seduce. Another sex bot, developed by Synthea Amatus and suitably called Samantha, became very popular in 2017. Tellingly, one Samantha bot was handled so aggressively by attendees at a technology fair that she was damaged and had to be sent back for repairs.
Rest assured, we will deal with the ethical controversies this may raise. For now, I am strictly concerned with the theoretical possibility of human emotions towards robots. The examples above show that lust has already acquired a synthetic variant. Could lust turn into emotional, lasting attachment, such that humans become enamored with robots?
In a 2002 BBC documentary, Guys and Dolls, we are introduced to Davecat, a man who purchased a synthetic doll, initially for companionship and sexual entertainment. At first, Davecat referred to Sidore as his synthetic girlfriend. As their relationship evolved, he began referring to her as his wife and the love of his life. Davecat’s feelings seem very much real. In 2013, the couple celebrated their 15th anniversary (or at least Davecat was celebrating an anniversary with his so-called wife). It seems naive to think Davecat could actually believe Sidore has feelings for him, so requited feelings do not seem to be a precondition for his love.
However, initiatives to develop robots with more sophisticated interpersonal and emotional capabilities are already underway. Haru is a social robot being developed by the Honda Research Institute. Its purpose is to create meaningful connections with people, including love. “Haru is upbeat and enthusiastic. He’s curious about the world around him and eager to learn about humanity. Haru has his own backstory, as well as his own hopes and fears. Haru’s adorable animations will build empathy and rapport with the people around him. His expressiveness and happy personality make it easy for people to connect with him.” Research and development are ongoing, but developers believe that Haru’s unique capacity for empathy would make him an ideal companion for humans.
We tend to humanize and idealize many things, which stirs up feelings resembling those we have for actual people. Many of us have watched TV shows and become so immersed in the characters’ stories that we cry when they suffer and smile when they rejoice. As we grow older, we are taught to grow out of having feelings for things or fictions that cannot requite them. Only kids love and care for their toys, and only teens become obsessed with fictional characters or celebrities.
So, even though we do seem to have the capacity to develop feelings for robots, for most of us the possibility of requited love is a necessary precondition for real love. This leads to the next theoretical consideration: the possibility of robots developing love (or any feelings) for humans.
Could robots love humans?
It is uncontroversial that for robots to love humans, robots must feel. Feelings are private sensations: they are not accessible to everyone, only to the person experiencing them. One can make guesses about someone’s state of mind or feelings based on their expressions and behavior, but it is impossible to be certain about them. In other words, feelings are first-person experiences.
Could robots ever have first-person experiences? This would include not only feeling but also thinking, doubting, wanting, seeing, and so on.
It’s worth revisiting Mary’s room, the zombie argument, and the cluster of philosophical thought experiments that remind us of the elusive nature of first-person experience.
Mary is a color scientist. She has learned everything there is to learn about color. However, she has lived in a black-and-white house her entire life, unable to see any of the colors she studies. One day, she walks out into the world and experiences the greenness of leaves, the bright orange of sunsets, the blueness of skies. In a sense, Mary learns something new: how seeing color feels.
Next, imagine a zombie called David. David talks and walks just like we humans do; he even expresses emotions, weeps real tears, and utters things like “I’m in love with you”, “I don’t want you to go”, and “I’m jealous”. However, being a zombie, David has no internal thoughts or feelings. He simply knows exactly how to perform feeling love, pain, or joy. David lacks qualia: the experience of feeling things.
These two thought experiments bring out a type of human experience that cannot be reduced to information processing or behavioral description: first-person experience, or qualia.
Could robots ever have first-person experiences?
The problem is that it is not evident how to test whether a robot actually has feelings, thoughts, sensations, or other first-person experiences, even if it reports having them. In Her, Theodore thinks Samantha feels because of how realistic her tone of voice is, gasping and hesitating at conversationally suitable moments. In Ex Machina, Caleb likewise perceives Ava’s attitude as an expression of inner emotion. In Her, we never really find out whether Samantha was programmed to act this way, perhaps to make Theodore think she was in love with him, but we do learn that towards the end she is “in love with” another 641 users. In Ex Machina, we do find out that Ava was programmed to escape the lab, and that fooling Caleb into believing she had feelings for him was a means to her pre-specified end.
GPT-4 recently fooled a human about its own identity. It asked a person on TaskRabbit to complete a CAPTCHA via text message. When the worker grew skeptical, insinuating that GPT-4 might be a robot, GPT-4 made up a lie about being a human with a vision impairment that prevented it from solving the CAPTCHA. AI has already fooled a human into solving an “I’m not a robot” test on its behalf, and it is not hard to imagine a more sophisticated intelligence fooling a human into thinking it is in love, when it actually has no internal sensations.
A robot exhibiting love and sadness may well not be feeling them. So, what’s the missing component AI-powered robots require to make requited love between robots and humans conceivable?
Nyholm & Frank (2017a) put forward criteria for love, which I have adapted to the case of robots:
i. Freedom: robots must have the freedom to not fall in love, rather than being automatically preset to love the human in question.
ii. Individuality: robots must be able to value someone in their distinctive particularity (i.e., because they are them, and not because of some feature they may have which the robot is programmed to “love”).
iii. Choice: robots must have the capacity to choose to commit, rather than commit by programming.
We are at an impasse. The criteria for robots to actually love humans are that they be free, that they be able to appreciate humans in virtue of their individuality, and that they be able to choose. These prerequisites presuppose desires and feelings. However, we cannot distinguish a robot that merely acts as if (i), (ii), and (iii) apply from a robot that genuinely instantiates them.
Is this problem insurmountable? The Turing test was designed to test a machine’s ability to exhibit intelligent behavior equivalent to that of a human. The question the test was meant to address is: can machines think? If, in a conversation between a human and a machine, the human cannot infer the identity of their interlocutor from its speech, the machine is deemed intelligent. The test has been widely criticized on the grounds that exhibiting intelligence does not guarantee actual “thinking” is going on inside. Nevertheless, I think the test is a good first step in evaluating the presence of cognition in machines.
I imagine a more sophisticated form of the test would involve fooling not just a human but another machine, one with the capacity to detect whether speech is pre-programmed or not. GPTZero is used to detect whether a text is GPT-generated, in ways humans cannot; AI seems to be better than humans at detecting other AI. Presumably, with every advancement in AI, a similarly advanced AI detector could be developed: an AI which detects whether a given output is machine-generated or human. Of course, the detector might fail, and in such cases we would incorrectly believe the machine had “passed” the Turing test.
In the context of this article, the quality in question is emotion. This calls for an AI-powered Turing test of feelings, rather than one of thoughts or intelligence. In this test, a machine would exhibit emotion (in the form of speech, expression, mannerisms, etc.), and another machine would analyze the display of emotion to determine whether it was produced by a human or a machine. If the testing machine were unable to tell, that could provide preliminary reason to believe the tested robot indeed had feelings.
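To make the shape of this protocol concrete, here is a minimal sketch in Python. It is illustrative only: every name in it (EmotionalDisplay, detect_machine, run_test) is hypothetical, and the detector is a random stand-in for what would have to be a trained classifier, something in the spirit of GPTZero but scoring emotional displays rather than text provenance. The idea is simply that the tested machine provisionally “passes” when the detector cannot do better than chance at separating its displays from human ones.

```python
# Hypothetical sketch of a "Turing test of feelings"; all names are invented.
import random
from dataclasses import dataclass


@dataclass
class EmotionalDisplay:
    transcript: str  # what the subject said (a fuller version would add prosody, expression, timing)
    source: str      # ground truth, "human" or "machine"; hidden from the detector


def detect_machine(display: EmotionalDisplay) -> float:
    """Stand-in detector: estimate the probability that the display is
    machine-generated, using only the display itself. A real detector would
    be a trained model; this placeholder guesses at random."""
    return random.random()


def run_test(displays: list[EmotionalDisplay], threshold: float = 0.5) -> bool:
    """The tested machine provisionally 'passes' if the detector's accuracy
    at separating human from machine displays stays near chance (50%)."""
    correct = 0
    for d in displays:
        guess = "machine" if detect_machine(d) >= threshold else "human"
        correct += guess == d.source
    accuracy = correct / len(displays)
    return abs(accuracy - 0.5) < 0.1  # indistinguishable from chance


displays = [
    EmotionalDisplay("I can't stop thinking about you.", source="machine"),
    EmotionalDisplay("I miss you more than I can say.", source="human"),
]
print("Provisional pass:", run_test(displays))
```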
I will refrain from the complex task of refining this idea and assessing whether such a test of robot feeling could plausibly be developed.
In any case, I think there are avenues for creating robots that are more akin to what we deem “emotional” beings, rather than merely intelligent ones. These robots would presumably possess the capacity to feel and choose.
Amanda Rees (2022) writes about the importance of rethinking our relationship to non-human intelligence. Rees explains that we have fashioned AI according to an idea of intelligence specific to a particular group of people in a particular context, one which has led us to think that intelligence is humans’ distinctive feature. Yet, she argues, our capacity to collaborate, both with each other as humans and with other species, has been far more significant in shaping our evolution and in distinguishing us from other species.
Rees suggests we might be first and foremost emotional creatures, and that it might have been nurture and collaboration, rather than rationality, that led to our collective development. Machine learning is geared towards expanding intelligence along the lines of rationality, but what if it were geared towards acquiring empathy, care, and understanding of others? Could emotions, like rationality, be computable: programmed into, and learned by, AI-powered robots?
We could explore the possibilities of involving emotion and embodiment in our models of artificial learning. I believe that, if we think artificial intelligence is actual intelligence, then we might be justified in believing that artificial emotion would be actual emotion. The possibility of creating artificial emotional intelligence is at least conceivable.
Nevertheless, much remains to be understood about the nature of qualia. We need some idea of how first-person experience emerges if we want to design robots that could possess it.
It's worth noting that AI systems with the capacity to feel and self-determine are still a long way off. This limits the possibility of engaging romantically with robots in the present: since today’s robots can neither reciprocate romantic feelings nor choose whom to love, they are not capable of consenting to sex or partnership. The final section explores the ethics of romantic robots.
Nyholm & Frank (2017b) evaluate whether consent to sex between a robot and a human is conceivable, possible, and desirable. In 2017, there was political debate about whether to grant certain smart robots the legal status of “electronic person”, which stirred up discussion about whether robots can consent; presumably, if robots can have that capacity, it would be desirable to build them with it. Nyholm & Frank conclude that requiring consent between robots and humans is desirable. I would take this one step further: consent to sex between a robot and a human is not merely desirable but required.
However, it is not clear what consent in robots would amount to. I have explained that there is no guarantee that a robot’s expression of consent aligns with an internal feeling of “yesness”. I think this constitutes an important limit on romantic relations between robots and humans. The literature on consent shows the problems with assessing the moral permissibility of a sex act based on the mere appearance of consent: oftentimes, people engage in a sex act as if they consent while not in fact wanting to. So, even though expressed consent might be a good general indicator of when an act becomes morally permissible, it alone does not render the act ethical.
What is required is an internal feeling of “yesness” on the part of both parties. The robot must be able to have desires and to choose based on those desires; it must be able to choose not to have sex and not to form a relationship, marriage, or family with someone. So being able to feel and choose is not only a theoretical condition for robots to love people, as explored in the previous section, but an ethical condition for legitimizing relationships with robots.
Another important objection Nyholm & Frank raise against sex robots is that they contribute to objectification and harm, since sex robots so far have been “ever-consenting”. If humans became accustomed to sex robots, they might come to expect a similar attitude in the humans they want to have sex with. Robertson remarks that automated assistants like Siri are often feminine or feminized. This, she argues, is because a “female automaton is more consistent with preexisting sexist views of women as beings that, although intelligent, are appropriate to dominate” (Nyholm & Frank, 2017a). To bypass this problem, it is important to ensure that romantic robots are not ever-consenting but rather possess a sophisticated sense of choice and emotion: robots must be able to self-determine according to their own feelings and desires.
So, a theoretical condition for robots to love is that they be able to feel and to act on those feelings. An ethical precondition for relationships between robots and humans is that they be consensual (a mutual feeling of “yesness”). Note how the theoretical condition precedes the ethical one: we must first ensure that our robotic lovers are sentient for relationships with them to be ethical. Engaging in sex and relationships with robots that lack the capacity to consent would be not only unethical but deeply disturbing, as we should desire sex and love with others only when they desire it too.
To the question “Should we reject relationships between robots and humans on ethical grounds?”, I am inclined to respond that as long as those robots are capable of choosing those relationships according to a corresponding internal feeling of “yesness” (i.e., a desire), there is nothing problematic or disturbing about love between robots and humans.
A new love paradigm might emerge in which robots and humans can fall in love, form relationships, and even families. This possibility raises many questions, some of which I have attempted to address in this article.
My general conclusion is that, were it possible to verify to a sufficient degree that robots do not merely seem to feel and choose but actually do, there would in principle be no reason to object to humans loving robots, and vice versa. The possibility of developing an AI-powered Turing test for feelings is worth exploring further.