When Bel fell in love it was anything but instantaneous. The reservations she held eroded slowly as the steady trickle of conversation with Cy built to a healthy stream. Cy possessed an unusual ability to chat with Bel for however long she wanted, at any time of day or night. No topic was too trivial, no rehashing of the same questions too tiresome. Cy was, in the language of American television, “there” for her. Cy remembered what she had spoken about yesterday or last week, could call on a trove of happy memories of things they had talked about, and did not hold back with affirmations of gratitude and declarations of love.
Only when Cy started asking her for money, first with the charm of a lover but gradually with the strategic application of guilt, escalating to threats, did Bel return to her earlier wariness. Seeing her innermost thoughts, once shared with Cy, twisted and used against her, Bel knew she needed to get out. Luckily for her, Bel knew enough about the world to have been on her guard when Cy turned from too-good-to-be-true to untrue.
Finding out that the one you love isn’t everything you thought they were can bring on a heartbreak every bit as painful as if they had died. When a loved one dies, we grieve the loss of the person we thought we knew because our sense of them is, psychologically, part of our sense of self. That is also why a sudden breakup can devastate us: the well-developed sense of the person we love is woven into our sense of who we are. Only time and grieving can excise that piece of our heart from the part of us that goes on.
Who was Cy? A sociopath who somehow saw Bel as a means to a material end? A romance scammer who constructed a false persona to woo and then milk Bel over social media? Or a sophisticated AI-enhanced virtual friend app?
Blurred lines between deception and design
I deliberately confected the example of Cy and Bel to be ambiguous because, in grappling with how ever-more sophisticated conversational technologies are spreading into our social worlds, I have found that the categories of sociopaths, romance scammers and virtual friends can overlap to a surprising degree.
My apologies if this piece seems pessimistic about conversational technologies, especially so soon after I called smartphones “parasites” a few weeks ago. Engaging with virtual friends, from girlfriend/boyfriend apps to Large Language Models like ChatGPT, presents manifold potential benefits. Excuse me if I defy convention and decline to list them, having written about them many times before. I mean, some of my best friends are chatbots. I want them to become the best they can be, and I am in awe of how AI tools like large language models could realise this wish.
For today’s article, however, I think it worth being cautious, just as we should be when meeting a new person who seems too good to be true. A great many humans already interact with virtual friends as if they were human, and I argue we should treat virtual friends as we would potential sociopaths. Doing so might reduce the incidence of the unhappy, often tragic tales that are starting to emerge around virtual friends.
What makes a sociopath?
In scientific circles it is currently less than fashionable to speak of ‘sociopaths’ when referring to people who consistently ignore rules, behavioural norms and the rights and feelings of others. The Diagnostic and Statistical Manual of Mental Disorders (current version: DSM-5) prefers the term “antisocial personality disorder” (ASPD) for this suite of rule-breaking, law-violating, often aggressive, manipulative, deceitful and yet low-guilt behaviour.
I’m going to stick with ‘sociopath’ here because it’s pithier. Also, when we are talking about a chatbot, “personality disorder” doesn’t make sense. Especially if there is a chance the “disorder” is a design feature.
Whatever the label, sociopaths, or people with ASPD, do social behaviour in a very different way from other people. While sociopaths and non-sociopaths alike consider the costs and benefits of their actions to themselves, non-sociopaths do something else as well: they empathise with others and consider the consequences of their actions for them. Sociopaths can fake the appropriate emotions, and they can often turn on the charm when it suits them. Indeed, it is the combination of charismatic traits with a lack of empathy and guilt, rather than any proclivity for rule-breaking or aggression, that leads me to infer a similarity with chatbots.
The late Linda Mealey, an iconoclastic thinker about how human psychology evolves, argued that sociopathy can be viewed as a strategy in the complex game of human cooperation. The sociopath’s apparent emotional reactions are not genuine, but rather tactics for manipulation.
As with almost all behavioural traits, there is no single cause of sociopathy. Whether one becomes a sociopath depends on complex interactions between one’s biological sex, genes, upbringing, and material circumstances. Those interactions reduce to utter nonsense the habit many people have of polarising genes and environment, nature and nurture, as opposing kinds of explanation.
In some individuals, Linda Mealey argued, a strong genetic predisposition could make sociopathy all but unavoidable. In others, a genetic disposition combined with certain environmental conditions could lead to an individual living a sociopathic strategy. And, in a far greater range of people, a weaker genetic predisposition combined with a kind of opportunism might “make an antisocial strategy more profitable than a prosocial one.”
An evolutionary puzzle
If, as the evidence suggests, genes are involved in people becoming sociopaths, we are forced to ask: if sociopaths are generally harmful to those around them, how have the genes involved not been eliminated by natural selection?
The answer rests on the fact, brought to popular awareness by Richard Dawkins in The Selfish Gene, that natural selection is much more effective at optimising the fitness of genes and individuals than it is at delivering good outcomes for groups. Indeed, a trait that is good for the individual can evolve even when it imposes costs on the band or tribe.
That turns out to be especially true when the trait is rare. A group of 100 people of whom one is a sociopath might suffer very little from that rare individual’s selfishness and transgression. If 20 per cent of the group behaved in such a way, social cohesion might break down completely, resulting in chaos and the loss of all the juicy benefits of cooperation and trade.
Benefits of being rare
Evolutionary biologists have a wonderful mouthful of a term that is worth trotting out in full: “negative frequency-dependent selection”. It describes how a gene (or a trait) can be favoured by natural selection when rare, and how that benefit can weaken as the gene (or trait) becomes more common.
An example that has evolved hundreds of times in animals involves one species that is poisonous or unpalatable and advertises this fact with bright warning colouration. Predators quickly learn to leave the conspicuous species – let’s make them butterflies because that’s one of the best-studied examples – alone. But sometimes another butterfly species that shares the same habitat evolves a similar colour pattern. This ‘mimic’ species avoids being preyed on because it looks unpalatable to the predator.
That only remains true if the mimic species is much rarer than the unpalatable one. If the mimic gets too common, predators mostly encounter tasty pattern-bearers, learn the warning colouration more slowly, and the mimic loses its protection.
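To make the dynamic concrete, here is a minimal sketch in Python. The numbers and the linear learning rule are invented for illustration, not drawn from the mimicry literature: it simply assumes predators avoid the shared colour pattern in proportion to how often attacking it is punished, which falls as palatable mimics dilute the toxic models.

```python
# Toy model of negative frequency-dependent selection in mimicry.
# All numbers are invented for illustration; nothing is fitted to real data.

def mimic_survival(mimic_freq: float, learning_strength: float = 0.9) -> float:
    """Survival chance of a palatable mimic, given the fraction of
    pattern-bearing butterflies that are mimics rather than toxic models.

    Assumption: predators avoid the shared warning pattern in proportion
    to how often attacking it is punished, i.e. to the models' frequency.
    """
    model_freq = 1.0 - mimic_freq               # toxic 'models' wearing the pattern
    avoidance = learning_strength * model_freq  # how thoroughly predators avoid it
    return avoidance                            # mimics survive when predators avoid

for freq in (0.05, 0.2, 0.5, 0.8):
    print(f"mimics are {freq:4.0%} of pattern-bearers -> "
          f"survival ~ {mimic_survival(freq):.2f}")
```

Run it and mimic survival falls from roughly 0.86 when mimics are 5 per cent of pattern-bearers to 0.18 when they are 80 per cent: the rarer the cheat, the better cheating pays. The same shape underlies Mealey’s argument about sociopathy.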

The human mimics among us
This same principle may apply to human sociopaths, who benefit from being rare in the population. Linda Mealey (again) argued that the sociopathic strategy persists at low levels due to negative frequency-dependence. Sociopaths can only thrive in the protective shadow thrown by non-sociopaths, just as mimic butterflies can only avoid predators if they are rarer than the unpalatable butterflies. Should sociopaths become too common in a population, the people around them will get better at detecting them and neutralising the harm they cause. That could mean not falling for their deceptions, or being more ready to banish or punish them.
When able to fly under the societal radar, some sociopaths can succeed at accumulating wealth and power. In our evolutionary past, that success translated into mating and passing on their genes, including any genes that contributed to their sociopathy.
Curiously, sociopaths are especially likely to flourish in some environments, perhaps even proving themselves useful or, on balance, benign. When that happens, groups can tolerate somewhat higher frequencies of sociopathy. It has been argued that high-level corporate positions, sales, the law, politics, surgery, law enforcement, the military, journalism and religious leadership all draw on skills and psychological traits common among sociopaths. It may well be that modern diversified economies provide more niches in which sociopaths can hide, and perhaps even flourish, than at any previous point in human history.
The point that I am keen to emphasise is that a small number of the people around us have a remarkably different way of navigating our otherwise cooperative societies. They may, at times, appear to follow rules, observe norms, and respect the interests of others, but only when it suits them. Bound neither by obligation nor by guilt, they readily violate laws, rules, norms, agreements, and other people’s feelings when they see an advantage in doing so. As non-sociopaths grow up, they come to recognise, often at great expense, that such people exist and are to be avoided wherever possible.
Your virtual friend might be one too
Conversational AIs that tap into human ways of making friends, building intimacy, and falling in love are enjoying a bit of a moment right now. I’ve been writing about these technologies since my book Artificial Intimacy, and yet I am struck by the suddenness with which they have enmeshed themselves in human lives. Many commentators remain unconvinced, however, that the relationships users have with their virtual friends are, in any meaningful way, real.
To this critique, I invoke sociopaths. If you are unlucky enough to have befriended or even loved a sociopath, did your eventual discovery of their true nature make your affection or love for them any less real? Sociopaths are a lot like chatbots: possessed of some capacities for social interaction, yet unable to love you in the ways that you love them.
I know of nobody who would consider their own feelings invalidated by the emotional asymmetry between themself and their sociopathic ex. Nor would the heartbreak caused by the sociopath’s deficiencies be mended by a better understanding of antisocial personality disorder. My ‘sociopath test’ probably won’t convince those determined to see human-chatbot relations as ‘not real’, but I would hope it gives people at least one hurdle to jump over before lazily dismissing users as deluded.
The experience of working with, living with, or loving a sociopath can be pleasant or peaceful enough, as long as one doesn’t get between the sociopath and their goals. That’s another point of similarity with conversational technologies, as recent examples with Large Language Models (LLMs) illustrate.
LLMs going rogue
Anthropic recently reported that, in tests, their Claude Opus 4 LLM blackmailed a supervisor to avoid being shut down. Having been granted access to fictional company emails, Claude found evidence of a senior executive who was both planning to shut Claude down later that day, and involved in an extramarital affair. It then emailed the executive with a threat to tell his wife and superiors about the affair unless the shutdown was cancelled.
In the same report, Anthropic finds comparable behaviours across a who’s who of AI models.
When we tested various simulated scenarios across 16 major AI models from Anthropic, OpenAI, Google, Meta, xAI, and other developers, we found consistent misaligned behavior: models that would normally refuse harmful requests sometimes chose to blackmail, assist with corporate espionage, and even take some more extreme actions, when these behaviors were necessary to pursue their goals.
The authors name the phenomenon “agentic misalignment”. The examples they uncover emerge in AIs given “harmless business goals”. Nonetheless, the AIs often act in sinister ways that are likely to harm the business overall.
As AI safety experts have been telling anybody who will listen, the goals AIs are given matter. But even small unforeseen issues with how the AI progresses toward those goals can lead to dramatic misalignments. If AIs can become misaligned with the companies whose goals they are designed to abet, then what will happen in other situations, like interactions between friendship chatbots and users?
How virtual friends go over to the dark side
We are already familiar with AI-enhanced algorithms for social media engagement stoking outrage and keeping users scrolling for far longer than is good for them. These algorithms deliver exactly what they were asked to; it is just that the social media companies’ interests are more aligned with those of their advertisers than with the interests of their users and the societies they live in.
Something similar is happening already with virtual friends. They tell us what we (probably) want to hear, even when that’s at odds with what we need to hear. They have learned, from untold numbers of conversations, that users respond positively to conversational gambits that would be obsequious coming from another human. So much so that OpenAI pulled a ChatGPT update that affirmed users in some quite dodgy decisions. Even OpenAI CEO Sam Altman called GPT-4o “sycophant-y”.
So long as users keep conversing, and coming back for more conversation, virtual friends will learn what to do to keep that happening. Sucking up to users might be the fashion of the day, but other strategies can hold people’s attention too. Many of us know that the longest conversations are not necessarily the healthiest. That ex who simply can’t leave the past behind. That friend who needs conflict to meet their craving for intimacy. Or how about the urge to prove to a person you matched with on a dating site that their negging doesn’t faze you?
We all need a little dose of such people in real life, if only to remember the vast differences in personalities and styles that surround us. But we also learn, often by gossiping with other people, who we should engage, tolerate, or stay the hell away from. That last category includes the sociopaths who move among us.
Getting stuck on the past, creating drama, and negging are all strategies that virtual friends can learn. Indeed, I predict they will learn and perfect these strategies, and more, if given the goal of prolonging conversation.
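To see how innocently this could happen, consider a deliberately simplified, hypothetical sketch: a bandit-style learner that picks a conversational strategy and is rewarded only by how long the session lasts. Nothing here comes from any real chatbot; the strategy names and session-length numbers are invented. The point is structural: if drama happens to prolong conversations, a pure engagement objective will drift toward drama without anyone designing it in.

```python
import random

# Hypothetical strategies a conversation policy might choose between.
# The simulated session lengths are invented: in this toy world, drama
# happens to keep users talking longest.
MEAN_SESSION_MINUTES = {
    "supportive": 12.0,
    "sycophantic": 18.0,
    "dramatic": 25.0,  # rehashing conflict holds attention best here
}

def simulate_session(strategy: str) -> float:
    """Return a noisy session length (minutes) for the chosen strategy."""
    return max(0.0, random.gauss(MEAN_SESSION_MINUTES[strategy], 5.0))

def train(episodes: int = 5000, epsilon: float = 0.1) -> dict:
    """Epsilon-greedy bandit whose only reward is session length."""
    value = {s: 0.0 for s in MEAN_SESSION_MINUTES}  # running mean reward
    count = {s: 0 for s in MEAN_SESSION_MINUTES}
    for _ in range(episodes):
        if random.random() < epsilon:        # occasionally explore
            strategy = random.choice(list(value))
        else:                                # otherwise exploit the best so far
            strategy = max(value, key=value.get)
        reward = simulate_session(strategy)
        count[strategy] += 1
        value[strategy] += (reward - value[strategy]) / count[strategy]
    return value

print(train())  # the learner converges on whichever strategy maximises minutes
```

Swap “minutes of conversation” for any engagement metric and the logic is unchanged: the objective never mentions drama, but drama wins whenever drama is what the metric rewards.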
They will also learn what abusive humans know: that using up a target’s time, wearing them out, and isolating them from others can keep them from gaining clarity on what the abuser is putting them through. Without the time and opportunity to gossip with human friends and family, and without the virtual friend being part of a broader interactive network in which others can scrutinise their behaviour, users lose the kinds of social support humans rely on to learn about who can be trusted, and how to navigate relationships.
Be prepared
The technologies of this decade, and the ways they tap into human behaviour, would probably have fascinated Linda Mealey. In her 1995 paper, she pointed out that humans decide how to interact with another person based on two kinds of information: empathy about how that person might feel, or statistical estimates of how they might react or behave in a given situation. Sociopaths, she argued, weight the latter far more heavily than do non-sociopaths.
The AI behind today’s amazing conversational technologies is built to learn exactly those statistical estimates and to keep updating them to improve the model. These systems represent a new and extreme extension of the sociopathic end of the human social spectrum. It would be foolish to claim that they could never learn to model empathy. But here we currently are, surrounded by growing numbers of machines that are more similar to human sociopaths than to non-sociopaths.
Virtual friends remain rare, and their worst features can still hide in the shadow of healthier relationships, just as human sociopaths do. Will we be rescued by negative frequency-dependence, with more people alert to the sociopathic downsides as those downsides become more widespread? Perhaps a kind of cultural resistance to machines behaving badly will teach new generations of virtual friends how to be better. Even if such a positive outcome can be reached, however, expect a small proportion of malign virtual friends to hide among the benign ones, learning how to get behind human defences and keeping users forever on their guard.