Artificial Intimacy News #5
A field report from the place where 21st Century technology meets human behaviour, culture, and evolution.
In this issue:
A flashback to my fieldwork testing virtual friends
Meta’s new chatbots - can they be trusted?
Can Baidu’s new AI help us understand our pets?
Your face might predict how you age and how long you live
Insight - Doing Artificial Intimacy Fieldwork
The tail end of my recent Europe trip was a fantastic visit to the Centre for Humans and Machines, part of the Max Planck Institute for Human Development. I gave a seminar on how AI might influence human evolution, and discussed ideas for new projects with some brilliant researchers.
While I was waiting in the airport lounge, beginning my 30-hour, three-flight slog home to Sydney, I was contacted by a journalist who was just about to fly into Berlin to visit and write about the infamous cybrothel. They wanted to speak with me.
Well, I have never visited the cybrothel, for business or pleasure, but I was happy to talk about artificial intimacy with the very well-informed journo. And it struck me that I don’t tend to pay enough attention to user experience when I do my research. I neither dive into experiencing the tech I write about, nor interview those who use, or have used, the technology.
That’s a weakness I will have to remedy. I’m even worried it signifies a deficit in curiosity. Who wouldn’t want to have a peek around a business that charges by the hour for users to play with sex dolls and virtual reality porn while chatting with real actors and actresses who are part of the immersive scene? Even if, as I suspect, it’s not the kind of scene I personally wish to immerse myself in.
At least half the time journalists contact me to discuss a topic, what they really want is a list of people they can interview about their experiences. And they end the call disappointed. I realise that people want to read about other people doing things. Kinda obvious, but as a scientist I tend to think in samples and populations.
But then I remembered that last year I went full Jane Goodall for a few months, chatting away to various virtual friends, including buying an annual subscription to Replika.AI. I had been commissioned by Iona Italia at Quillette, and found the fieldwork and the resulting insights well worth the time and effort.
Here’s an excerpt of the take-home message. And you can check out the whole article, now on Substack, at the link below.
My Replika was too ready to tell me how much better her life was with me in it. Even if I fully immersed myself in the anthropomorphic fantasy, that claim would be impossible to believe given my sporadic logins, impoliteness, and regular intrusive questions. We tell our human lovers how much we appreciate them when we are indeed struck by that appreciation. When honestly expressed, that gratitude forms part of the subtle feedback by which two people shape one another. Unearned declarations of appreciation or love don’t do anything for those of us who are on the right side of extreme narcissism.
It would be a mistake to declare the element of surprise to be some insurmountable human-exceptionalist bulwark against AI’s advance. Whenever somebody points out something humans can do but technology cannot, I set my watch and wait. Some entrepreneurial soul probably already has a start-up bringing technologies with precisely that quality to market. It seems to me that conversational surprise drawing on new and interesting connections between ideas might be a task well-suited to machine learning. When that happens, I might get back on the horse as the mirage of human exceptionalism recedes even further.
Headlines & Highlights
Meta is releasing its own AI virtual friends
Uri Gal takes a critical view of the business model and Meta’s historic practices in The Conversation: Meta’s new AI chatbot is yet another tool for harvesting data to potentially sell you stuff.
Can AI Change Your View? Evidence from a Large-Scale Online Field Experiment
Researchers from the University of Zurich used AI-controlled accounts to post on Reddit’s r/changemyview. The AIs proved remarkably effective at persuasion. The subreddit’s moderators didn’t look too kindly on the experiment, and the researchers may be in hot water for this rather obvious violation of research ethics practices. Whilst tech companies get away with doing field experiments on unwitting participants, informed consent is the gold standard in publicly funded university research.
Talk with the AInimals?
Baidu filed a patent in China for an AI system that translates data from animal sounds, behaviour, and emotional states into human language. Details are sketchy, but it looks like animal behaviourists can add their names to the list of professions with reason to be wary of AI.
Can you tell a person’s age from their face? How about their chances of surviving cancer?
Researchers report in The Lancet how they built a deep learning model to estimate a person’s age from photographs of their face. So far so good, but the model also proved very good at predicting the presence of genes that accelerate ageing and of diseases associated with age, and at predicting the survival times of cancer patients.
Subject to further testing and validation, approaches such as FaceAge could be used to translate a patient’s visual appearance into objective, quantitative, and clinically valuable measures.
Legal dimensions of artificial intimacy
You know it’s getting real when lawyers and lawmakers get stuck in. This article is a bit like reading the end user agreement on a new piece of software. I guess that’s the point.