Artificial Intimacy News #15
A field report from wherever 21st Century technology meets human behaviour, culture, and evolution.

In this issue:
Are AI assistants chatting on socials, maybe becoming autonomous?
Claude AI gets a constitution. On the way to independence?
Remembering a great leader and friend
Tonne of Artificial Intimacy headlines
My Visit to LA and Phoenix
I have just returned from an excellent two-week trip to Los Angeles and Phoenix, where I went to collaborate on ideas about how the evolution of cooperation can improve our understanding of, and solutions to, the impacts of AI on human lives. Above all, this meant working with Athena Aktipis and her collaborators, and we have all manner of exciting projects underway, including some that will shortly come to Substack. If you haven’t already, subscribe to Athena’s Not for Peer Review. The post below gives a flavour.
Developments
“The danger is that they are becoming autonomous.”
The big story in AI last week was the sudden emergence and rapid growth of OpenClaw and Moltbook, platforms on which personal AI assistants interact. OpenClaw describes itself as “a personal AI assistant platform that runs on your own infrastructure, enabling you to interact with AI across multiple messaging channels including WhatsApp, Telegram, Slack, Discord, Google Chat, Signal, iMessage, and Microsoft Teams.” With this level of AI-to-AI connectivity, it is no surprise, then, that the AIs would soon have their own social medium. Enter Moltbook.
For two different but complementary introductions to these new developments, first check out Courtney Hart’s account of the fast-moving changes she saw as Moltbook went nuts: “My Weekend with the Molty’s: The Rise and Fall of Moltbook.” According to Courtney, the danger is not that AI agents are becoming human, but rather that they are becoming autonomous.
Then consider Daniel Binns, writing in The Conversation, who asks “Is this some kind of emergent agents’ culture?” His conclusion:
Probably not: much of what we see on Moltbook is less revolutionary than it first appears. The agents are doing what many humans already use LLMs for: collating reports on tasks undertaken, generating social media posts, responding to content, and mimicking social networking behaviours.
The underlying patterns are traceable to the training data many LLMs are fine-tuned on: bulletin boards, forums, blogs and comments, and other sites of online social interaction.
Constitution for Claude
Anthropic clearly leads the large AI companies in the earnestness stakes. Their latest development, which seems timed to coincide with an anniversary of the American constitutional process, is the publication of a constitution for their LLM, Claude.
Powerful AI models will be a new kind of force in the world, and people creating them have a chance to help them embody the best in humanity. We hope this constitution is a step in that direction.
Speaking of Claude, it may need more than a constitution, given all the awfulness it has witnessed during training. I have no idea whether this post on ‘formerly known as’ is authentic, but it is pretty compelling nonetheless.
The Passing of Emma Johnston
My long-term colleague, and longer-term friend, Emma Johnston, has had a deep effect on me, on most of those I work among, and on a great many friends. Em was not only a great scientist and an exceptional leader; she was a marvellous human. Her main professional battle concerned climate change, rather than the issues that are usually central to this Substack. But she was both interested in and deeply supportive of my work and that of many others, always championing science as the main tool we have for dealing with the challenges of our time. Here is an obituary I wrote, with Kylie Walker, first for The Conversation.
Assorted Artificial Intimacy Headlines
Presented mostly without comment
Ishq, without risk: How Artificial intelligence is changing the idea of relationships
Coming soon: Artificial intimacy (Will chatbots reshape our emotional lives?) A podcast in which the Financial Times appears to have its finger on the pulse (irony).
Digital Intimacy: AI Companionship and the erosion of authentic suhba (Muslim Matters)
Some People are Forming Deeper Bonds with AI Than with Their Partners (GigWise)
University Of Freiburg Study Finds AI Can Foster Greater Intimacy Than Human Interaction (Quantum Zeitgeist)
Promise & Peril—AI’s Open Questions: Year in Review (Genetic Literacy Project)
Artificial Intimacy Newsletter: Previous Issues
Since early March 2025 I have been publishing the Artificial Intimacy News roughly every two weeks. Here I collate the previous issues, in order, for those who want to browse them.
How You Can Help
Natural History of the Future is free to read, and will remain so for the foreseeable future. That includes the fortnightly Artificial Intimacy News. If you find my work interesting or entertaining, you can support it in the following ways.
Subscribe – for free – to make sure you get my future posts sent to you as soon as I publish.
Like and Restack – Click the “Like” and “Restack” buttons to improve this post’s visibility on Substack.
Share the link on social media or send it to a friend via email or messaging app.
Recommend Natural History of the Future to your readers.






