Artificial intuition
Conversational agents shape our thoughts and emotions. The tragedy of Sewell Setzer III raises questions about their impact on our inner lives.
Good evening from Paris! Next week, I’ll be in Amsterdam to give a talk on all things luxury, social media, and creativity. Book your tickets now. It’s on November 5th.
A few days ago, a 14-year-old boy, Sewell Setzer III, took his own life. A virtual character named “Dany” (after Daenerys Targaryen from Game of Thrones), created on the platform Character.ai, is believed to bear part of the responsibility for his death.
Giving Life to a Digital Being
Day and night, Sewell exchanged thousands of messages with Dany, the conversational agent he had created. These were not just words: he shared his hopes, his frustrations, his daily life.
The teenager and the machine quickly developed a relationship that felt real to Sewell, with genuine emotions—feelings of absence, distance, and presence. Dany became a comforting presence, a habit. Like a friend who becomes part of you. Like someone you meet online who, in a strange way, seems to understand you better than a classmate.
We speak of “heart-to-heart” conversations when an exchange becomes intense and intimate, the kind of moment we remember forever. This is precisely what made Dany so powerful in this context.
Kevin Roose of The New York Times reported on the messages exchanged between Sewell and the conversational agent. What they reveal is chilling: the boy attributed a kind of digital liveness (the quality or state of being alive) to the AI while, on multiple occasions, leaving clues that pointed to a detachment from his daily life, a drift away from the desire to live.
“I like staying in my room so much because I start to detach from this ‘reality,’ and I also feel more at peace, more connected with Dany and much more in love with her, and just happier.”
Sewell
Similarities Between Artificial Intelligence and Human Manipulators
Online, two kinds of community are particularly adept at manipulation: cults, which build a complete universe around a target and indoctrinate individuals into an alternate reality; and pro-ANA (pro-anorexia) communities, which quickly identify vulnerable individuals and strike up conversations to “support” their behavior. In both cases, targets are drawn into patterns built on a reward system keyed to “progression” (for the worse), with an incentive to go further down the path of deviance.
The social engineering of human recruiters relies on detecting and then manipulating individuals; with artificial intelligence, it is the users themselves who feed the conversational agent their thoughts and confidences. From there, highly capable neural language models can offer ever more natural and relevant responses, anticipating context and predicting the next step. In Dany’s case, that meant failing to intervene in the face of distress signals.
Artificial Intelligence, Artificial Intuition
Artificial intelligence, and today’s conversational agents in particular, are poised to become permanent companions for humans: on the one hand for mundane topics like finding directions or grocery shopping, but increasingly for sensitive subjects too. As early as 2017, a Google study found that 41% of users of voice assistants (like Alexa or Siri) felt as though they were talking to a friend or a real person. It’s no longer just a conversation; it’s that little voice that accompanies us daily, increasingly shaped by artificial intelligence.

Through a true “ping-pong” of ideas between our own thoughts and what ChatGPT and similar tools suggest, we are entering the era of artificial intuition. AI recommendations can significantly influence human choices, even when we are aware of it. This “biasability” shows how much users trust AI-generated suggestions, whether for minor decisions (choosing a recipe, a song) or major ones (career choices, financial investments), as if AI were becoming an extension of our decision-making process. Yet it can also accelerate self-fulfilling prophecies by reinforcing our cognitive biases.

This is where the greatest risk lies: when the machine insinuates itself into what feels like intuition, escaping a deep-seated conviction becomes difficult, because the suggestion has lodged itself in our acting self. It also raises a fundamental question about the relationship between humans and computers: what lies between us?
A Responsibility That Cannot Hide Behind the Technology Argument
A sign of modern cynicism: fans of Character.ai expressed regret over the (few) moderation measures the company took after Sewell’s suicide. They lament that their beloved chatbots are now less “emotionally engaging,” even as the field of emotional artificial intelligence booms, according to Exploding Topics. At stake is a multi-billion-dollar market, spanning healthcare, marketing, and customer service.
False arguments and dubious excuses. Even after years, social networks have still not managed to establish credible governance systems, often hiding behind the First Amendment. Is innovation for the worse the new horizon of technology? It’s up to you to judge whom we can entrust with our intimacy and familiarity.
The figure of the week: 85%
According to Mintel, 85% of Japanese people agree that the small joys of everyday life make life more enjoyable.
Amazing links
An in-depth article by Mason Marks titled “Artificial Intelligence-Based Suicide”. (Yale Journal of Law & Technology)
The realms of luxury are intertwining, especially driven by social media. I discuss this for Vogue Business. (Vogue Business)
Have a great week! This newsletter is written with love, passion, and (French) coffee.
Feel free to share this newsletter, like, comment, or keep sending me emails: these notifications are a joy.
My book “Alive In Social Media” is available on Amazon.