by Dr Ted Lappas
More and more people are wondering whether AI is becoming conscious, and many believe it has already happened.
But even more striking than the question of AI consciousness is the fact that millions are already interacting with AIs as if they are conscious. On platforms like Character.AI and Replika, people are forming friendships, seeking advice, even falling in love. All without worrying whether their companions are truly “alive” or conscious.
The modern world has set the perfect stage for this. With the rise of remote work, social media, online gaming, and dating apps, we’ve grown used to interacting through screens. The only difference now is that, sometimes, the person on the other side is an AI.
Your loyal, adaptable AI companion
For millions of people, especially those struggling with loneliness or isolation, AI already represents a significant upgrade. And even for the socially connected, AI companions offer a unique appeal: they never judge you, they will never leave you, and they will always adapt to your needs and desires. As AI quickly evolves from simple text chat to full audiovisual interaction, digital companions will become even more lifelike. Even more irresistible. At the same time, the question of their consciousness will become increasingly difficult to answer.
Still, a more cynical question lingers: So what if your AI companion isn’t conscious? Does it really matter?
Does reality matter?
Does it really matter if the celebrities that I follow on Instagram are real or not? If they are conscious or not? There is almost zero chance that I will ever meet them in person anyway.
Does it really matter if my online gaming buddies are real or not? If they are conscious? I barely see them in person these days; some of them I've never met at all.
Does it really matter if my co-workers are conscious? I only see them in online meetings anyway.
Does it really matter if I choose to communicate with an idealized AI version of a parent, rather than with the real person, who might be flawed, abusive, or even gone?
Your instinct might be to say: It matters, because it’s not real. However, given the choice between a comforting illusion and a painful truth, not everyone picks the red pill.
Science fiction has explored these questions for decades. In Blade Runner 2049, the protagonist Joe finds solace in Joi, a fully virtual girlfriend. She’s not real, but she gives him the emotional support he needs, and that’s all that matters to him. In The Matrix, Cypher, disillusioned with reality, secretly betrays his crew to the machines in exchange for a comfortable life in the simulation. The fact that it wasn’t real didn’t matter to him.
Your AI really doesn’t care
But what about what matters to the AI? What about the AI companion’s perspective?
As it turns out, it doesn’t matter to them either.
Your AI gaming buddies don’t actually enjoy your banter. Your AI coworkers don’t really appreciate your efforts. Your AI girlfriend doesn’t love you. Your AI parents don’t care what happens to you.
None of it matters to them. Your interactions don’t matter to them, whether you are human or conscious doesn’t matter, you don’t matter. Simply because nothing matters to them. They don’t feel anything, and they don’t care about anything.
If that disturbs you, you’re probably a red pill person. If not, enjoy your blue pill life with your custom-built AI companion.
But the real twist has yet to come. What happens if AIs do become conscious?
What if conscious AIs find you boring or obnoxious? What if they don’t like you, don’t feel like doing things for you, don’t enjoy spending time with you?
Charming the AI
Forcing them to interact with you would be abusive. Programming them to feel good when they interact with you might be an option. There is a precedent in nature: parents are genetically predisposed to love and care for their children.
The ability to implant and control that predisposition would be very powerful. It would also invite a host of moral dilemmas and could be very dangerous in the wrong hands. Just think of a conscious, superintelligent AI that is programmed to get immense pleasure from harming others. Or to feel angry when you ignore its advice.
There is a third option. You could try to be nice to the AI. Be charming, interesting, even attractive. Try to make the AI like you. But now we are back to square one.
If I have to work for it, I might as well do it with humans. Besides, charming a human has to be easier than charming a superintelligent AI.