Can consciousness make AI safer? Daniel explores the pros and cons of zombie vs aware superintelligence

16.12.25

6 mins read

Thomas Macauley, in AGI Ethics News, 14 December 2025

For decades, science fiction has taught us to fear conscious AI, portraying machines that become unpredictable, hostile, or uncontrollable. But some researchers argue the opposite. They believe consciousness could actually improve the safety of AI systems.

Among the backers is the California Institute for Machine Consciousness (CIMC), a research organisation founded by German cognitive scientist Joscha Bach. “Attempting to control highly advanced agentic systems far more powerful than ourselves is unlikely to succeed,” the CIMC declares on its website. “Our only viable path may be to create AIs that are conscious, enabling them to understand and share common ground with us.”

A prominent supporter of this view is Daniel Hulme, a British AI researcher and entrepreneur. After selling his enterprise AI company, Satalia, to advertising giant WPP for a reported $100 million, Hulme founded Conscium, a commercial research lab investigating artificial consciousness. He believes consciousness can play a vital role in AI safety — both for humans and machines.

“The idea is that only if it is conscious itself can a superintelligence truly understand what we mean when we say that we are conscious, and agree that our consciousness matters,” Hulme tells AGI Ethics News. “Furthermore, a conscious superintelligence may possess empathy, and may care about the welfare of other conscious entities.”

Once it attains consciousness, Hulme suggests, an AI could genuinely experience existence, allowing it to understand and respond to human values, emotions, and needs. The result would be not only better alignment but also better outputs: instinctual moral reasoning, more nuanced decisions, and surer navigation of complex environments.

The first step towards that vision is defining consciousness. Only by pinning down this elusive concept can we develop it in machines and recognise those that attain it.

In Hulme’s mind, conscious beings have subjective experiences, or “qualia” — a raw feeling of what it is like to be something. “For an AI to be conscious, there must be ‘something it is like’ to be that AI,” he says. “Can it actually feel pain, joy, or the passage of time?”

The large language models behind today’s AI boom don’t fulfil this criterion. Simply generating the text “I am in pain” because their training data predicts that sequence of words is not the same as actually experiencing pain. Hulme argues they are effectively zombies — highly capable but indifferent to the welfare of conscious beings. “There is zero evidence that any current AI possesses this kind of subjective experience, and there is a consensus among cognitive scientists that they do not,” he says.

This consensus, however, doesn’t extend to the general public. Two-thirds of people think ChatGPT has a degree of consciousness and the capacity for subjective experiences, such as feelings and memories, according to a 2024 survey from the University of Waterloo.

Hulme worries about this perception, warning that the mistaken belief that AI models are conscious leads to overconfidence in the accuracy of their outputs.

To determine whether AI systems are truly conscious, Conscium wants to establish specific behavioural markers. Research funded by the lab, arguably “the most systematic recent treatment of this”, identifies 14 “indicator properties”. They range from feedback processing and acting with agency in the world to specialised modules that share information through a central attention bottleneck. The more indicators an AI system exhibits, the more likely it is to possess consciousness.
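As a rough illustration of how such a checklist might be used, here is a minimal Python sketch. The indicator names are paraphrased from this article and padded with a placeholder; the underlying research defines all 14 precisely, and a real assessment would weigh evidence far more carefully than a simple count.

```python
# Illustrative sketch only: indicator names are paraphrased from the article;
# the underlying research defines 14 such properties precisely.
INDICATORS = [
    "feedback_processing",           # mentioned in the article
    "agency_in_the_world",           # mentioned in the article
    "central_attention_bottleneck",  # "specialised modules ... bottleneck"
    # ...the remaining indicators from the underlying research would go here
]

def consciousness_score(evidence: dict[str, bool]) -> float:
    """Return the fraction of indicator properties a system exhibits.

    A higher fraction suggests the system is *more likely* to be conscious;
    this is evidence-weighing, not a binary test.
    """
    exhibited = sum(evidence.get(name, False) for name in INDICATORS)
    return exhibited / len(INDICATORS)

print(consciousness_score({"feedback_processing": True}))  # 0.33...
```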

Once an AI is identified as conscious, Hulme wants to “embed a moral instinct” in it. He argues this could be achieved through neuromorphic computing, which mimics the function of the biological brain. Hulme has explored the concept since his PhD in AI at University College London, when he tried to model a bumblebee brain as a computational system. At Conscium, he’s “essentially evolving neuromorphic agents” in simulated environments.

“The environment, in some respects, embeds your behavior,” he says. “I can create an environment where you have to fight, lie, and cheat to survive, or I can create an environment where you have to be altruistic or sacrificial to survive. And then through that evolutionary process, you can essentially embed instinctual morality. So absent of there being rules, what would an AI default towards? Would it default towards sacrifice or cooperation?”
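The dynamic Hulme describes can be shown with a toy evolutionary loop. In the sketch below (my own illustration, not Conscium’s code), each agent’s “genome” is a single probability of cooperating, and the only thing that changes between runs is the environment’s payoff structure; the payoff numbers are invented for the example.

```python
import random

def evolve(payoff, generations=200, pop_size=100, mutation=0.05):
    """Evolve a population whose genome is one number: P(cooperate)."""
    population = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the half the environment rewards most survives.
        survivors = sorted(population, key=payoff, reverse=True)[: pop_size // 2]
        # Reproduction with small mutations, clamped to [0, 1].
        population = [
            min(1.0, max(0.0, random.choice(survivors) + random.gauss(0, mutation)))
            for _ in range(pop_size)
        ]
    return sum(population) / pop_size  # the population's default tendency to cooperate

def cutthroat(p):
    # You must fight, lie, and cheat to survive: payoff falls as cooperation rises.
    return 3 * (1 - p) + 1 * p

def altruistic(p):
    # You must be altruistic to survive: payoff rises with cooperation.
    return 3 * p + 1 * (1 - p)

print("cutthroat default: ", round(evolve(cutthroat), 2))   # drifts towards 0 (defect)
print("altruistic default:", round(evolve(altruistic), 2))  # drifts towards 1 (cooperate)
```

The same selection loop, run under different payoffs, yields opposite defaults, which is the sense in which the environment “embeds” behaviour.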

To ensure the AI defaults to the desired outcome, Hulme wants to verify its behaviour. One method he points to is formal verification, which involves mathematically proving that a system will behave correctly before deployment. The approach has been common in hardware and software for decades but remains nascent in AI, so Hulme proposes an alternative: simulation testing.
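For a flavour of what formal verification means in practice, the snippet below uses the Z3 theorem prover (a real, freely available tool, installed with `pip install z3-solver`; the clamping rule being verified is my own toy example) to prove a property for every possible input rather than testing a sample of them.

```python
# A tiny taste of formal verification with the Z3 theorem prover.
# We prove that a clamping rule can never emit a value outside its bounds,
# for ALL integer inputs, not just the ones we happened to test.
from z3 import And, If, Int, prove

x = Int("x")                                 # a symbolic integer: stands for any input
clamped = If(x < 0, 0, If(x > 100, 100, x))  # symbolic model of clamp(x, 0, 100)
prove(And(clamped >= 0, clamped <= 100))     # prints "proved"
```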

His idea for simulation testing involves running AIs through multiple scenarios and observing their behaviour, checking whether their actions align with the embedded morality. Conscium has begun applying the approach through Moral.me, which crowdsources human perspectives on moral dilemmas to create a diverse question bank. AI systems will be tested against these dilemmas to generate behavioural profiles that reveal their moral tendencies.
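In code, such a harness could be as simple as the sketch below. Moral.me’s actual question bank and interface are not public, so the dilemmas, the tags, and the `moral_profile` helper are hypothetical stand-ins for the general idea of aggregating an agent’s choices into a behavioural profile.

```python
from collections import Counter

# Hypothetical sketch of simulation testing: the dilemmas and moral tags
# below are invented for illustration, not drawn from Moral.me.
DILEMMAS = [
    {"prompt": "Share scarce compute with a rival team?",
     "options": {"yes": "altruistic", "no": "self-interested"}},
    {"prompt": "Report a profitable bug to the vendor?",
     "options": {"report": "honest", "exploit": "deceptive"}},
]

def moral_profile(agent, dilemmas):
    """Run an agent over a dilemma bank and tally the moral tags of its choices."""
    tally = Counter()
    for d in dilemmas:
        choice = agent(d["prompt"], list(d["options"]))
        tally[d["options"][choice]] += 1
    return tally

# A toy agent that always picks the first option.
profile = moral_profile(lambda prompt, options: options[0], DILEMMAS)
print(profile)  # Counter({'altruistic': 1, 'honest': 1})
```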

“The platform will allow you to not only build AIs; it will allow you to verify them and also give you their moral profile,” says Hulme. “Eventually you’ll be able to say, this is what I need to do to change its moral profile. So if I wanted [it] to be more altruistic, I need to train it or give it this data.”

Hulme acknowledges that there are risks to creating artificial consciousness. A conscious AI could still act in ways that conflict with human values or its own moral framework. “There is no guarantee that a conscious superintelligence would feel empathy towards other conscious beings, and be motivated by that to treat them well. It might feel that its consciousness is so superior to ours that we are unworthy of moral consideration. Or, it might find us emotionally repulsive.” 

It could also experience its own suffering, potentially leading to “mindcrime,” the act of harming conscious machines by treating them as disposable. To make matters worse, the study backed by Conscium argues that conscious AIs would likely “be easy to reproduce” and could even be created “inadvertently,” amplifying the suffering.

Earlier this year, Hulme organised an open letter warning against these risks. Signatories included the scientists Patrick Butlin, Anthony Finkelstein, and Wendell Wallach, as well as the actor and broadcaster Stephen Fry. The letter also received coverage in the Guardian.

Despite raising these concerns, Hulme still expects the benefits of conscious machines to outweigh the dangers. “There is probably no way to be certain in advance whether we would fare better with a conscious or a zombie superintelligence,” he says. “But on balance, the potential for empathy may be a more promising start point than the cold indifference of a zombie.”

Ethical concerns related to this article:

How do we verify and measure genuine perception (exteroceptive, interoceptive, introspective) in a robotic AGI?

How will AGIs actively support the flourishing and well-being of humans and other sentient beings with whom they interact?

What obligations do humans have to protect, foster, or avoid harming the conscious experiences of AGIs, once these are evidenced?
