Daniel Hulme is the cover feature in AI Magazine.
WPP’s CAIO discusses neuromorphic computing at Conscium, the agent verification crisis and why his 40-year timeline for superintelligence has collapsed
Daniel Hulme thought he had 40 years to prepare for superintelligence, but that timeline has collapsed. The Chief AI Officer at WPP and founder of Conscium and Satalia now works under the assumption that machines capable of outthinking humans by orders of magnitude will arrive within the decade, which raises an uncomfortable question: would humanity be safer with a superintelligence that can experience the full range of human emotions, including pain?
“I thought we had 40 years before superintelligence came along,” he says. “Now, I don’t think we have 40 years to solve that problem. So the question that I’m really asking myself now is: can we – and should we – build a conscious superintelligence?”
It runs counter to most AI safety thinking, but Daniel argues that consciousness might work as a safety mechanism. A machine that understands suffering might show restraint where a “zombie superintelligence”, as he describes it, might not. Conscium, which he started 18 months ago, has two objectives: building neuromorphic computing systems and verifying AI agents, including whether they are conscious. The company’s first product tackles a problem organisations are facing right now: checking whether AI agents are effective in the roles they’ve been engineered for.
Neuromorphic systems use spikes, not numbers
Large language models burn through training data and learn slowly, while human brains operate on about 20 watts – approximately the power of a light bulb – and pick things up from single examples.
“LLMs are crude representations of our brain. They require nuclear power stations to run. They require lots and lots of data to learn and they’re not adaptive at all,” Daniel says. “Your brain operates on the power of a light bulb and you learn incredibly quickly. I don’t have to say ‘that’s a phone’ to you. Once you know what phones are, you don’t need to see millions of examples of phones.”

Neuromorphic computing takes a different approach by copying how biological brains function. Rather than passing continuous numerical values around a network, neurons fire spikes at different frequencies, so information is carried in the timing and rate of those spikes.
The technology has promise: spiking neural networks pack more information into fewer neurons. But they are complicated to build. GPUs excel at propagating numbers but struggle with spikes, and that technical hurdle has kept neuromorphic computing confined to research labs for two decades. Now that large language models are widely regarded as a solved problem, academics are moving on to harder ones.
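The difference can be made concrete with a toy model. The sketch below is a minimal leaky integrate-and-fire neuron in Python, purely illustrative rather than anything Conscium has built: the neuron encodes its input in how often it spikes, not in a continuous activation value.

```python
import numpy as np

# Minimal leaky integrate-and-fire neuron: an illustrative sketch only,
# not Conscium's implementation. The signal is carried by spike timing
# and rate rather than by a continuous activation value.

def simulate_lif(input_current, dt=1.0, tau=20.0,
                 v_rest=0.0, v_threshold=1.0, v_reset=0.0):
    """Return the time steps at which the neuron fires."""
    v = v_rest
    spike_times = []
    for t, i_in in enumerate(input_current):
        # Membrane potential leaks back towards rest and integrates input.
        v += (dt / tau) * (v_rest - v) + i_in
        if v >= v_threshold:        # threshold crossed: emit a spike
            spike_times.append(t)
            v = v_reset             # reset after firing
    return spike_times

# A stronger stimulus is encoded as a higher spike frequency.
weak = simulate_lif(np.full(200, 0.06))
strong = simulate_lif(np.full(200, 0.15))
print(len(weak), len(strong))       # the weak input fires far less often
```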
“Conscium is investing in neuromorphic systems and we’re already showing some improvements in terms of energy reduction and adaptivity,” Daniel says.
Verifying AI agents that behave like “intoxicated graduates”
Current AI agents can handle basic tasks but fail at more complex ones, yet companies deploy them across critical business operations anyway.
“Arguably right now, agents are a bit like deploying an army of intoxicated graduates across your organisation, hoping it’s going to be successful – it won’t be,” he says.
Conscium’s first product tests whether agents have the skills their jobs require. Agents will get smarter, reaching postdoctoral level within a few years and applying complex scientific approaches to problems. By the end of the decade, Daniel expects professor-level agents that can ask questions humans have not thought of.
As agents gain these capabilities, verification will become more than a quality control measure. Conscium plans to verify these more intelligent agents for consciousness, addressing what Oxford professor Nick Bostrom called “mindcrime”: the concept of building machines, putting them in awful situations and not realising they suffer. If conscious machines emerge from any laboratory, verification tools must exist to identify them.
A study published in Neuroscience of Consciousness in 2024 found that 67% of participants attribute some degree of consciousness to ChatGPT. Daniel disagrees but considers it a reasonable assumption given how convincingly the systems perform consciousness.

Four ways to make agents smarter; four ways to fail
Making agents smarter breaks down into four approaches, sketched in the example below. First is prompt engineering: asking better questions gets better answers without modifying the underlying model. Second is RAG, which gives agents context like brand guidelines and tone of voice. “Just like an intoxicated graduate that has access to your brand guidelines, it will give you an ad that’s 50% good,” Daniel says. Third is fine-tuning, which turns graduates into experts through years of training, though not all models support this. Fourth is multi-agent reasoning, where specialists in copy, imagery and brand guidelines collaborate to produce results greater than any single agent could achieve.
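The Python sketch below shows the four approaches side by side. The `ask` and `retrieve` functions, and the running-shoe brief, are hypothetical placeholders rather than any particular vendor’s API, and the whole thing is a pattern sketch, not WPP’s or Conscium’s code.

```python
# Illustrative sketch of the four approaches, not WPP's or Conscium's code.
# `ask` stands in for whatever LLM call an organisation uses and `retrieve`
# for search over its own documents; both are hypothetical placeholders.

def ask(prompt: str) -> str:
    """Placeholder LLM call; swap in a real client."""
    return f"<model response to: {prompt[:40]}...>"

def retrieve(query: str) -> list[str]:
    """Placeholder retrieval over the organisation's own documents."""
    return ["Always lead with the benefit.", "Keep the tone warm and direct."]

brief = "Write a headline for our new running shoe."

# 1. Prompt engineering: a better question, same underlying model.
draft = ask(f"You are a senior copywriter. {brief} Offer three options.")

# 2. RAG: retrieved brand guidelines give the agent context to work from.
guidelines = retrieve("tone of voice")
draft = ask(f"{brief}\nFollow these guidelines:\n" + "\n".join(guidelines))

# 3. Fine-tuning happens offline: the base model is retrained on past
#    campaigns, so the same prompt produces more expert output
#    (not all models support it).

# 4. Multi-agent reasoning: specialists critique and refine each other's work.
copy = ask(f"Draft the copy: {brief}")
critique = ask(f"As a brand-guidelines specialist, critique this copy: {copy}")
final = ask(f"Revise the copy.\nCopy: {copy}\nCritique: {critique}")
print(final)
```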
Four things signal AI adoption will fail. First, organisations that have never built and scaled software will struggle because AI operates like software but proves more complicated. “It’s not a magic wand you can just deploy and hopefully it works. The reality is much more complicated than software,” Daniel says.
Second is talent. Everyone has rebranded as an AI expert over three years, but building differentiated solutions requires differentiated talent. Satalia has 250 to 300 deep experts at WPP, and attracting world-class talent demands ecosystems where people develop careers and learn from each other. Without this, they build models, get stuck maintaining them and leave.
Third, don’t wait for data to be sorted. “We’ve all been told for the past 15 years to build data lakes and get our data in order. The data is still not in order. Your data will never be in order,” Daniel says. Start with the problem and work backwards, joining up data over time as more problems get solved.
Fourth, quick wins rarely differentiate. “Everybody wants to do AI right now and the reality is that quick wins and low-hanging fruit are most likely not going to differentiate your business,” he adds. Organisations should focus on applying AI to separate from competitors, accepting that meaningful work takes time.
WPP’s dual strategy for democratisation and centralised expertise
WPP made Daniel Chief AI Officer four years ago, before ChatGPT launched. The company had been investing in AI for a number of years, recognising that the media and creative industries would face different challenges.
“You can now create content very quickly, you can test that content against synthetic audiences – it’s an industry that’s been completely disrupted,” Daniel says.
WPP Open serves as the company’s AI platform for enterprise clients. “My responsibility is to ensure that the intelligence inside WPP Open is differentiated: that it’s able to identify segments better, understand audience perception better and create content better,” he says.
WPP Open Pro launched for smaller companies that may not traditionally work with large agencies, opening up whole new sectors to professional marketing capabilities. Daniel’s focus now is democratisation: enabling people across WPP to build agents safely whilst keeping Satalia as the centre for advanced algorithmic work.
Seven singularities, not one
Daniel sorts AI risks three ways: micro, malicious and macro. Micro risks concern safe deployment, and he challenges conventional thinking on how to address them. “I would actually controversially argue there’s no such thing as AI ethics. Ethics is the study of right and wrong. And for me, the real difference between human beings and AIs is AIs don’t have intent,” he says.
Malicious risks are a matter for government, he says, covering bad actors who might use AI to develop pathogens or launch cyber-attacks. But macro risks present the largest challenge, and here Daniel moves beyond AI scientist Ray Kurzweil’s technological ‘singularity’ – the moment when humans build superintelligence – having mapped seven singularities using STEEPLE analysis, covering social, technological, economic, environmental, political, legal and ethical dimensions.
The social singularity arrives when humanity cures death. “I don’t know what the world will look like when we realise there are people amongst us that won’t have to die. It changes the way we educate ourselves. It changes the way we form relationships,” Daniel says. The technological singularity means building intelligence a million times more powerful than human minds. The ethical singularity comes when machines gain consciousness and deserve moral consideration.
The environmental singularity marks a tipping point where humanity either loses or regains control of planetary ecosystems. “If we apply algorithms and AI in the right way across our supply chains, we could easily halve the amount of energy that we need to run this planet,” he says. The legal singularity arrives when surveillance becomes so ubiquitous that predicting and manipulating behaviour becomes trivial for those who control the systems.
But the economic singularity troubles Daniel most because two radically different futures are possible. Rapid automation could displace workers faster than economies can adapt, triggering social unrest. The alternative sees automation removing so much friction from production that goods become effectively free.
“Imagine being born into a world where you don’t have access to paid work, but everything you need to survive and thrive as a human is free. Your food, your healthcare, your energy, your transport, your education – it’s all there for you,” he says.
This would be protopia rather than utopia: a system getting incrementally better over time. “Most of the people I know who are economically free are not sitting around bored and depressed,” he says. Push them hard enough and they all express the same desire: to make the world better.
All of which brings him to consciousness.

The colour wheel theory of consciousness
Large language models regurgitate patterns from their training data. If the internet contains enough statements that Socrates is a man and that all men are mortal, Daniel notes, models can work out that Socrates is mortal. Without that data, they fail.
The next step involves checking assumptions and using reasoning to verify whether claims are actually true. “What we’re moving from is a statistical regurgitation of the internet to the ability to actually check whether those axioms are true or not,” he says. When AI can do proper science – test hypotheses, design experiments and verify results – quality will jump. He expects that within a year.
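The Socrates example can be made concrete with a toy contrast. The Python below is illustrative only, not how any production model reasons: pattern matching can only answer if the conclusion was already written down, while a simple deduction rule derives it from the two axioms.

```python
# A toy contrast between regurgitating text and checking axioms,
# purely illustrative of the distinction Daniel draws.

# Pattern matching: the answer is only there if it was already written down.
training_text = "Socrates is a man. All men are mortal."
print("Socrates is mortal" in training_text)           # False: never stated

# Deduction: the same claim follows from the axioms by applying a rule.
facts = {("man", "Socrates")}
rules = [("man", "mortal")]      # if X is a man, then X is mortal

def derive(facts, rules):
    """Apply each rule to every matching fact and return the enlarged set."""
    derived = set(facts)
    for premise, conclusion in rules:
        for predicate, subject in facts:
            if predicate == premise:
                derived.add((conclusion, subject))
    return derived

print(("mortal", "Socrates") in derive(facts, rules))   # True: it follows
```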
People often confuse intelligence and consciousness. Intelligence, as Daniel defines it, is goal-directed adaptive behaviour. Consciousness includes features like language, long-term planning, feeling and self-awareness.
Daniel treats these as segments on a colour wheel. “Imagine you’ve got these different segments, and if you spin the colour wheel, if you had all of the colours, then what would emerge from that is white. White doesn’t exist on the colour wheel, but what you would see is white,” he says.
Consciousness emerges from these segments in motion. Stop the wheel and consciousness disappears. Daniel has shifted to a narrower question: what does it mean for machines to suffer? Which segments create the experience of pain? These questions bring him back to whether a superintelligence that can suffer would treat humanity differently than one that cannot.
“In the same way that we value things as conscious beings, we try to mitigate suffering in things that can suffer. Perhaps a conscious superintelligence would lean the same way,” he says.



