Pioneering Safe, Efficient AI.


This is the year of AI agents, according to … well, just about everyone in the AI community. An AI agent is an AI that can act on the real world directly, either by controlling a physical robot or by operating a computer with access to the internet.
Simple agents have existed for years. Amazon’s Alexa could turn light switches on and off, and play music. But until very recently, advanced AIs were limited to sourcing, analysing, and synthesising information. Now, increasingly, they are able to draw up sophisticated, multi-stage plans, and execute them without human supervision.
Agents like this are being used to automate many of the tasks which humans perform as part of their jobs. They can book an airline ticket, check that an ad complies with a client’s brand guidelines, reconcile expenses against a budget, or manage a project involving multiple humans and other agents.
These agents need to be verified, to check that they do what they are supposed to do, and only that. Agent verification will be a large and important business, and Conscium will be a leading player in it.
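To make that concrete, here is a minimal illustrative sketch in Python of one kind of pre-execution check a verifier might run: confirming that every action an agent proposes stays within an agreed scope. The action names and spending limit are hypothetical examples, not Conscium's actual tooling; real verification also involves behavioural testing, monitoring, and formal analysis.

```python
# Illustrative only: a toy pre-execution check that an agent's planned
# actions stay within an agreed scope. The allowlist and limit below are
# hypothetical; real agent verification is far broader than this.

ALLOWED_ACTIONS = {"search_flights", "compare_prices", "book_ticket"}
MAX_SPEND_GBP = 500  # hypothetical spending limit agreed with the user

def verify_plan(plan: list[dict]) -> list[str]:
    """Return a list of violations found in the agent's proposed plan."""
    violations = []
    for step in plan:
        if step["action"] not in ALLOWED_ACTIONS:
            violations.append(f"Action not permitted: {step['action']}")
        if step.get("spend_gbp", 0) > MAX_SPEND_GBP:
            violations.append(f"Spend limit exceeded: {step['spend_gbp']} GBP")
    return violations

# The plan is only executed if no violations are found.
plan = [
    {"action": "search_flights"},
    {"action": "book_ticket", "spend_gbp": 320},
]
assert verify_plan(plan) == []
```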
The possibility that a machine may become conscious in the coming years or decades is Conscium’s foundational idea. There are at least three reasons to care about this idea.
First, artificial consciousness is likely to illuminate our own consciousness. Second, it is imperative that we avoid mind crime – harming any minds that we create. Third, it is possible that a conscious superintelligence would be more aligned with humanity.
Our machine consciousness workstream seeks to answer the following questions: Could machines become conscious? If so, how? How could we detect it? Would it be a good idea?
To progress this research we have teamed up with Professor Mark Solms of the University of Cape Town in South Africa. Mark combines neuroscience and psychoanalysis, and he is building agents which exhibit the precursors of consciousness.
There is currently no business model for this workstream, and there may never be. But it might turn out to be the most important research that anyone ever does.
Today’s leading AI models are notoriously gluttonous for data and energy. The human brain runs on roughly 20 watts, the power of a light bulb, whereas it takes the energy of a city to train a Large Language Model (LLM) like GPT-4. LLMs are also brittle, and they cannot learn new skills without costly re-training.
In 1989, the American engineer Carver Mead proposed building electronic circuits that emulate the behaviour of animal neurons and synapses, an approach he called neuromorphic computing. Spiking neural networks (SNNs) are the computational models at its heart. Since then, a growing community of scientists and engineers has demonstrated potential benefits of speed, efficiency, and resilience from neuromorphic hardware and software, but the systems have so far proved hard to scale.
We have teamed up with Jason Eshraghian of the University of California, Santa Cruz. He is the developer of snnTorch, a Python library for training SNNs that has been downloaded more than 200,000 times.
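To give a flavour of what SNN software looks like, the sketch below uses snnTorch's leaky integrate-and-fire neuron to turn a constant input current into a train of discrete spikes. The parameter values and the 25-step horizon are illustrative choices, not anything specific to our research.

```python
import torch
import snntorch as snn

# A single leaky integrate-and-fire (LIF) neuron from snnTorch.
# beta is the membrane potential decay rate per timestep (illustrative value).
lif = snn.Leaky(beta=0.9)

mem = lif.init_leaky()               # initialise the membrane potential
input_current = torch.ones(1) * 0.3  # constant input current (illustrative)

spikes = []
for step in range(25):
    # The membrane integrates the input each step and emits a spike
    # (and resets) whenever it crosses the firing threshold.
    spk, mem = lif(input_current, mem)
    spikes.append(spk)

print(torch.stack(spikes).squeeze())  # 0/1 spike train over 25 timesteps
```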
Neuromorphic computing systems are starting to fulfil their promise in niche applications like remote sensing and drone navigation, where power, weight, and time are at a premium. We intend to be a leading player in these areas.

Advancing our understanding of what it means to be human.
Join the Conscious AI meetup to stay informed about events.
Meet our team
The Conscium team brings to bear many decades of experience in AI, artificial life, software development, and creating and scaling organisations. It is led by Dr Daniel Hulme, who co-founded an AI consultancy in 2012 and sold it a decade later to WPP for $100m.
Dr. Daniel Hulme is a pioneer in Artificial Life, spanning both neural networks (modelling bumblebee brains) and computational complexity.
Daniel sold his AI business, Satalia, to WPP in 2021.
He is now WPP’s Chief AI Officer and Entrepreneur-in-Residence at UCL. WPP is a co-founder of Conscium.
Dr. Daniel Hulme
An expert in spatio-temporal computation and neural architectures for multi-modal data, he currently leads Satalia’s data science team and serves as the technical lead for WPP’s AI programme.
Assoc. Prof. Ted Lappas
An expert in evolutionary computation and data-driven optimisation. He currently leads WPP’s AI Research Labs.
Dr Panagiotis (Panos) Repoussis
Ed Charvet is a serial entrepreneur who has also directed strategy for large multinationals and served as COO.
He advises private equity firms and is an angel investor.
Ed Charvet
Calum Chace spent 30 years in business, mostly in strategy consulting. He has written several best-selling books on the future of AI, and has given keynote talks in over 20 countries. He advises governments and companies on AI policy.
Calum Chace
FAQs
We don’t know what gives rise to consciousness. There are many theories, and none have achieved the status of scientific consensus. Many neuroscientists are ‘computational functionalists’, which means they believe that information processing is what gives rise to consciousness. On this view, machines process information, so they could very well become conscious.
For many decades, computers have been getting twice as powerful every 18 months or so. This exponential growth is remarkably powerful. A computer in the year 2000 with the power of today’s smartphones would have been among the most powerful machines in the world and would have cost tens of millions of pounds. This exponential growth is not slowing – if anything it seems to be accelerating. We do not know what the dramatically more powerful computers of 2045 will be capable of. Unless consciousness is a spiritual or magical process, conscious machines – of some type – should be possible in the next decade or two.
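To put rough numbers on that compounding, here is a back-of-the-envelope calculation, assuming one doubling every 18 months (a simplifying assumption; real progress is less even):

```python
# Back-of-the-envelope compounding, assuming one doubling every 18 months.
years = 2025 - 2000
doublings = years / 1.5          # one doubling per 1.5 years -> ~16.7 doublings
growth_factor = 2 ** doublings   # roughly a 100,000x increase in raw power
print(f"{doublings:.1f} doublings -> about {growth_factor:,.0f}x")
```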
A very intelligent machine does not need to be conscious to have goals. In fact, machines already have goals, because we give them goals. These goals entail sub-goals. For instance, if you give a machine the goal of winning games of chess, it will inevitably acquire the sub-goal of surviving long enough to win some games.
We think it is likely that a machine endowed with consciousness will appreciate the value of that phenomenon in a way that a machine without it never could. This does not mean that it will inevitably like us or seek our welfare, but it is reasonable to think that it might.
AI may or may not achieve consciousness, but what it will almost certainly achieve is super-intelligence. A super-intelligent AI that lacks consciousness will lack subjective experience (qualia) and will never be able to empathise with humans. We will just have to hope that it doesn’t determine that harming humans helps whatever objective it is trying to achieve.
We won’t just make a self-adaptive AI and push it until it achieves consciousness by any means necessary. We will plan and verify every step of the journey. Verification is a key part of our design. Most AI labs treat safety as an added extra, with capability and commercial success as the primary goals. In a very real sense, Conscium is the other way round: we aim to generate healthy commercial returns as a by-product of our work to understand consciousness and make AI safe.
It is possible, and so might a machine without consciousness. In fact, the great majority of humans treat other conscious entities with more respect than they treat non-conscious ones. Most humans would think twice before treating an animal the way they treat a hammer, for instance.
Being conscious or aware doesn’t mean you are benevolent; we have plenty of examples of that in humans. If AI does achieve consciousness, we want it to be because we carefully planned it. We don’t want it to just happen, leaving us hoping that we detect it in time and that its benevolence is not a coin toss.
Today’s most advanced AIs are developed by engineers and data scientists. The neuroscience, philosophy, psychology, ethics, and anthropology communities are on the outside looking in, sometimes raising red flags that are ignored. We are building a dedicated, multidisciplinary team that draws top minds from all of these communities.
The harms you describe are real, and need to be addressed, although we think the conspiracy claim is nonsense. (No company would divert attention from malfeasance by saying it might destroy the species.) The gravity of short-term AI risks does not make the longer-term risks summed up in the phrase “AI safety” any less real. It is obviously possible to have two sets of problems at the same time. One does not cancel the other out.
These kinds of unacceptable outcomes are known as mind crime, and they are one of the main reasons for Conscium’s existence. As we develop more and more capable machines, it is entirely possible that we will create conscious ones, and if we are not looking out for this, we may be unaware of it. We could end up inadvertently committing mind crime on a massive scale, and we must avoid this. We need to understand much more about how consciousness arises, whether and how it could arise in machines, and how to detect and measure it in machines. This is a core part of Conscium’s work.
There are many different explanations of the origin of consciousness in humans and animals, and it is by no means universally accepted that a god played a role. There are many different religions and doctrines in the world, and they all have contradictory explanations of how humans were created. We do not accept that any one religion has the right to impose its explanation on all other religions, and on atheists. Furthermore, if a god did create us, and if humans do create conscious machines, then that will either mean that the god intended us to do so, or that creating consciousness is not the sole prerogative of the god.
Neuromorphic computing seeks to mimic the structure and functioning of the biological brain, particularly neurons and synapses, to achieve more efficient processing. It is true that neuromorphic computing is still unproven, but we have only recently had machines (e.g. SpiNNaker at the University of Manchester) operating at scale. There is no guarantee of their ultimate value, but there are promising signs.
Traditional ANNs have produced excellent results, but in some ways their capabilities are very narrow – they lack much of the functionality that even the simplest organisms exhibit. They also fall increasingly short in several respects, including energy use, learning speed and flexibility, and graceful degradation in the presence of noise, errors, and latency.
Latest Blog Articles

Trump has shown Europe it needs to build a full-stack AI industry
This article first appeared in Fortune on 29 May 2025. Europe did not outsource its defense capabilities to the USA, but it…

What Should We Call Our AI Agents?
As large language models evolve into true agents (persistent, memory-rich, goal-oriented companions), an interesting question arises: what should we call them? Not just their…
Explainers
Explainers break down key AI concepts and terms in an accessible way, helping you navigate the complex world of artificial intelligence.

Why is machine consciousness important?
In this video, Daniel explains why Conscium was founded, and why we should all be interested in whether machines can become conscious…

Machine consciousness and the 4Cs
Calum explains four scenarios (the four Cs) that may play out when superintelligence arrives, and the role that machine consciousness could play…

Advocates for machine consciousness research
The following organisations argue that machines may become conscious in the coming years or decades, and that for various reasons, it is…
Latest Media Articles

Calum explains Conscium, and argues that Europe needs to build a full-stack AI industry
Published in POLITICO Pro Morning Technology UK newsletter on 30 May 2025. What are you working on at the moment? In addition…

Superintelligence: science fiction or not?
Edd Gent spoke to Daniel about how we should think about the promise and the peril of superintelligence.

Conscium commentary on the UK’s tech visa policy announcement
The UK government has promised more visas for highly skilled tech talent as it looks to woo international AI workers and science graduates…
Get in touch
We are currently in stealth mode, but feel free to get in touch.
