Are today's LLMs conscious?

Many people are tempted to believe that large language models (LLMs), such as ChatGPT, Gemini, and Mistral, might be conscious. This is understandable: they produce remarkably human-like text and conversations.
However, the vast majority of experts in fields such as AI research, neuroscience, philosophy, and software engineering believe it's extremely unlikely that today's LLMs are conscious. On this view, LLMs process statistical patterns in language without subjective experience, emotion, or awareness. In Thomas Nagel's phrase, there is nothing it is like to be an LLM.
It’s certainly possible that future AI systems could achieve forms of consciousness. Conscium was founded partly to address the important safety questions such developments would raise – both for humans and machines.
For now, though, LLMs lack the complex biological or computational structures that seem essential for conscious experience. Nor do they consistently display the behaviors we associate with consciousness, such as self-awareness, intentional understanding, or unified perception.
