The study of consciousness used to be career-limiting for serious scientists. Fortunately, that is no longer the case, thanks to improvements in brain imaging, changing philosophical climates, and the decline of behaviourism.
Following are some of the scientists not directly involved with Conscium who are exploring whether machines can become conscious, whether they should be made conscious, and how it could happen. They are listed alphabetically by surname.
Jonathan Birch is a professor of philosophy at the London School of Economics. He is probably best known as a co-author of the New York Declaration on Animal Consciousness, along with Kristin Andrews of York University in Ontario and Jeff Sebo of New York University. The declaration was launched in April 2024 and, as of February 2025, had 550 signatures. Birch’s second book, “The Edge of Sentience: Risk and Precaution in Humans, Other Animals, and AI”, was released in July 2024 and discusses the possibility of AIs becoming conscious.
David Chalmers is an Australian philosopher and cognitive scientist best known for coining the phrase “the hard problem of consciousness” in his 1995 paper “Facing Up to the Problem of Consciousness”. The easy problems are to provide mechanistic explanations of how cognitive functions such as memory, perception, and behaviour work, while the hard problem is to explain how subjective experience – what it feels like to be conscious – arises from these physical processes. His early work with Douglas Hofstadter equipped Chalmers with an understanding of both the computational and the philosophical aspects of consciousness. He argues that a sufficiently advanced AI, with the right architecture, could have subjective experiences, but that current AI systems are far from achieving that. However, he also holds that AIs could be philosophical zombies – behaving exactly like conscious beings while lacking subjective awareness. He warns against premature assumptions, and calls for rigorous testing and careful ethical consideration of AI sentience. His latest book, “Reality+”, argues that virtual worlds can reframe our understanding of consciousness and personal identity, and that experiences in virtual worlds can be just as meaningful as those in the physical world.
Michele Farisco, Kathinka Evers, and Jean-Pierre Changeux co-authored the paper “Is artificial consciousness achievable? Lessons from the human brain” in December 2024. It argued, among other things, that machine consciousness could be qualitatively different from human consciousness, and that it could be either more or less sophisticated.
Steve Fleming is a Professor of Cognitive Neuroscience at University College London (UCL), where he leads pioneering research into human consciousness, metacognition, and their implications for AI. His research bridges neuroscience, psychology, and computational modelling to explore how self-awareness and subjective experience emerge in biological and artificial systems. His work on metacognition – the brain’s ability to monitor and reflect on its own thought processes – has provided insights into how humans become aware of their own knowledge, errors, and decision-making strategies. He directs UCL’s Metacognition Group, which studies the neural and computational underpinnings of self-awareness using a combination of behavioural experiments, functional neuroimaging (fMRI), and machine learning.
Tom McClelland is a philosopher at Clare College, University of Cambridge, and an Associate Fellow of the Leverhulme Centre for the Future of Intelligence. His research spans the philosophy of cognitive science, metaphysics, aesthetics, and applied ethics. In papers such as “Agnosticism About Artificial Consciousness” and “Will AI Ever Be Conscious?” he contends that we lack the evidence to know whether artificial systems can possess conscious experiences, but that we should be proactive in developing strategies to navigate the potential consequences of AI consciousness. He has also studied the concept of affordances for mental action, proposing that individuals perceive affordances not only for physical actions but also for mental actions such as attention, imagination, and deliberation.
Kenichirō “Ken” Mogi is a senior researcher at Sony Computer Science Laboratories and a visiting professor at the Tokyo Institute of Technology. His May 2024 paper “Artificial intelligence, human cognition, and conscious supremacy” proposes “conscious supremacy” as a concept analogous to quantum supremacy: certain computations may be radically easier for conscious systems to carry out than for non-conscious ones. The paper also discusses the relevance of consciousness to AI alignment.
Henry Shevlin is the Associate Director of the Leverhulme Centre for the Future of Intelligence (CFI) at the University of Cambridge, where he also co-directs the Kinds of Intelligence programme and oversees educational initiatives. His research addresses the potential for machines to possess consciousness, the ethical ramifications of such developments, and the broader implications for our understanding of intelligence. In his 2024 paper “Consciousness, Machines, and Moral Status”, he examines the recent rapid advances in machine learning and the questions they raise about machine consciousness and moral status. He suggests that public attitudes towards artificial consciousness may change swiftly as human-AI interactions become increasingly complex and intimate. He also warns that our tendency to anthropomorphise may lead to misplaced trust in, and emotional attachment to, AIs.
Reddit has, of course, at least one community dedicated to the discussion of artificial consciousness: https://www.reddit.com/r/ArtificialSentience/