On the GAEA podcast, Daniel explains:
• Why machine consciousness isn’t science fiction – and why it matters for AI safety right now
• His “colour wheel” model of consciousness – and why it only exists in motion
• Why a zombie superintelligence with no concept of suffering could be more dangerous than a conscious one
• How intelligence emerges from simple systems – lessons from bumblebees with only one million neurons
• Why large language models are “intoxicated graduates” – capable but fundamentally flawed
• The seven singularities humanity faces and why the technological singularity is the biggest existential threat
• Why guardrails and rules will never be enough to control superintelligent AI
• How moral systems can be embedded into AI through evolutionary processes rather than programming rules
• The case for local edge computing and data sovereignty as an alternative to centralised cloud models
• Why the economic singularity could lead to unprecedented abundance – freeing humanity from economic constraints
• How AI-driven misinformation might paradoxically restore the value of human connection and critical thinking
• Why the next five years will see a Cambrian explosion of AI innovation – and what comes after
