In this episode of The Business of Thinking, Richard Reid is joined by AI expert, author, and futurist Calum Chace for a wide-ranging conversation on artificial intelligence, superintelligence, and what the future may hold for humanity.
Calum breaks down where AI is today, where it’s heading next, and why we may be sleepwalking into one of the biggest transitions our species has ever faced. From AI agents and automation to consciousness, work, and the possibility of a post-jobs economy, this episode challenges assumptions and asks uncomfortable but essential questions about power, responsibility, and preparedness.
Key Takeaways
Artificial intelligence is already transforming how work gets done, but AI agents capable of acting with partial autonomy will dramatically accelerate this change.
Superintelligence may arrive far sooner than most people expect, potentially within the lifetime of today’s workforce.
Automation could eliminate most human jobs, but that doesn’t necessarily mean the end of purpose, meaning, or motivation.
The biggest risk is not technological failure, but a lack of planning, poor governance, and the unequal distribution of wealth.
Humanity may only get one chance to influence how superintelligence treats us, making decisions about safety and values critically important now.
Episode Highlights
Calum explains the difference between narrow AI, AGI, and superintelligence, and why the transition could happen incredibly fast.
The conversation explores AI agents, verification, and why unsupervised systems raise serious safety concerns.
Richard and Calum discuss the economic singularity and what happens when machines can do all paid work.
Calum shares an optimistic but realistic view of a future where humans focus on learning, creativity, and connection rather than jobs.
The episode ends with a fascinating discussion on AI consciousness and whether making machines conscious could actually make them safer.
Timestamps
00:00 – Welcome to The Business of Thinking
01:05 – Calum’s background and early interest in AI
02:23 – The two big breakthroughs that changed AI
03:20 – Large language models and their limitations
04:34 – AI agents and partial autonomy
06:18 – Verification, supervision, and AI safety
07:43 – AGI, superintelligence, and the 2029 debate
09:53 – Fear, optimism, and extinction risk
11:52 – Automation and the end of human jobs
12:59 – Wealth distribution and global inequality
14:16 – Governments, politics, and lack of preparation
15:59 – Work, identity, and human motivation
18:42 – What businesses should focus on right now
21:52 – Common sense, world models, and timelines
22:49 – Becoming the second smartest species
25:10 – Conscious vs unconscious superintelligence
28:22 – Why we probably won’t stop AI development
30:33 – Current projects and future priorities
32:26 – Where to find Calum’s work