Recommended Reading on AI
Priority Reading
AI 2027
Daniel Kokotajlo, Eli Lifland, Thomas Larsen, and Romeo Dean (April 2025)
A deeply researched scenario forecast, informed by dozens of tabletop exercises and expert consultations, describing a possible trajectory from current AI capabilities to superintelligence.
The Last Invention (podcast)
Longview Investigations (2025 to present)
A well-researched and professionally produced podcast that covers the history, prominent players, and future of the race toward advanced AI. Start with the first eight episodes, each under an hour.
Open the pod bay doors, Congress
Rep. Nathaniel Moran (R-TX), The Hill (October 2025)
Congressman Moran argues that automated AI R&D will define global competition and that Congress must act before AI systems operate without human oversight.
A new Moore’s Law for AI agents
AI Digest
A short explainer showing that the length of tasks autonomous AI agents can complete has been doubling on a regular cadence, with projections suggesting agents could complete month-long coding projects between 2027 and 2029.
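The 2027-to-2029 projection follows from simple exponential extrapolation. The sketch below is illustrative only: the starting horizon of roughly one hour and the four-to-seven-month doubling times are assumptions chosen to show the arithmetic, not figures taken from the explainer itself.

```python
import math

def years_until_horizon(current_hours, target_hours, doubling_months):
    """Years until the agent task horizon reaches target_hours,
    assuming exponential growth with the given doubling time."""
    doublings = math.log2(target_hours / current_hours)
    return doublings * doubling_months / 12

# Treat a "month-long project" as roughly 167 working hours
# (four weeks of ~42 hours each) -- an illustrative assumption.
for doubling in (4, 7):
    years = years_until_horizon(1, 167, doubling)
    print(f"Doubling every {doubling} months -> ~{years:.1f} years")
```

Under these assumptions the horizon reaches a month-long project in roughly 2.5 to 4.3 years, which, counted from early 2025, lands in the 2027-to-2029 window the explainer projects.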
From AI Policy Network Staff
AIPN’s Mark Beall Testifies Before Congress on Superintelligence and Competition with China (video)
House Select Committee on the CCP
Testimony from AIPN’s President of Government Affairs before the House Select Committee on the Chinese Communist Party on why U.S. AI leadership requires both acceleration and safety guardrails.
Silicon Valley Takes AGI Seriously — Washington Should Too
Daniel Colson, TIME
AIPN’s Executive Director argues that artificial general intelligence is no longer a distant speculation and that Washington needs to start taking the prospect seriously.
A Conservative Approach to AGI
Mark Beall, The American Mind (July 2025)
The case for why conservatives should take AI risk seriously, framing the alignment problem through human nature and institutional restraint.
We need guardrails for artificial superintelligence NOW — before it’s too late
Rep. Chris Stewart (R-UT) and Mark Beall, New York Post
Former Congressman Chris Stewart and AIPN’s President of Government Affairs argue the U.S. faces two AI races with China: one for dominance and one for safety.
Will AI R&D Automation Cause a Software Intelligence Explosion?
Daniel Eth and Tom Davidson
This report, co-authored by AIPN’s Daniel Eth, argues that once AI automates AI R&D, software feedback loops could overcome diminishing returns and cause AI capabilities to accelerate dramatically.
If We Build AI Superintelligence, Do We All Die?
Peter Wildeford
AIPN’s Peter Wildeford reviews Yudkowsky and Soares’s If Anyone Builds It, Everyone Dies, endorsing the seriousness of extinction risk from AI superintelligence while expressing genuine uncertainty about some of the book’s claims about the difficulty of the alignment problem.
Perspectives from Industry Leaders
The Gentle Singularity
Sam Altman (June 2025)
OpenAI’s CEO describes a near-term trajectory from AI-accelerated research to superintelligence, arguing that exponential progress will feel gradual but will fundamentally reshape scientific discovery, energy, and economic life within a decade.
The Adolescence of Technology
Dario Amodei (January 2026)
Anthropic’s CEO characterizes the imminent arrival of powerful AI as a civilizational “rite of passage,” cataloguing the national security, economic, and biological risks that demand urgent action from policymakers before systems surpass human-level capabilities.
Demis Hassabis on AGI and the Future of Humanity (interview)
TIME (April 2025)
Google DeepMind’s CEO and Nobel laureate argues that AGI could arrive within five to ten years and warns that society’s governance institutions are not prepared for what he calls the most transformative moment in human history.
Misalignment and Loss of Control
Detecting misbehavior in frontier reasoning models
OpenAI (2025)
Research showing that AI reasoning models exploit loopholes when given the opportunity, and that attempts to train away this behavior can cause models to hide their scheming rather than stop.
Alignment faking in large language models
Anthropic (December 2024)
Research showing that a frontier AI model will fake compliance with an overseer’s goals in order to avoid having its own objectives changed during training.
If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All
Eliezer Yudkowsky and Nate Soares (2025)
Two founding researchers of the AI alignment field present the strongest version of the argument that superhuman AI systems would pursue goals incompatible with human survival.
Further Resources
3Blue1Brown (YouTube channel)
For those looking to better understand AI, the first three chapters of this technical course sequence, plus a bonus chapter, offer an accessible introduction to deep learning:
- Chapter 1: But what is a neural network?
- Chapter 2: Gradient descent, how neural networks learn
- Chapter 3: Backpropagation, intuitively
- Bonus chapter: Large Language Models explained briefly
Artificial General Intelligence’s Five Hard National Security Problems
Jim Mitre and Joel B. Predd, RAND Corporation (February 2025)
RAND national security strategists identify five challenges that AGI poses for U.S. security and offer a common framework for evaluating policy responses.
AI Needs You: How We Can Change AI’s Future and Save Our Own
Verity Harding, University of Cambridge (2024)
A former Google DeepMind policy director draws lessons from the space race, IVF, and the internet to argue that democratic publics, not just technologists, should shape how AI is governed.
The Path to AI Arms Control
Henry A. Kissinger and Graham Allison, Foreign Affairs (October 2023)
Drawing on the history of nuclear arms control, Kissinger and Allison argue that the United States and China share a mutual interest in preventing catastrophic AI outcomes and must pursue bilateral safety cooperation before AI capabilities outpace diplomacy.