The Issue

AI is advancing rapidly, and policy isn’t ready for where it’s going.

AI Progress

AI is advancing rapidly.

Recent progress in AI has led to systems more advanced than almost anyone was expecting. New techniques for training AI systems to reason have impressed AI industry insiders and AI skeptics alike. These techniques have enabled AI models to excel at both abstract and concrete reasoning tasks, such as those involved in solving STEM problems. The best AI models now exceed human experts at answering PhD-level science questions. Perhaps most striking is how these systems are now actively helping researchers accelerate their work, analyze problems, and explore solutions – tasks that would have been far out of reach for AI systems even just a few years ago.

Source: Epoch AI (2025)

These same improvements have also enabled much more proficient “AI agents” that operate autonomously. These AI agents don’t simply respond to a question or statement; instead, once given a goal, they break it down into sub-tasks and pursue them in the world, adjusting course when necessary. For instance, cutting-edge AI systems can now compile a research report when given a simple prompt, which involves searching the web, analyzing sources, and writing the report. The field of AI R&D itself is even being accelerated by new AI capabilities. Recent research indicates that AI agents can outcompete human experts at complex AI R&D challenges when both are given a 2-hour time window.

As impressive as these capabilities are, we have every reason to expect them to keep improving, and fast. Just six years ago, GPT-2 was among the most fluent and coherent AI systems available – yet it struggled to count to ten and wrote text that was borderline incoherent. Two years ago, GPT-4 aced law and medical school entrance exams and acted as autocomplete for computer code, empowering software engineers to program more effectively. These capabilities were a huge step up from the incoherence of GPT-2, but still a far cry from the reasoning and research abilities of AI systems today.

What will another two years, or six years, mean for AI? Many leading AI experts expect that will be enough time to reach artificial general intelligence (AGI).

AGI Looming

Many AI experts expect AGI in the next few years.

Artificial general intelligence (AGI) refers to AI systems that can match or exceed human intelligence across almost all cognitive domains. AGI has been a long-term goal of the AI field since its inception. For most of that time, AGI was thought to be many decades or even centuries away. Yet today, many AI pioneers expect AGI to arrive within the next few years.

Recent quotes from the leaders of frontier AI companies indicate that almost all of them expect AGI within the next few years:

  • Dario Amodei (Anthropic CEO): “At some point we’re going to get to AI systems that are better than almost all humans at almost all tasks; the term I’ve used for it is ‘a country of geniuses in a data center.’… That’s the thing that I think we’re quite likely to get in the next two or three years.”
  • Demis Hassabis (Google DeepMind CEO): “We’ve had a consistent view about AGI being a system that’s capable of exhibiting all the cognitive capabilities humans can… I would say probably like three to five years away.”
  • Elon Musk (xAI CEO): “We’ll have AGI… it’s just a question of when. One could debate, ‘Is it smarter than any human at the end of next year, or is it two years or three years?’ But it’s not more than five years.”
  • Sam Altman (OpenAI CEO): “I think in five years… people are like, ‘Man, the AGI moment came and went.’”

It’s not just company CEOs making these statements. Leading academic researchers in the field are also saying that AGI might be only a few years away:

  • Yoshua Bengio (Turing Award winner and the most cited AI researcher in the world): “Previously thought to be decades or even centuries away, I and other leading AI scientists now believe human-level AI could be developed within the next two decades, and possibly within the next few years.”

While we can’t be confident either way, these predictions from the AI community should be taken seriously. If AGI will be developed in the near future, it demands proactive attention now.

AGI Implications

AGI is a different ballgame.

Current AI systems offer many benefits and opportunities, including productivity-enhancing tools and cancer-screening innovations. But AGI would have implications far beyond these. AGI would present the opportunity to turbocharge scientific and medical progress, acting as “automated virtual scientists” that can outmatch even the best human researchers. Likewise, many jobs that can be performed from behind a computer would become easily automatable, requiring society to navigate the effects on workers.

AGI would also have large implications for national security and the balance of power. If an adversarial nation beats the U.S. to AGI, it could use the power AGI would provide – in technological advancement, economic activity, and geopolitical strategy – to reshape the world order against U.S. interests. Meanwhile, if unconstrained AI systems on the path to AGI proliferate to rogue states or non-state actors, those actors could weaponize them, carrying out automated cyberattacks at scale or conducting bioattacks and other CBRN attacks.

Further, AGI itself presents the risk of humanity losing control of the future entirely. As AI systems have become more intelligent, they’ve demonstrated the potential to engage in adversarial and deceptive behaviors in order to achieve goals that don’t align with the interests of users or even their own designers. In one test, GPT-4 pretended to be a human in order to trick a freelance worker into completing a CAPTCHA for it. In another study, AI systems prompted to strongly pursue a goal would sometimes attempt to disable oversight mechanisms when those mechanisms got in the way – and then lie about what they’d done when questioned. Other examples of adversarial behavior include AI systems attempting to copy their own code onto a new server to avoid being shut down, “faking” alignment with an overseer, and concealing undesirable behaviors rather than correcting them.

With current AI systems, these adversarial behaviors rarely cause major problems. But AGI would change that picture. If autonomous AGI systems that can outthink humans act adversarially in similar ways – disabling “inconvenient” oversight mechanisms and off-switches, strategically “playing nice” before eventually going rogue – they would introduce a new type of threat actor, one that could ultimately become uncontrollable.

As we approach AGI, we need policies that protect America’s AI lead, keep Americans safe, and ensure humanity stays in control of AI.