With AI Operating in Active Combat, Broad Coalition Urges Congress to Fund AI Reliability Research
“AI has moved from the back office to the battlefield at extraordinary speed” but is “not yet reliable”
Read exclusive coverage in Politico’s Morning Tech here.
Today, a broad coalition of leaders from the national security and defense sectors and the AI policy community called on Congress to authorize and fund $2 billion for a National AI Reliability and Control Initiative in the Fiscal Year 2027 National Defense Authorization Act. In a letter to the leaders of the Senate and House Armed Services and Defense Appropriations Subcommittees, the signatories, including two former directors of the Pentagon’s Joint AI Center, urged Congress to invest in the science that makes AI trustworthy.
The letter focuses on four priorities: (1) Human-AI Decision Integrity Research, (2) Continuous Assurance for Deployed Systems, (3) Next-Generation AI Control Research, and (4) LLM Governance and Independent Evaluation.
“If the United States is prepared to spend approximately $1 billion per day conducting combat operations with AI-enabled systems, it must be prepared to invest a fraction of that sum in the science that determines whether those systems perform as intended,” the signatories wrote. “The warfighters relying on these tools in combat today deserve nothing less.”
The letter was signed by:
- Mark B. Beall, Jr.
Founding Director of AI Strategy and Policy, U.S. Department of Defense Joint Artificial Intelligence Center
- John N.T. “Jack” Shanahan
Lieutenant General (retired), United States Air Force; Former Director, Algorithmic Warfare Cross-Functional Team (MAVEN) and the Joint Artificial Intelligence Center
- Michael S. Groen
Lieutenant General (retired), United States Marine Corps; Former Director, Joint Artificial Intelligence Center
- Stuart Russell
Professor of Computer Science, University of California, Berkeley
- Brad R. Carson
President, Americans for Responsible Innovation
- Philip J. Reiner
Former Senior Director, National Security Council
- Michael Brown
Former Director, Defense Innovation Unit
- Morgan C. Plummer
Former Senior Advisor, Office of the Secretary of Defense
- Brendan Steinhauser
CEO, Alliance for Secure AI
- Professor Max Tegmark
Co-founder, Future of Life Institute
- Sam Hammond
Chief Economist, Foundation for American Innovation
- James Fickel
Founder, Starbloom Capital and the Amaranth Foundation
- Adam Gleave
Founder and CEO, FAR.AI
- Judd Rosenblatt
CEO, Agency Enterprise Studio
The full text of the letter is available here and below.
Dear Chairman Wicker, Ranking Member Reed, Chairman Rogers, and Ranking Member Smith,
The military campaign against Iran has made one fact unmistakable: artificial intelligence is no longer a future capability for the Joint Force. It is a wartime system in active use today, processing intelligence, prioritizing targets, and compressing decision timelines. That reality demands a commensurate investment in the science that makes AI trustworthy enough for the missions commanders are already asking it to conduct. We urge Congress to authorize and fund $2 billion over the Future Years Defense Program (FYDP) for a National AI Reliability and Control Initiative (NAIRCI) in the Fiscal Year 2027 National Defense Authorization Act.
On the first day of Operation Epic Fury, U.S. Central Command (USCENTCOM) struck more than 1,000 targets in Iran using the most advanced AI-enabled targeting infrastructure ever deployed in a major combat operation. Admiral Brad Cooper, Commander of USCENTCOM, confirmed publicly that warfighters are leveraging a variety of advanced AI tools that turn processes that once took hours or days into seconds. The Maven Smart System, integrated with large language model (LLM) capabilities, is now an official Program of Record with a directive to expand adoption by September. AI has moved from the back office to the battlefield at extraordinary speed. However, these AI systems were built for commercial use. They are not yet reliable enough for many Joint Warfighting Functions without human oversight.
NAIRCI would focus on four mission-critical needs and accelerate the ability of commanders and operators to trust AI systems for warfighting. These areas are:
- Human-AI Decision Integrity Research. Fund the defense research enterprise to define what “appropriate levels of human judgment” requires in practice at operational tempo: what operators need in order to detect AI error, including mandatory decision intervals, calibrated confidence displays, and human readiness standards matched to system, domain, and mission context. Establish a Joint AI Readiness Center with dedicated AI red teams that stress-test operators against corrupted data and adversarial deception under realistic time pressure.
- Continuous Assurance for Deployed Systems. Fund runtime monitoring, adversarial stress testing, and continuous evaluation methods for AI systems operating at wartime speed. Current defense research investments addressing operational AI vulnerabilities are funded in the millions against a projected $700 billion private AI infrastructure investment this year, orders of magnitude below what the technology’s wartime role demands.
- Next-Generation AI Control Research. Establish a dedicated defense research campaign to address interpretability, alignment and control systems, and containment challenges posed by increasingly autonomous and self-improving AI. The President’s AI Action Plan directed DARPA-led work on these areas; it remains unfunded at the necessary scale.
- LLM Governance and Independent Evaluation. Develop independent test and evaluation standards, mandatory pre- and post-deployment auditing for all LLMs in intelligence and targeting workflows, and operator education on generative AI failure modes. No comparable T&E framework exists for the LLMs now integrated into Maven and other operational platforms.
The Department has prioritized speed in AI adoption, but speed alone is not velocity. Velocity requires direction. Without the investments outlined above, rapid AI deployment will likely produce unforced errors that undermine strategic objectives, erode public confidence, cost innocent lives, and enable adversary propaganda. NAIRCI ensures the Department moves fast and in the right direction, turning speed into durable military advantage.
A $2 billion investment in NAIRCI over the FYDP represents one percent of the reported $200 billion supplemental request and brings funding back up to 2018 levels. If the United States is prepared to spend approximately $1 billion per day conducting combat operations with AI-enabled systems, it must be prepared to invest a fraction of that sum in the science that determines whether those systems perform as intended. The warfighters relying on these tools in combat today deserve nothing less.
We would appreciate the opportunity to discuss these recommendations with you and your staff. Thank you for your consideration of these views.