AIPN Supports Strong AI Provisions in NDAA, Warns Against Blocking State AI Laws

Today, as Congress considers the Fiscal Year 2026 National Defense Authorization Act (NDAA), the AI Policy Network expressed its support for the strong AI provisions within the bill and warned against the preemption of state AI laws.

“Superintelligent AI may soon shape international power as much as nuclear weapons… [Alignment] concerns should be addressed through a comprehensive investment in AI safety research that supports American innovation and effective testing and evaluation of AI systems,” wrote AIPN President of Government Affairs Mark Beall. “Without these capabilities at the federal level, it would be unconscionable to remove any state authority to address these concerns.”

The full text of the letter is available here and below.

Dear Chairman Wicker, Ranking Member Reed, Chairman Rogers, and Ranking Member Smith,

I write to express our support for the Fiscal Year 2026 National Defense Authorization Act (NDAA) and to share our views on the key provisions crafted by the House and Senate to address the research, development, and deployment of trusted Artificial Intelligence (AI) technology for military requirements. 

The AI Policy Network (AIPN) builds bipartisan support for policies that help America prepare for AI systems of unprecedented scope and capability. AIPN would strongly oppose any attempt to include in the NDAA the preemption of state AI laws, or the imposition of a moratorium on them, without any federal replacement to address critical emerging frontier AI national security and safety concerns. AI capabilities are advancing exponentially, and experts increasingly agree that powerful AI, potentially including superintelligence, could arrive within the next five years. At the same time, powerful American AI technology is proliferating to U.S. adversaries at an alarming rate. As a result, sophisticated American AI tools now risk enabling malign actors to develop WMD-grade cyber weapons, pursue novel pathways to nuclear and biological weapons, and deploy inexpensive lethal autonomous drones. Superintelligent AI may soon shape international power as much as nuclear weapons.

Today’s AI systems are difficult to align and sometimes engage in adversarial actions. As AI systems advance to become more intelligent than humans, these adversarial behaviors could make them uncontrollable. These concerns should be addressed through a comprehensive investment in AI safety research that supports American innovation and effective testing and evaluation of AI systems. Without these capabilities at the federal level, it would be unconscionable to remove any state authority to address these concerns.

As you work to reconcile the House and Senate versions, I also urge you to preserve the work you have done to protect America’s technological edge, promote American AI, and prepare for increasingly powerful AI, including superintelligence.

From the House-passed H.R.3838, AIPN supports:

  • Sec. 235: Initiative on Studying Advanced AI, National Security, and Strategic Competition – This section assesses advanced AI risks, including misalignment, recursive self-improvement, and deception capabilities. It monitors China’s AGI progress, requires crisis preparedness plans, and provides congressional oversight of AI development.
  • Sec. 1515: Strategy to Defend Against Risks Posed by the Use of AI – This section develops defensive strategies against AI-enabled attacks including deepfakes, autonomous weapons, and automated cyber operations targeting critical infrastructure.
  • Sec. 1531: Artificial Intelligence and Machine Learning Security in the Department of Defense – This section requires the Secretary of Defense to develop and implement a Department-wide policy for cybersecurity and governance of AI and Machine Learning (ML) in national defense applications, addressing protection against security threats specific to AI and ML including model serialization attacks, model tampering, data leakage, adversarial prompt injection, model extraction, model jailbreaks, and supply chain attacks.
  • Sec. 1626: Artificial General Intelligence Steering Committee – This section provides for senior leadership analysis of the AGI trajectory and adversary capabilities, and develops a DoD adoption strategy with ethical guardrails and counter-AGI strategies.

From the Senate-passed S.2296, AIPN supports:

  • Sec. 1623: AI Model Assessment and Oversight – This section directs the Chief Digital and Artificial Intelligence Office (CDAO) to develop a standardized evaluation and alignment-risk assessment framework for all DoD AI models.
  • Sec. 1626: Artificial General Intelligence Steering Committee – This section creates a high-level body of both DoD civilian and military leadership to assess AGI trajectories, implications, control strategies, and counter-AGI threats and report to Congress by January 31, 2027 on the DoD adoption strategy of AI with ethical guardrails and counter-AGI elements.
  • Sec. 6081: Guaranteeing Access and Innovation for National Artificial Intelligence Act of 2025 or the “GAIN AI Act of 2025” – This section establishes a “right of first refusal” for U.S. customers before the export of advanced AI chips. 

Thank you for your efforts to complete the NDAA with the strong AI provisions outlined in this letter and for your consideration of these views.