AI Policy

Dreadnode’s Policy Recommendations for the U.S. AI Action Plan

March 26, 2025
Daria Bahrami

The Office of Science and Technology Policy (OSTP) recently invited the public to comment on the national Artificial Intelligence (AI) Action Plan. Dreadnode submitted its recommendations on March 15 in response to this request for information (RFI). Today, we are pleased to share a copy of our AI policy recommendations, which advocate for the integration and advancement of AI tools to strengthen America’s national security apparatus.

View Dreadnode’s AI Action Plan Recommendations [PDF]

Dreadnode’s mission is simple, yet its pursuit is anything but. AI red teaming and offensive AI capabilities are powerful tools that can proactively advance our national security posture and that of our allies. This forward-leaning approach is intuitive to many practitioners in the space; still, it doesn’t always translate into policy. In fact, security policy has historically won support by responding to known threats, not forecasted ones.

As we continue to advance the state of offensive security, Dreadnode is advocating for a national policy framework that thoughtfully supports the following priorities: leveraging AI to protect America and attacking AI to find its limits.

Leveraging AI to protect America

Today’s cyber threat landscape is at an inflection point. From adversarial nations to sophisticated criminal enterprises, American cyberspace is under continuous attack. Dreadnode advocates for harnessing the power of AI to defend the nation. Our work emphasizes AI-enabled offensive security solutions that can anticipate and automate national defense measures, particularly in cyberspace. To achieve this, we recommend the following policy-level initiatives.

  • Facilitate the testing and adoption of Automated Vulnerability Discovery and Remediation (AVDR) solutions:
    • Test the winning Cyber Reasoning Systems (CRSs) from DARPA’s Artificial Intelligence Cyber Challenge (AIxCC) on designated software test beds; 
    • Operationalize CRSs that perform at or above designated performance thresholds; and
    • Incentivize the private sector to adopt AVDR solutions.
  • Strengthen U.S. AI-Enabled Military Capabilities:
    • Consolidate the efforts of the Chief Digital and Artificial Intelligence Office (CDAO), the Defense Digital Service (DDS), the Defense Innovation Unit (DIU), and the Strategic Capabilities Office (SCO) without losing the inherent agility of DDS, DIU, and SCO;
    • Establish an AI Capabilities Council, integrating small businesses and private AI labs, to identify real-world machine learning solutions in support of U.S. military forces; and
    • Maintain the TRAINS Taskforce so that it incorporates, or runs in parallel with, the National Science Foundation’s Planning Grants to Create Artificial Intelligence (AI)-Ready Test Beds, for the sake of transparency in resource sharing and AI evaluations.
  • Streamline bureaucracy through AI-enabled automations, such as code translation efforts for software migration and cross-platform development:
    • Establish a centralized federal effort focused on automated code translation, such as the Code Continuity Task Force. This would expand upon existing but disparate efforts to modernize legacy software programs.

Attacking AI to find its limits

Dreadnode believes that today’s AI systems barely scratch the surface of their potential. As systems evolve, exploring their full range of capabilities will require more advanced testing and evaluations. Our team understands that adversarial machine learning research and testing of AI systems are critical to exposing security gaps, identifying solutions, and optimizing AI integrations into our broader ecosystems. To accomplish this, we recommend rebranding the Artificial Intelligence Safety Institute (AISI) as NIST’s Exploration of AI Advancement (NEXA).

  • Renaming the Artificial Intelligence Safety Institute (AISI) as NIST’s Exploration of AI Advancement (NEXA) would position the institute to lead interagency AI model evaluation and performance metrics, strengthening America’s role as the global leader in scientific AI standards;
  • Under NEXA, partnerships with the DoD’s CDAO would be formalized to identify and collaborate with test and evaluation (T&E) partners;
  • The AISI Consortium, with over 200 members, would be rebranded as the NEXA Consortium and take on a Scientific Advisory Board, enabling broader interagency involvement; and
  • An AI Red Teaming Hub would be established within NEXA, wherein participants from federal agencies like CISA, the intelligence community, and private AI labs could streamline the exchange of classified threat intelligence through dedicated sharing channels.

AI has the power to reshape and advance offensive security capabilities. With the right policy framework in place, the U.S. government can empower the private sector to prioritize innovation, rigorous testing, and strategic investment in AI. Only then can we improve America’s national security posture, lead the global race for technological advancement, and leverage AI’s transformative capabilities to the nation’s strategic benefit.