
From Compute to Congress: Decoding AI Policy

May 15, 2025
Daria Bahrami

I first met Will Pearce and Nick Landers when I sat across from them at a group dinner over a year ago. They shared the story of how they founded Dreadnode, an offensive AI security company, and it instantly grabbed my attention. Offensive security is rarely spoken of outside of classified environments, and in the cyber domain it’s especially hush-hush. Yet offensive cyber capabilities are central to strengthening any national security effort: they are how we evaluate environments, ‘stress test’ large language models (LLMs), discover and remediate vulnerabilities, automate security solutions, and much more.

It was during this and subsequent conversations that we identified a significant gap in the national security space: while Americans have historically been sheltered from kinetic warfare, the U.S. government can’t shield the public from cyber warfare in the same way. Civilians have had a front-row seat to the impacts of cyberattacks, from vulnerability exploitation to data exfiltration to ransomware. With each attack, those affected inevitably have more questions and concerns about what steps are being taken to secure our world. And while several communities are hard at work resolving these pain points in cyberspace, the messaging that surrounds these efforts is not compelling.

Not many tech startups lay claim to a policy shop, and I have to wonder if that’s because it’s extremely difficult to establish a common language between policymakers and security engineers. Still, Dreadnode cares about thoughtful and effective policy because AI-enabled cybersecurity is critical to strengthening our national security posture. Establishing norms in this space requires an understanding of what's possible. We are up to the challenge.

Welcome to “From Compute to Congress: Decoding AI Policy,” a blog series where we will be exploring and developing the AI-enabled cybersecurity playbook from the lens of offensive AI security engineers and researchers. 

The concept for this series has been circling in my mind for a while, but it came to fruition after conversations at RSA in San Francisco last month. There, I saw first-hand the divide between industry and government (more on that below). The intent of this content is to bridge that gap by facilitating a better understanding of AI-enabled cyber capabilities and highlighting how policy can support, or even hinder, the optimization of this technology.

Here’s what you can expect to read about, across three areas of focus in the policy space:

  • Developing AI-enabled cybersecurity 
    • Cybersecurity policy and strategy have been crystallizing over time, and we’re still learning. Artificial intelligence can automate much of the manual cyber labor that is nearly impossible to maintain, let alone scale. Identifying opportunities to automate this work will require careful federal policy language, strategic investments in small businesses, effective and laser-focused public-private partnerships, and research and development (R&D) funding across key technical areas.
  • Strengthening America’s national security posture
    • The national security toolkit is typically measured through the DIME(FIL) model. DIME is the foundational framework of the core instruments of national power: Diplomatic, Information, Military, and Economic, with the FIL expansion representing Finance, Intelligence, and Law Enforcement.
    • AI security, including offensive machine learning, feeds into and strengthens these pillars of power. The connective tissue typically includes: rules of engagement in hybrid kinetic warfare that touch both the public and private sectors; the value of information exchange and centralized resources when mapping the threat landscape; and America’s opportunity to lead and shape the global AI race.
  • Supporting the ecosystem that in turn, supports the work we do
    • We can only advance AI as far as the surrounding infrastructure takes us. Democratizing large language models and expanding this industry requires a reskilling of the workforce, energy supplies with the capacity to power AI at scale, federal regulatory guidelines that can supersede the state-led path we’re on toward patchwork policy, and cognizance of how other countries’ current and future AI regulations will impact our economy.

There are many tools at our disposal to enhance our security posture: traditional military tactics, cyber capabilities, space infrastructure, diplomatic or soft power, just to name a few. Artificial intelligence is quickly becoming a very prominent resource, and as cyber blurs the traditional norms and rules of engagement that are meant to guide our military, we must be prepared to understand and defend against the offensive capabilities of AI systems.

The push to accelerate offensive AI innovation

We are at a critical moment in history, where we are actively shaping AI implementations that can improve our cybersecurity posture and ultimately, bolster the U.S. national security toolkit. Dreadnode has been exploring perhaps one of the most sensitive and misunderstood use cases for AI: developing offensive cyber capabilities. 

We hold the view that the use of AI can and will advance national security. What's holding us back from seeing real progress is the lack of comprehensive cyber evaluations to encourage experimentation with, and development of, offensive agents. We're on a mission to change that, and an understanding of this at the policy level is a vital piece of the puzzle.

Among the few government voices present at RSA, one talk in particular resonated with the offensive security community. Alexei Bulazel, the Senior Director for Cyber on the White House National Security Council, pointed out that offensive operations are a critical component of national security. In fact, one of the only ways to strengthen America’s cybersecurity posture is to acknowledge that, contrary to prior recommendations, deterrence in cyberspace is not as achievable as policymakers once projected.

“Not responding is escalatory in its own right,” he said. “There is a concern that offensive cyber could be escalatory, but if you continually let the adversary hack you and hack you, that in itself, that’s a norm. We need to find some way to communicate that this is not acceptable.” 

The use of AI in offense must be normalized and discussed at the policy level. The more we understand about this ecosystem, the better we can advocate for technology that works as intended and for government policy that supports it. To achieve this, the gap between industry and government must first be addressed.

The divide between industry and government

Coming off a whirlwind week in San Francisco, I’ve been nursing what I’d call an RSA information hangover. This massive industry-focused conference at Moscone Center typically draws approximately 40,000 attendees, so it can be… overwhelming. But RSA also yields a lot of strategic business insights and highlights the hottest themes in the cyber industry. The most prominent theme of this year’s conference was undoubtedly AI, which comes as no surprise.

Given the rapid advancement of AI systems, it’s important to highlight the role cyber policy can play in supporting private sector innovation. Most policy talks, represented by the Federal Bureau of Investigation (FBI) and the Cybersecurity and Infrastructure Security Agency (CISA), focused on the difficulties of information sharing, the ethics of whistleblowing and cyber legislation, and ongoing, thankless efforts to navigate attacks against critical infrastructure (yes, we’re still talking about Colonial Pipeline and SolarWinds).

But the relationship between the policymaking and engineering communities is already riddled with tension, and as noted above, the federal government had a much smaller presence at this year’s RSA conference. If RSA was any indication of the state of public-private partnerships, we are observing a ‘rebalancing’ of government and industry powers. The more the U.S. government withdraws from industry-led security convenings, the more big tech can fill that vacuum.

This isn’t necessarily good or bad, but it does stress the importance of acknowledging which voices take up the most space in these conversations. As big tech gets bigger, especially with the increase in acquisitions of AI startups, industry carries a lot of weight in policy discussions, and that is a tremendous responsibility not to be taken lightly.

The path forward: collaboration

There’s always room for improvement in strengthening our national security toolkit and balancing industry and government voices. At the same time, existing efforts have yielded results and deserve acknowledgment: the U.S. government has successfully stewarded innovation and information sharing through its investments, partnerships, and even regulatory bills.

  • Investments in research and development (R&D) efforts: The U.S. government continues to explore smart R&D funding opportunities, and reports show that federal funding has enabled significant progress toward advancing AI, particularly through university-led research. As funding sources continue to shift, it’s important to recognize that continued investment in scientific R&D brings us that much closer to leading and shaping the global AI race.
  • Federal funding of small businesses: Federal seed funding, by way of the Small Business Innovation Research (SBIR) and Small Business Technology Transfer (STTR) programs, has enabled small businesses to take on competitors with deeper pockets. This R&D funding has fostered innovation across industry verticals, including the detection of vulnerabilities in open-source software and in operational technology (OT) systems, among other AI-enabled security solutions.
  • The priority of laser-focused public-private partnerships: Nimble agencies such as the Defense Innovation Unit have demonstrated the ability to forge partnerships with industry, including Scale AI, in favor of commercial solutions that optimize U.S. defense operations. They have also put out solicitations for partnerships intended to minimize civilian harm. Larger examples of public-private partnerships include DARPA’s AI Cyber Challenge, which pulled in major stakeholders across the U.S. federal government, big tech, universities, and small businesses.
  • Strategic regulations promote transparency: Like it or not, regulatory policy can be a fantastic forcing function for greater transparency. CISA’s rulemaking under the Cyber Incident Reporting for Critical Infrastructure Act of 2022 (CIRCIA) is an excellent example: it establishes a reporting requirement for companies that fall victim to covered cyber incidents, and it positions the federal government to deploy resources, assist in remediating the threat, and warn other potential targets. Yes, CIRCIA has received loads of criticism and will undergo continued evaluation before it takes effect in 2026, and nobody likes imposed requirements. But the point of regulation is to encourage information sharing: more specifically, to identify threat actors, malware variants, and attack models, to expose patterns in this space, and to defend ourselves based on this information.

Just as R&D funding and partnerships with industry can spur innovation, government regulatory agencies also have the power to facilitate information sharing that will demystify the threat landscape. For better or for worse, regulatory policy is a tried-and-true forcing function to bring industry and policymakers to the proverbial table. Above all, offensive operations need to be a normalized point of discussion as we continue to formalize U.S. cyber policy because our national security depends on it.

Policy should drive innovation, not hinder it. We must work together, and that includes you. Email me directly at daria@dreadnode.io if you want to hear more about Dreadnode's take on national security and AI policy, or with suggestions on what you'd like me to cover in this series.