I first met Will Pearce and Nick Landers when I sat across from them at a group dinner over a year ago. They shared with me the story of how they founded Dreadnode, an offensive AI security company, and it instantly grabbed my attention. Offensive security is rarely spoken of outside a classified environment, and in the cyber domain it’s especially hush-hush. Yet offensive cyber capabilities are highly relevant to strengthening any national security effort: they are how we evaluate environments, ‘stress test’ large language models (LLMs), discover and remediate vulnerabilities, automate security solutions, and so much more.
It was during this and subsequent conversations that we identified a significant gap in the national security space: while Americans have historically been sheltered from kinetic warfare, the U.S. government can’t shield the people from cyber warfare in the same way. Civilians have had a front-row seat to the impacts of cyberattacks—from vulnerability exploitations to data exfiltrations to ransomware attacks. With each attack, those affected inevitably have more questions and concerns about what steps are being taken to secure our world. And while several communities are hard at work to resolve these pain points in cyberspace, the messaging that surrounds these efforts is not compelling.
Not many tech startups lay claim to a policy shop, and I have to wonder if that’s because it’s extremely difficult to establish a common language between policymakers and security engineers. Still, Dreadnode cares about thoughtful and effective policy because AI-enabled cybersecurity is critical to strengthening our national security posture. Establishing norms in this space requires an understanding of what's possible. We are up to the challenge.
Welcome to “From Compute to Congress: Decoding AI Policy,” a blog series where we will explore and develop the AI-enabled cybersecurity playbook through the lens of offensive AI security engineers and researchers.
The concept for this series has been circling in my mind for a while, but it came to fruition after conversations held at RSA in San Francisco last month. There, I observed firsthand a divide between industry and government (more on that below). The intent of this series is to bridge that gap by facilitating a better understanding of AI-enabled cyber capabilities and highlighting how policy can support, or even hinder, the optimization of this technology.
Here’s what you can expect to read about, across three orders of relevance in the policy space: first, AI as an emerging offensive capability in the U.S. national security toolkit; second, the widening divide between industry and government; and third, the existing public-private efforts that are already delivering results.
There are many tools at our disposal to enhance our security posture: traditional military tactics, cyber capabilities, space infrastructure, and diplomatic or soft power, to name a few. Artificial intelligence is quickly becoming one of the most prominent of these resources, and as cyber blurs the traditional norms and rules of engagement that are meant to guide our military, we must be prepared to understand and defend against the offensive capabilities of AI systems.
We are at a critical moment in history, where we are actively shaping AI implementations that can improve our cybersecurity posture and, ultimately, bolster the U.S. national security toolkit. Dreadnode has been exploring one of the most sensitive and misunderstood use cases for AI: developing offensive cyber capabilities.
We hold the view that the use of AI can and will advance national security. What's holding us back from seeing real progress is the lack of comprehensive cyber evaluations to encourage the experimentation and development of offensive agents. We're on a mission to change that, and understanding this at the policy level is a vital piece of the puzzle.
Though government seats at RSA were sparsely filled, one talk in particular resonated with the offensive security community. Alexei Bulazel, the Senior Director for Cyber on the White House National Security Council, pointed out that offensive operations are a critical component of national security. In fact, one of the only ways to strengthen America’s cybersecurity posture is to acknowledge that, contrary to prior recommendations, deterrence in cyberspace is not as achievable as policymakers projected.
“Not responding is escalatory in its own right,” he said. “There is a concern that offensive cyber could be escalatory, but if you continually let the adversary hack you and hack you, that in itself, that’s a norm. We need to find some way to communicate that this is not acceptable.”
The use of AI in offense must be normalized and discussed at the policy level. The more we understand about this ecosystem, the better we can advocate for technology that works as intended and for government policy that supports it. To achieve this, first the gap between industry and government has to be addressed.
On the heels of a whirlwind week in San Francisco, I’ve been nursing what I’d call an RSA information hangover. This massive industry-focused conference at Moscone Center typically draws a crowd of approximately 40,000 attendees, so it can be… overwhelming. But RSA also yields a lot of strategic business insights and highlights the hottest cyber industry themes. The most prominent theme of this year’s conference was undoubtedly AI, which comes as no surprise.
Given the rapid advancement of AI systems, it’s important to highlight the role cyber policy can play in supporting private sector innovation. Most policy talks—led by speakers from the Federal Bureau of Investigation (FBI) and the Cybersecurity and Infrastructure Security Agency (CISA)—focused on the difficulties of information sharing, the ethics of whistleblowing and cyber legislation, and the ongoing, thankless efforts to navigate attacks against critical infrastructure—yes, we’re still talking about Colonial Pipeline and SolarWinds.
But the relationship between the policymaking and engineering communities is already riddled with tension, and the federal government had a much smaller presence at this year’s RSA conference, as noted above. If RSA was any indication of the state of public-private partnerships, we are observing a ‘rebalancing’ of government and industry powers. The more the U.S. government withdraws from industry-led security convenings, the more big tech can fill that vacuum.
This isn’t necessarily good or bad, but it does stress the importance of acknowledging which voices take up the most space in these conversations. As big tech gets bigger, especially with the increase in acquisitions of AI startups, industry carries a lot of weight in policy discussions, and that is a tremendous responsibility not to be taken lightly.
There’s always room for improvement in strengthening our national security toolkit and in balancing industry and government voices. At the same time, there are existing efforts that have yielded results and deserve acknowledgment: the U.S. government has successfully stewarded innovation and information sharing through its investments, partnerships, and even regulatory bills.
Just as R&D funding and partnerships with industry can spur innovation, government regulatory agencies also have the power to facilitate information sharing that will demystify the threat landscape. For better or for worse, regulatory policy is a tried-and-true forcing function to bring industry and policymakers to the proverbial table. Above all, offensive operations need to be a normalized point of discussion as we continue to formalize U.S. cyber policy because our national security depends on it.
Policy should drive innovation, not hinder it. We must work together, and that includes you. Email me directly at daria@dreadnode.io if you want to hear more about Dreadnode's take on national security and AI policy—or with suggestions on what you'd like me to cover in this series.