AI Policy

From Compute to Congress: The Cyber Layer Beneath the Genesis Mission

December 1, 2025
Daria Bahrami

Last week, the White House released an Executive Order launching the Genesis Mission—a “dedicated, coordinated national effort” focused on addressing some of the most pressing scientific challenges of our time using transformative AI-enabled solutions. But underlying the continued national push for AI innovation and adoption is something that deserves dedicated attention: cybersecurity.

The advancements of AI systems and cyber capabilities are often treated as separate, even competing, focus areas—as if advancing one means neglecting the other. But they are inextricably linked, and it is imperative that any national efforts towards faster, leaner, and more reliable AI advancements strongly echo the understanding that robust cybersecurity is foundational—not an afterthought. 

Dreadnode’s research lies at the intersection of AI and cybersecurity, so we are no strangers to the thesis that AI capability research and development (R&D) is essential to navigating the current digital threat landscape. We've been tracking this policy trajectory in our 'From Compute to Congress' series, and the Genesis Mission represents a significant inflection point. By calling for the advancement of AI-enabled science within the Department of Energy and its national laboratories, the White House is presenting an opportunity to prioritize funding and operational consistency for work that the national laboratories have been pushing forward since their inception.

To that end, the Genesis Mission identifies foundational elements for advancing the state of the art in AI systems. It acknowledges that we, as a nation, must prioritize the development of high-performing AI systems for critical use cases across advanced scientific domains if we want to stay at the forefront of the race for global AI dominance.

One of these foundational elements is the development of “the American Science and Security Platform,” which will enable: testing and development of advanced AI systems built on high-performance computing resources, comprehensive modeling and analysis frameworks, domain-specific foundation models mapped to prioritized scientific domains, and secure access to relevant and scoped datasets. These efforts require consistent funding, staffing, and depoliticization, as advanced AI systems will steer America’s digital posture—not only across scientific advancements, but also in terms of cyber resilience.

The Missing Piece: Adversarial Resilience 

Winning the AI race will require resilient systems that can withstand a quickly evolving digital threat landscape. In the cyber domain, AI systems present uncharted territory: a speed and scale opportunity that can only be tapped into with high-performing Large Language Models, AI agents, and other machine-speed solutions. Dreadnode’s research has revealed emerging AI-enabled cyber attack vectors, including how an adversary might exploit an LLM built into a victim device (such as a Copilot+ PC) to remotely control the target and make the AI system do their dirty work.

Because of the size and nature of our company, Dreadnode has been able to creatively investigate cyber capabilities through Strikes, our in-house platform that allows safe testing of adversarial tactics in an air-gapped environment. Having an AI R&D testbed like Strikes has enabled the development of offensive agents. Our team has reverse engineered open-source developer platforms to identify vulnerabilities, experimented with code execution and automation capabilities using dynamic languages such as Python, and scanned local and remote file systems and repositories (such as those hosted on GitHub) for sensitive data leaks. We’ve also been able to work with advanced AI systems to conduct AI-driven network operations and campaigns, multimodal AI red teaming, and jailbreaking.
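To make one of these workflows concrete, the kind of secret scanning described above can be sketched in a few lines of Python. This is an illustrative toy under stated assumptions, not Dreadnode's actual tooling: the rule names and regexes here are a tiny assumed sample, and production scanners rely on far larger rule sets plus entropy analysis and version-history inspection.

```python
import re
from pathlib import Path

# Illustrative-only patterns (assumptions, not a complete rule set).
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "github_token": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_text(text: str) -> list[tuple[str, str]]:
    """Return (rule_name, matched_string) pairs found in the text."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits

def scan_tree(root: Path) -> dict[str, list[tuple[str, str]]]:
    """Walk a directory tree and scan every readable file for matches."""
    findings = {}
    for path in root.rglob("*"):
        if not path.is_file():
            continue
        try:
            hits = scan_text(path.read_text(errors="ignore"))
        except OSError:
            continue  # unreadable file; skip rather than fail the scan
        if hits:
            findings[str(path)] = hits
    return findings
```

Pointing `scan_tree` at a cloned repository surfaces candidate leaks for human triage; the same loop generalizes to mounted file shares or mirrored remote trees.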

For us, capability development can help security researchers and engineers map the threat landscape and enhance understanding of how threat actors might weaponize the tools we’re building. In turn, this research informs defensive priorities and contributes to the development of more resilient systems. But not all organizations have these capabilities and workflows built in, which makes it harder to rigorously stress-test AI systems for edge cases before they’re released to the public.

That’s one reason why the prioritized and continued support of the Department of Energy and its national laboratories is so important. The Department of Energy currently offers dedicated AI testbeds at seven of its national labs, which have been straining to fill as much of the R&D workflow gap as possible. If the Genesis Mission can successfully draw enough attention to warrant the expansion of nationally funded AI testbeds, it would enable long-overdue investment in critical scientific domains. Specifically, this Executive Order prioritizes R&D efforts across advanced manufacturing, biotechnology, critical materials, nuclear fission and fusion energy, quantum information science, and semiconductors and microelectronics. Each of these sectors presents unique challenges, requires carefully secured datasets, and demands more than a generic, one-size-fits-all AI system.

The Stakes

As the Executive Order suggests, we’re navigating a global digital domain where each nation-state and its proxies have their own AI development pipelines, resulting in asymmetric AI-enabled cyber capabilities. Continuing to operate in the digital domain requires careful consideration of how our adversaries might be able to outperform or compromise our AI systems.

In the case of the prioritized domains listed earlier, any AI integrations or developments will require deliberate tuning and testing. All of this research is contingent upon fully scoped and comprehensive datasets: the quality of the data going in determines how reliable and measurable the outputs coming out will be. The key is to minimize noise. In other words, the data that can help secure a nuclear facility is not the same data that can help you plan dinner. And that cleanup effort makes all the difference in AI system performance.

A Moment of Convergence

For the last few years, we have seen a surge in national calls for investment in AI innovation and adoption. This year alone has brought several Requests for Information (RFIs) seeking industry and stakeholder input, Executive Orders related to AI research and development investments, and the AI Action Plan.

With the launch of the Genesis Mission and the surrounding push for transformational AI solutions, prioritizing an AI-enabled cyber strategy is not optional. Nationally, we are playing digital 4D chess with the rest of the world. Our cybersecurity cannot be bolted on after the breakthroughs arrive.

The Executive Order's focus on domain-specific foundation models and secure datasets is a start. But those systems will only be as strong as the adversarial testing that stress-tests them before deployment. This requires investment in offensive security research—not just within the government labs, but across the broader ecosystem of companies and researchers already doing this work. 

Dreadnode has dedicated its time to building the infrastructure and methodology to stress-test AI systems under adversarial conditions. As the Genesis Mission takes shape, we are eager to see cybersecurity woven into its foundation and to continue to contribute to this broader effort alongside others in the field. 

We don’t lack ambition. We lack the rigorous stress-testing to back it up.
