Colton Malkerson of EdgeRunner AI on How Edge AI Offers Resilience

The dominant mindset in artificial intelligence has been cloud-first: vast compute resources, always-on connectivity and centralized control. But increasingly, technologists are rethinking whether the future of AI must be tied to the cloud at all.
In an interview with Information Security Media Group, Colton Malkerson, co-founder and COO at EdgeRunner AI, discussed the challenges of deploying AI in contested, low-connectivity environments and how edge-first intelligence is being engineered to meet mission-critical demands.
Prior to founding EdgeRunner, Malkerson held leadership roles at Stability AI and Amazon Web Services, bringing deep expertise in enterprise sales, AI infrastructure and government engagement.
Edited excerpts follow:
EdgeRunner is building generative AI tools that run entirely at the edge, a sharp contrast to the cloud-first mindset that has dominated AI. What core problem were you trying to solve when you decided to take this contrarian path?
When we started EdgeRunner, we saw a fundamental disconnect between the environments AI was being built for and the environments where it was most urgently needed. Warfighters, first responders and critical operators often work in low-connectivity or contested environments where sending sensitive data to the cloud is either difficult or unsafe due to data privacy, data security or operational security risks. The “always-online” or “always connected” assumption baked into most AI applications simply doesn’t work for many critical use cases.
Instead of telling customers they needed to bring their data to the AI in the cloud, we decided to bring the AI to the data, running it locally on-premises or at the edge where the data is created or resides. We flipped the model: intelligence at the edge is self-contained, secure and ready to operate with zero dependency on the cloud. That's not just a latency and performance advantage; in defense and other sensitive use cases, it's a requirement.
How do you handle data validation, augmentation and feature engineering for edge models under the strict demands of defense use cases?
We've assembled a 30-billion-token dataset of military data, including doctrine, history, policy, user manuals and other relevant material. This forms the base of our military-specific LLMs, which we fine-tune and optimize for our product. On top of that, we build military occupational specialty, or MOS, and Air Force Specialty Code, or AFSC, specific adapters that are effectively plug-ins to the product for specific users based on role and mission, such as logistics, operations, maintenance, cybersecurity and combat medicine.

We start by working directly with operators and end users to define exactly what "mission relevant" data looks like for their environment or use case. From there, we focus on collecting and validating data within secure, closed systems so it never leaves the trusted perimeter. Sometimes the textbook answer is not the best real-world answer, so it's a collaborative effort with end users to ensure every feature we engineer and every response we give has a direct link to mission success.
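Malkerson did not detail how the role-specific adapters are implemented. The sketch below is purely illustrative: it assumes the MOS/AFSC adapters are LoRA-style modules loaded on top of a locally stored base model via the open-source Hugging Face transformers and peft libraries, and the model and adapter paths shown are hypothetical.

```python
# Illustrative sketch only; EdgeRunner's actual implementation is not public.
# Assumes role-specific LoRA adapters sit on top of a local base model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_MODEL_DIR = "/models/military-base-llm"              # hypothetical local path
ADAPTER_DIRS = {                                           # hypothetical MOS/AFSC adapters
    "logistics": "/models/adapters/logistics",
    "combat_medicine": "/models/adapters/combat_medicine",
}

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL_DIR)
base = AutoModelForCausalLM.from_pretrained(BASE_MODEL_DIR)

# Attach one adapter, then register the rest so they can be swapped per role.
model = PeftModel.from_pretrained(base, ADAPTER_DIRS["logistics"], adapter_name="logistics")
model.load_adapter(ADAPTER_DIRS["combat_medicine"], adapter_name="combat_medicine")

def answer(prompt: str, role: str) -> str:
    """Route a query through the adapter matching the operator's role."""
    model.set_adapter(role)
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=256)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```

In this pattern, a single base model stays resident on the device while small role-specific adapters are swapped in per query, which keeps the on-device footprint manageable for disconnected environments.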
What earned you recognition as an xTech AI Grand Challenge finalist, and how is it shaping your current Department of Defense engagements?
The xTech AI Grand Challenge was looking for solutions that could fundamentally change how AI works in operationally constrained environments. We brought forward a working system, not a slide deck: EVELYN, an AI-powered platform that automates dataset curation at scale, effectively a more secure and cost-effective replacement for expensive services such as Scale AI. EVELYN automatically identifies, classifies and labels objects of interest, reducing the burden of manual dataset creation and validation. This is a huge problem in the military, where drone footage and other data feeds must be manually reviewed and tagged for dataset creation. That real-world readiness resonated with the judges.
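EVELYN's internals have not been published. As a hedged illustration of the pre-labeling workflow Malkerson describes, the sketch below uses an off-the-shelf torchvision detector to draft object annotations for video frames so that human reviewers only confirm or correct them; the frame paths and confidence threshold are assumptions.

```python
# Illustrative auto-labeling sketch, not EVELYN's actual pipeline.
import json
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
preprocess = weights.transforms()
categories = weights.meta["categories"]

def pre_label(frame_paths, score_threshold=0.6):
    """Return draft annotations per frame, keeping only confident detections."""
    annotations = []
    for path in frame_paths:
        img = read_image(path)
        with torch.no_grad():
            pred = model([preprocess(img)])[0]
        keep = pred["scores"] >= score_threshold
        annotations.append({
            "frame": path,
            "boxes": pred["boxes"][keep].tolist(),
            "labels": [categories[int(i)] for i in pred["labels"][keep]],
            "scores": pred["scores"][keep].tolist(),
        })
    return annotations

# Draft labels go to a human reviewer before entering the training set.
print(json.dumps(pre_label(["frame_0001.png"]), indent=2))
```

The design point is that the model does the first pass and humans validate, inverting the manual review-and-tag workflow the military currently relies on.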
Have enterprises grown too reliant on the cloud for AI, and what blind spots do you see in that model?
Absolutely. The cloud has driven incredible innovation, but it's created a monoculture in how we think about deploying AI. When your entire stack depends on centralized compute and constant connectivity, you're inherently vulnerable to outages, latency, bandwidth constraints and, in defense scenarios, active adversary disruption. The blind spot is that this fragility is invisible until it fails, and by then the cost of that failure can be enormous. We're proving that edge-first AI isn't just a defense-sector niche; it's a resilience model every enterprise should be thinking about.
The defense sector often talks about ‘zero trust’ in cybersecurity. Shouldn’t we be aiming for something similar in AI?
I think that's exactly where we need to go. With our platform, not only are we fully disconnected from the internet and external networks, which increases data security and privacy while reducing the attack surface, but you also have visibility into how the model's outputs were generated. You can audit the reasoning, and you can enforce strict guardrails on what the model can and can't do. Running models at the edge makes this much easier. You're not relying on a black-box API in the cloud, which may carry its own biases; instead, you understand the full stack from data intake to decision-making. That level of control is essential for both safety and accountability.
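As an illustration of the kind of guardrails and auditability Malkerson describes, the minimal sketch below wraps a locally hosted model call with a policy check and an on-device audit log. It is not EdgeRunner's implementation; the blocked-topic list, audit file path and model_fn callable are hypothetical.

```python
# Minimal guardrail-and-audit wrapper around a local model call (illustrative only).
import hashlib
import json
import time

BLOCKED_TOPICS = ("launch codes", "personnel records")  # hypothetical policy list

def guarded_query(model_fn, prompt: str, audit_path: str = "audit.log") -> str:
    """Enforce a simple policy check and log every decision locally on the device."""
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        response, decision = "Request refused by policy.", "blocked"
    else:
        response, decision = model_fn(prompt), "allowed"
    record = {
        "ts": time.time(),
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "decision": decision,
        "response": response,
    }
    with open(audit_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return response
```

Because both the model and the log live on the same disconnected device, the audit trail never has to transit a network to be reviewed.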
The line between commercial and military use of AI is blurring fast. As a company operating in this space, how do you navigate the dual-use nature of your tech responsibly?
We consider ourselves a dual-use defense technology company, and we also have enterprise customers. Being dual use actually helps us build better products for the military because our products are also tested and validated by commercial customers and partners. In fact, the DoD specifically prefers to buy commercial off-the-shelf, or COTS, software because it also understands the benefits of having commercial-sector customers. While we are primarily focused on the DoD because it's such a large and complicated market to serve, we will continue developing our commercial-sector partnerships. For too long, the warfighter has been given worse technology that wouldn't meet the bar for enterprise customers. We're changing that.
