The Real Remediation Pipeline Starts at Design Time
AI can now generate working exploits from a CVE in under 15 minutes. Patching can't keep up. The only defense that moves faster than exploitation is the architectural decisions you made before the vulnerability existed.
The exploitation timeline has compressed dramatically, from roughly 700 days between disclosure and exploitation in 2018 to under 24 hours now. The question is: how do you patch something that has no patch?
The honest answer is you can't. And that's the part we need to sit with.
The numbers are worse than you think
The timeline compression isn't slowing down. According to VulnCheck's data, 32% of known exploited vulnerabilities in the first half of 2025 had exploitation evidence on or before the day the CVE was even issued. Not the day the patch shipped, but the day the CVE was published. A third of exploited vulns were already being weaponized before most teams knew they existed.
Just this past week, MOAK dropped: an agentic AI workflow that generates working exploits from published CVEs in 10 to 15 minutes, at roughly a dollar per exploit. It runs on public models, a pipeline, and a CVE advisory as input. Practically anyone can operate it.
A few days earlier, Anthropic previewed Claude Mythos, a model that autonomously discovered thousands of zero-day vulnerabilities across every major OS and browser, including bugs that survived 27 years of human code review. To exploit a FreeBSD NFS flaw that had been hiding since 2009, it autonomously built a 20-gadget ROP chain split across multiple packets.
We have been thrown into the post-patch era.
Triage was never designed for this
Most security teams still operate in a workflow that assumes time is on their side. A CVE drops. Someone triages it. It gets scored, prioritized, slotted into a sprint. Maybe it makes the next release. Maybe it waits for the quarterly patch cycle. Maybe it sits in a backlog behind forty other items that also scored a 9.1.
That workflow made sense when attackers needed weeks or months to build a working exploit. It does not make sense when the exploit exists before the patch does.
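The mismatch is concrete enough to sketch. The snippet below is illustrative only; the `Finding` shape, its fields, and the sample CVE IDs are hypothetical. It orders findings by exploitation evidence and exposure first, and uses CVSS only as a tiebreaker, which is the opposite of a queue sorted by score alone:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss: float            # base severity score
    known_exploited: bool  # e.g., appears in an exploited-in-the-wild feed
    internet_facing: bool  # asset exposure, from your own inventory

def triage_key(f: Finding) -> tuple:
    # Python sorts False before True, so exploited + exposed findings
    # come first; CVSS only breaks ties within each bucket.
    return (not f.known_exploited, not f.internet_facing, -f.cvss)

findings = [
    Finding("CVE-2025-0001", 9.1, False, False),
    Finding("CVE-2025-0002", 7.2, True, True),
    Finding("CVE-2025-0003", 9.8, False, True),
]

for f in sorted(findings, key=triage_key):
    print(f.cve_id)  # the 7.2 with live exploitation outranks both 9+ scores
```

The point of the sketch is the key function, not the data model: a backlog sorted purely by score puts the actively exploited 7.2 behind two theoretical 9s.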
This is as much an organizational problem as it is a tooling issue. Remediation workflows require sprint planning, change boards, half a dozen people aligned before anything ships. Attackers have none of that friction. They don't have competing priorities. They don't have a backlog. They don't wait for approval.
The gap between how fast exploitation moves and how fast remediation moves has always existed. AI just made it impossible to ignore.
Design is the only thing that moves faster than exploitation
Here's what I keep coming back to: if a third of exploited vulnerabilities are weaponized before you even know about them, then the remediation pipeline can't start at triage. It has to start earlier. Much earlier.
It has to start at design time.
The systems that survive this environment won't be the ones that patch fastest. They'll be the ones that made architectural decisions, months or years before a specific CVE dropped, that limit blast radius by default. Decisions like: how much does any single component trust? What happens when a dependency is compromised? How isolated are the things that handle untrusted input from the things that touch sensitive data?
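Those design questions translate directly into code structure. The sketch below is a minimal, POSIX-only illustration, with function names and limits of my own invention rather than a prescribed pattern: untrusted input is parsed in a child process with a CPU cap, so a compromised or hung parser is confined to a process that holds no credentials and dies on its own.

```python
import multiprocessing as mp
import resource

# POSIX-only sketch: "fork" keeps the example deterministic.
ctx = mp.get_context("fork")

def parse_untrusted(data: bytes, conn) -> None:
    # Child process: cap CPU time so a compromised or looping parser
    # burns out its own sandbox instead of the host service. A real
    # deployment would also drop privileges and apply seccomp/AppArmor.
    resource.setrlimit(resource.RLIMIT_CPU, (2, 2))
    try:
        conn.send({"length": len(data)})  # stand-in for real parsing
    finally:
        conn.close()

def parse_with_blast_radius(data: bytes, timeout: float = 5.0):
    parent_conn, child_conn = ctx.Pipe()
    proc = ctx.Process(target=parse_untrusted, args=(data, child_conn))
    proc.start()
    result = parent_conn.recv() if parent_conn.poll(timeout) else None
    proc.join(timeout=1)
    if proc.is_alive():
        proc.kill()  # an exploited parser dies with its process
    return result

if __name__ == "__main__":
    print(parse_with_blast_radius(b"hello"))  # {'length': 5}
```

None of this blocks the exploit from firing; it decides what the exploit is worth. The parser process has no secrets, no network client, and no filesystem access beyond its own sandbox, which is exactly the kind of decision made months before any specific CVE exists.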
These aren't novel ideas. Defense in depth, least privilege, zero trust at the application layer. Security architects have been saying this for decades. The difference is that it used to be nice-to-have. A best practice you aspired to. Now it's the only thing standing between you and an exploit that was generated before you finished reading the advisory.
The uncomfortable implication
If this is true, if the only reliable defense against zero-day exploitation at machine speed is architectural decisions made long before the CVE exists, then most of the security budget is pointed at the wrong part of the timeline.
We spend enormous energy on detection, triage, and remediation. We spend comparatively little on making sure the systems we're building would be survivable even if a component gets popped.
Nobody is going to out-patch an AI that writes exploits in fifteen minutes. The teams that restructure around this reality, that treat security design review as the first line of defense rather than optional checkpoint, are the ones that will still be standing when the next MOAK or Mythos drops.
The question isn't whether your remediation pipeline is fast enough. It's whether your architecture was ready before you needed it to be.

