VulnCon 2026: From Finding Vulnerabilities to Preventing Them
Where vulnerability management meets secure design.

TL;DR: VulnCon 2026 was all about data quality, root causes, and context. The CVE program is entering a "quality era," pushing for better, machine-readable records. MITRE is investing heavily in CWE root cause mapping so teams understand why vulnerabilities happen, not just that they happen. AI showed up everywhere, from agentic triage tools to AI-generated bug reports flooding PSIRTs. Prioritization remains the unsolved problem, and multiple talks argued that organizational and architectural context is the missing ingredient. Supply chain risk data was sobering (88% of analyzed binaries had vulnerabilities), and the exploit timeline keeps compressing. The through-line: the industry is moving from "find and fix" toward "understand and prevent." That shift toward prevention, root causes, and design-time decisions is exactly the direction we have been building toward at DevArmor.
Last week we were in Scottsdale for CVE/FIRST VulnCon 2026, the annual gathering of vulnerability management practitioners, researchers, and policymakers. Over 500 people showed up across four days to talk about CVE quality, exploit prediction, coordinated disclosure, AI in security, and where the whole vulnerability ecosystem is headed.
DevArmor lives in the application security and secure design space, while VulnCon is squarely focused on vulnerability and exposure management. But that's exactly why it was worth attending. A lot of the conversations at VulnCon are upstream or downstream of the problems we work on every day, and several of the themes this year confirmed something we've been saying for a while: the industry is starting to realize that finding vulnerabilities was never the hard part. Here is what stood out.
The CVE "Quality Era"
If there was one overarching theme at VulnCon this year, it was data quality. Multiple sessions tackled the idea that CVE records, as they exist today, are not good enough for the downstream consumers who depend on them.
Lindsey Cerkovnik from CISA and Alec Summers from MITRE opened the conference with a session titled "The CVE Program Quality Era," making the case that strengthening trust and impact in global vulnerability data is the program's current priority. Jerry Gamblin (Cisco) and Jay Jacobs (Empirical Security) introduced a Data Quality Assessment Framework they're calling "DQAF" (pronounced "decaf"). It separates the quality of the CVE record schema from the quality of how CNAs actually populate those fields, scoring each across four dimensions: completeness, accuracy, consistency, and machine usability.
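To make those dimensions concrete, here is a minimal sketch of what field-level quality scoring could look like. The field names, weights, and checks are our own illustration, not the actual DQAF rubric:

```python
# Hypothetical sketch of CVE record quality scoring in the spirit of DQAF.
# Field names and heuristics are illustrative, not the real framework.

REQUIRED_FIELDS = ["cve_id", "description", "affected", "references", "cwe", "cvss"]

def completeness(record: dict) -> float:
    """Fraction of expected fields that are present and non-empty."""
    present = sum(1 for f in REQUIRED_FIELDS if record.get(f))
    return present / len(REQUIRED_FIELDS)

def machine_usability(record: dict) -> float:
    """Rough check: structured, parseable fields beat free text."""
    score = 0.0
    if isinstance(record.get("affected"), list):
        score += 0.5  # structured product/version ranges, not prose
    if record.get("cvss", {}).get("vectorString"):
        score += 0.5  # a parseable CVSS vector, not just a bare number
    return score

record = {
    "cve_id": "CVE-2026-0001",  # hypothetical record
    "description": "Improper authorization in the example service.",
    "affected": [{"product": "example", "versions": ["< 2.4"]}],
    "references": ["https://example.com/advisory"],
    "cwe": "CWE-862",
    "cvss": {"baseScore": 8.1, "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:N"},
}

print(completeness(record))       # 1.0 -- all expected fields populated
print(machine_usability(record))  # 1.0 -- structured, parseable fields
```

The point of separating schema quality from population quality is visible even in this toy: a record can be complete (every field filled) yet score poorly on machine usability if the fields are free text a downstream tool cannot parse.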
Bob Lord and Jay Jacobs also ran a session called "CVE as a Product: Inside the Consumer Working Group," which was refreshing. The framing was simple: the CVE ecosystem is a global public resource, but it has rarely been treated like a product with users who have real needs. The Consumer Working Group is trying to change that with structured user research, analysis of downstream workflows, and recommendations for more predictable, machine-friendly records.
This matters beyond vulnerability management. If the base layer of vulnerability data is incomplete or inconsistent, everything built on top of it suffers: scoring systems, prioritization engines, compliance workflows, and any AI or automation that ingests CVE data. Better inputs lead to better outputs. We think about that a lot at DevArmor, where we apply it to threat modeling rather than CVE records. Petra wrote about this recently in her piece on how input structure is the most important variable in LLM-assisted threat modeling. The same principle applies here.
Root Cause Mapping: From "What" to "Why"
One of the most substantive sessions was the deep-dive workshop on CVE-to-CWE root cause mapping, led by Connor Mullaly and Steve Christey Coley from MITRE. The core argument is that knowing a vulnerability exists is not enough. Teams need to understand why it exists in order to prioritize, remediate, and prevent recurrence.
Alec Summers put it clearly in a quote captured by ChannelLife after the conference: "What's changed is that CWE is now becoming a more integral part of vulnerability disclosure itself, as the value of transparent root-cause mapping is more widely appreciated. Simply knowing that a vulnerability exists isn't enough; teams need to understand why it exists in order to prioritize, remediate, and prevent recurrence."
This is significant because it signals a shift in how the vulnerability ecosystem thinks about its purpose. Historically, CVE has been about identification: this vulnerability exists, in this product, with this severity. CWE root cause mapping adds a second question: what category of weakness produced this vulnerability, and how do we stop making the same mistake?
That second question is where the conversation starts to overlap with what we work on. If you trace the root cause of a vulnerability back far enough, you often land on a design decision. A missing authorization check, a broken trust boundary, an assumption about how data flows between services that turned out to be wrong. These are not things a scanner catches after the fact. They are decisions made (or not made) at design time. The fact that the CVE ecosystem is now investing in understanding root causes is a healthy sign that the industry is thinking beyond detection.
These are not things a scanner catches after the fact. They are decisions made (or not made) at design time.
Their companion session, "From Roadmap to Results: Measuring CWE Adoption to Enable Prevention," went further. The title tells you everything: the goal is prevention, not just classification. That is a direction we strongly agree with.
AI Was Everywhere (and Honestly Discussed)
AI showed up in almost every corner of the agenda, and to the credit of the organizers, the discussions were practical rather than hype-driven.
The best thing cybersecurity professionals can do for their AI colleagues is to be clear about which existing norms and processes still apply, rather than building parallel processes that duplicate what already works.
Jonathan Spring from CISA gave a keynote on Thursday titled "AI Systems Are Software Systems," making the case that many of the vulnerability management processes we already have (SBOM, SSDF, coordinated disclosure, triage) work perfectly well for AI-related vulnerabilities. The best thing cybersecurity professionals can do for their AI colleagues, he argued, is to be clear about which existing norms and processes still apply, rather than building parallel processes that duplicate what already works.
On the other end, Khushali Dalal from VulnCheck ran an interactive session called "AI Is Writing Your Bug Reports. Can You Tell?" that addressed the surge of AI-generated vulnerability submissions. These reports are often polished and technically plausible but sometimes entirely wrong. The session walked participants through real-world-inspired reports and asked them to figure out which were human-written, AI-generated, or mixed. The challenge for PSIRT teams is no longer detecting AI use; it is preserving signal quality as AI becomes embedded in researcher workflows.
Chris Farrell and Raaghavv Devgon from Salesforce presented on using agentic AI to scale PSIRT triage, and Snir Ben Shimol from ZEST Security talked about what they learned when AI analyzed tens of millions of vulnerabilities. Jorge Gimenez from Kraken shared their approach to automating vulnerability triage context retrieval with AI agents.
The takeaway across all of these sessions: AI is a tool that amplifies whatever you point it at. Point it at well-structured vulnerability data and you get better triage. Point it at poorly structured data and you get faster noise. This is the same dynamic we see in threat modeling. The model is only as good as what you feed it.
Prioritization Is Still the Unsolved Problem

Several sessions dug into the fact that having more vulnerability data has not made prioritization easier.
GitHub's Sophia Sanles-Luksetich and Zachary Goldman presented "Flipping the Criticality Funnel," describing how they built a unified risk scoring model that normalizes findings across 20+ heterogeneous sources and hundreds of thousands of daily alerts. Their insight: combining CVSS with threat-driven metrics like EPSS and KEV, plus asset-specific context, is what turns raw findings into something actionable. When your critical alerts outnumber every other severity, you haven't prioritized anything.
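A toy version of this kind of blending illustrates why severity alone fails. This is our own sketch, not GitHub's actual model; the weights and the specific combination are illustrative assumptions:

```python
# Illustrative sketch (NOT GitHub's model) of blending base severity (CVSS),
# threat signals (EPSS, KEV), and asset context into one priority score.

from dataclasses import dataclass

@dataclass
class Finding:
    cvss: float               # base severity, 0-10
    epss: float               # estimated exploitation probability, 0-1
    on_kev: bool              # listed in CISA's Known Exploited Vulnerabilities
    internet_facing: bool     # reachable from outside the perimeter
    asset_criticality: float  # org-assigned business value, 0-1

def priority(f: Finding) -> float:
    """Severity sets the ceiling; threat and context do the separating."""
    threat = max(f.epss, 1.0 if f.on_kev else 0.0)  # KEV listing overrides EPSS
    exposure = 1.0 if f.internet_facing else 0.4    # reachable assets rank first
    return (f.cvss / 10) * threat * exposure * (0.5 + 0.5 * f.asset_criticality)

# A "critical" CVSS score on an internal, low-value asset with no exploit
# activity ranks below a "high" on an exposed, business-critical one.
quiet_critical = Finding(cvss=9.8, epss=0.01, on_kev=False,
                         internet_facing=False, asset_criticality=0.2)
active_high = Finding(cvss=7.5, epss=0.6, on_kev=True,
                      internet_facing=True, asset_criticality=0.9)

assert priority(active_high) > priority(quiet_critical)
```

Even this toy model inverts the naive CVSS ordering, which is the whole argument of the talk: without threat and context multipliers, the criticality funnel never narrows.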
Ertugrul Yaprak and Mehmet Kilic from Picus Security made a related point in their talk "Organizational Context Matters," arguing that security control effectiveness matters for vulnerability prioritization. Not every vulnerability is equally dangerous in every environment, and prioritization that ignores organizational context produces the wrong ranking.
This is a problem we think about differently. Most prioritization conversations start after the vulnerability is found and ask: which of these should I fix first? We think the more interesting question is: what architectural decisions would have made this vulnerability irrelevant, or at least contained its blast radius? That is not a vulnerability management question. It is a design question. But it is encouraging to see the vulnerability management community acknowledging that context, especially organizational and architectural context, is essential for making good decisions.
We think the more interesting question is: what architectural decisions would have made this vulnerability irrelevant, or at least contained its blast radius?
The Weaponization Gap

Saeed Abbasi from Qualys presented "The Weaponization Gap: What 20 Million KEV Detections Reveal About Edge Remediation," adding more evidence to something the industry has been waking up to: the window between disclosure and exploitation keeps shrinking.
VulnCheck's Patrick Garrity and Wade Sparks reinforced this in their session on identifying exploited and likely-to-be-exploited vulnerabilities. Scott Moore, also from VulnCheck, pushed back on the popular narrative with "The Myth of the Meteoric Rise in Vulnerabilities," arguing that the picture is more nuanced than the raw CVE count growth suggests.
Regardless of where you land on the exact numbers, the directional trend is clear: the time available to respond to a new vulnerability is getting shorter. We have written about this extensively, particularly around how AI-generated exploits like MOAK can produce working exploits from published CVEs in under 15 minutes. When the exploit exists before the patch does, remediation workflows that start at triage are fighting a losing battle. The only defense that moves faster than exploitation is the architectural decisions made before the vulnerability existed.
The only defense that moves faster than exploitation is the architectural decisions made before the vulnerability existed.
VEX and Transparency
NVIDIA had a strong presence, with Jessica Butler and Kristina Joos presenting on automating VEX (Vulnerability Exploitability Exchange) for scalable, context-aware security, and Kaajol Dhana covering container release automation for regulated environments. The VEX conversation has matured considerably. It is no longer about whether machine-readable exploitability context is useful; it is about how to automate it at scale and integrate it into existing workflows.
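For readers who haven't seen one, a VEX statement is compact: a supplier asserting, in machine-readable form, whether a vulnerability in a bundled component actually affects their product. Here is a minimal OpenVEX-style example; the product identifier is hypothetical and the structure is simplified for illustration:

```python
# Minimal OpenVEX-style statement (simplified for illustration): a supplier
# asserting that a CVE in a bundled component does not affect their product,
# with a machine-readable justification a downstream scanner can act on.

import json

statement = {
    "@context": "https://openvex.dev/ns/v0.2.0",
    "statements": [{
        "vulnerability": {"name": "CVE-2026-0001"},          # hypothetical CVE
        "products": [{"@id": "pkg:oci/example-app@v2.4"}],   # hypothetical purl
        "status": "not_affected",
        "justification": "vulnerable_code_not_in_execute_path",
    }],
}

print(json.dumps(statement, indent=2))
```

The value is in the `status` and `justification` fields: a scanner that ingests this can suppress the finding automatically instead of a human re-litigating "is this reachable?" for every deployment.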
The "Three Musketeers" session from Tharros Labs and CISA, covering the interplay between CVE, CSAF, and VEX, was a good overview of how these standards are meant to work together. For organizations that consume vulnerability data at scale, this interoperability layer is becoming critical.
International and Policy Dimensions
VulnCon opened with a joint session between CISA and ENISA, with Lindsey Cerkovnik and Nuno Rodrigues Carvalho discussing their shared commitment to the CVE program, program diversification, internationalization, and infrastructure modernization. Cerkovnik described CVE as a priority for CISA and urged AI companies to play a larger role as AI tools become more important in identifying vulnerabilities.
CERT.PL shared lessons from Poland's experience as a national CSIRT acting as a CVD (Coordinated Vulnerability Disclosure) hub, and JPCERT/CC ran a full-day CVD tabletop exercise. The global coordination side of vulnerability management is complex and often underappreciated, and it was good to see it given prominent space.
What This Means for Us
The push toward root cause mapping says: knowing what is broken matters less than knowing why it broke. The data quality conversations say: better structured inputs produce better outputs. The prioritization discussions say: context, especially business and architectural context, is the missing ingredient. The AI sessions say: automation amplifies whatever you feed it, good or bad.
None of these are arguments for any particular product, including ours. They are observations about where the vulnerability ecosystem is heading. And the direction is toward understanding systems more deeply, earlier in their lifecycle, with better structured information. That is the same direction we have been building toward.
We left Scottsdale more convinced than before that the gap between vulnerability management and application security is narrowing. The people building CVE quality frameworks and the people building threat modeling tools are asking versions of the same question: how do we move from reacting to vulnerabilities to preventing the conditions that create them?
That is a question worth working on together.