Risk-Based Vulnerability Management Changes the Game

Part of Optiv's Partner Series.


In this guest post, Nathan Wenzler, Chief Security Strategist for Tenable, describes how Risk-Based Vulnerability Management begins with richer data and ends up reducing risk in a more efficient and meaningful way than legacy vulnerability management programs.

 

What’s Old is New Again

 

If you’ve worked in the Information Security space for more than a minute, you know there’s a constant deluge of silver bullets and do-it-all solutions from vendors promising to solve everyone’s security needs. It’s almost become a living meme at major conferences to guess what the buzzword of the year will be and to see who’s going to hype it without addressing real problems with a meaningful solution. For most InfoSec professionals, this is especially frustrating, since we know most organizations still struggle with basic cyber hygiene, which would address far more risk than the latest and greatest miracle product.

 

“But, wait,” you might be saying, “we can’t keep doing the same old thing because the landscape of threats and our own networks is expanding and evolving constantly!” You’d be right to say that, of course, which is why it’s important to make sure we’re evolving our fundamental security practices and programs, too.

 

In that spirit, let’s take a look at how one of the most fundamental and effective tools to reduce risk and improve our security posture is evolving to meet the needs of today’s organizations: vulnerability management.

 

Moving from Legacy Methods to a Risk-Based Approach

 

Vulnerability management isn’t a new tool in the security pro’s toolbox, but the way we’ve approached it hasn’t changed much in 20 years. Even today, many organizations have workflows that look something like this:

 

[Image: typical legacy vulnerability management workflow]

 

  1. Run a network scan against servers and maybe desktops, usually without authentication.
  2. Glance at the findings and see the usual list of missing patches, possibly skimming the configuration issues or informational data that could represent other potential issues and attack vectors.
  3. Export the entire dataset to .csv (or even PDF) and send to the server admin or patching team.
  4. Sleep soundly over the weekend knowing the report is out of your inbox and in the hands of the remediation folks, so there’s nothing left for you to worry about.

 

Admittedly, most formal playbooks don’t spell the process out in such a tongue-in-cheek way, but what we see time and time again is that it plays out like this in more organizations than any of us would care to admit. When we were dealing with a handful of servers in our data centers, this was a fairly reasonable approach. But as the scope and scale of today’s computing environment has expanded into cloud infrastructure, mobile devices, IoT devices, ICS and SCADA systems in Operational Technology environments, web applications, and much, much more, this approach simply doesn’t scale, and it no longer offers a reasonable way to get your arms around the problem of mitigating vulnerabilities.

 

Think of it this way: If we only scanned 100 systems and each had 10 vulnerabilities, we’d have 1,000 problems to address, which is still a lot, but manageable. But in today’s environments, you’re likely assessing thousands of assets, and with proper depth of detail, are identifying far more vulnerabilities than ever before. Sending a report to your remediation teams with hundreds of thousands of vulnerabilities to fix isn’t feasible and is a surefire way to ensure that the problems aren’t getting addressed.
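To make the scale problem concrete, here’s a back-of-the-envelope sketch in Python; the modern-environment asset and per-asset finding counts are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope scale comparison. The modern-environment numbers
# below are illustrative assumptions, not measured data.
legacy_assets, legacy_vulns_per_asset = 100, 10
modern_assets, modern_vulns_per_asset = 10_000, 25  # assumed authenticated-scan depth

legacy_backlog = legacy_assets * legacy_vulns_per_asset   # 1,000 findings
modern_backlog = modern_assets * modern_vulns_per_asset   # 250,000 findings

print(f"Legacy backlog: {legacy_backlog:,} findings")
print(f"Modern backlog: {modern_backlog:,} findings ({modern_backlog // legacy_backlog}x)")
```

Even with conservative assumptions, the backlog grows by orders of magnitude, which is why a raw export to the patching team stops being actionable.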

 

Not only do we have to expand the ways we collect vulnerability information beyond network scans, but we’ve also got to prioritize remediation efforts around the vulnerabilities that present the most technical risk to our most critical assets. This requires reframing the way we approach the problem, and it’s why transitioning from a legacy vulnerability management program to a formal Risk-Based Vulnerability Management (RBVM) program elevates this critical security tool into a powerful, scalable part of your arsenal.

 

There are two keys to making RBVM work:

 

  1. Prioritization: Identify which vulnerabilities pose the most risk to your organization and focus on remediating the most dangerous ones first.
  2. Asset Criticality: Identify which assets in your environment are most important to your mission-critical business functions.

 

In short, we’re identifying the most dangerous vulnerabilities on the most critical assets so we can surface a more focused, concise list of risk points to remediate. This is the fundamental difference between a Risk-Based VM program and a legacy VM effort: it marries the technical risk of the vulnerability itself with the non-technical business context of which assets matter most to your organization, giving you a truer picture of your risk landscape.
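To illustrate the pairing, here’s a minimal conceptual sketch, not any product’s actual algorithm; the CVE examples, the scores and the simple multiplication are assumptions chosen for illustration:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    asset: str
    cve: str
    threat_score: float       # 0-10 technical severity (illustrative values below)
    asset_criticality: float  # e.g., 1 (lab box) to 5 (revenue-critical system)

    @property
    def risk(self) -> float:
        # Simplest possible pairing: technical severity x business context.
        return self.threat_score * self.asset_criticality

findings = [
    Finding("lab-win10",    "CVE-2019-0708",  9.8, 1),
    Finding("erp-db-prod",  "CVE-2019-2729",  9.8, 5),
    Finding("web-frontend", "CVE-2019-11043", 8.1, 4),
]

# Remediate the highest combined risk first, not just the highest severity.
for f in sorted(findings, key=lambda f: f.risk, reverse=True):
    print(f"{f.risk:5.1f}  {f.asset:13s}  {f.cve}")
```

Notice that the lab machine’s 9.8 lands at the bottom of the list once asset criticality is in play, which is exactly the reordering a risk-based program is after.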

 

Of course, if we’re going to evolve our entire program, we also have to evolve the pieces that make the program work. So, let’s discuss how changing the way we prioritize vulnerabilities can better support these efforts.

 

CVSS is Dead, Long Live CVSS

 

In 2005, version 1 of the Common Vulnerability Scoring System (CVSSv1) was released to create a standardized method of identifying the technical severity of vulnerabilities found in software. Over time, revisions to the scoring methodology and the criteria used to determine severity have been released, most recently CVSSv3.1 (released in June 2019). CVSS was the first step toward giving security professionals a common language for discussing the severity of a vulnerability across teams, along with a simple, quantifiable score to measure against. For years, frameworks and regulations have been built around this system, and in most organizations it’s still the primary tool used to measure the severity of a vulnerability.

 

The challenge with CVSS, though, is that it doesn’t scale or change to accommodate the lay of the land for real-world threat scenarios. Once a score is assigned to represent the technical severity of a vulnerability, that score remains fixed going forward. More recent versions of CVSS have attempted to accommodate some of the variables presented in modern infrastructures by adding Temporal and Environmental scores, but these are also fairly static and, frankly, aren’t commonly used in most organizations (and even where they are, they don’t accurately reflect which vulnerabilities real-world threats are actually exploiting).
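For context, here’s what the Temporal adjustment looks like under CVSS v3.1: the base score is multiplied by three metric values and rounded up to one decimal. The multiplier tables below are from the published v3.1 specification; the roundup helper is a simplified stand-in for the spec’s exact routine:

```python
import math

# CVSS v3.1 temporal metric multipliers (per the published specification).
EXPLOIT_CODE_MATURITY = {"X": 1.0, "H": 1.0, "F": 0.97, "P": 0.94, "U": 0.91}
REMEDIATION_LEVEL     = {"X": 1.0, "U": 1.0, "W": 0.97, "T": 0.96, "O": 0.95}
REPORT_CONFIDENCE     = {"X": 1.0, "C": 1.0, "R": 0.96, "U": 0.92}

def roundup(x: float) -> float:
    # Simplified stand-in for the spec's Roundup(): one decimal, rounded up.
    return math.ceil(x * 10 - 1e-9) / 10

def temporal_score(base: float, e: str, rl: str, rc: str) -> float:
    return roundup(base * EXPLOIT_CODE_MATURITY[e]
                        * REMEDIATION_LEVEL[rl]
                        * REPORT_CONFIDENCE[rc])

# A 9.8 Critical with a functional public exploit and an official fix: still 9.1.
print(temporal_score(9.8, e="F", rl="O", rc="C"))
```

Even across the full range of temporal values the score only drifts within a narrow band, and in practice the temporal metrics are rarely re-published as the threat evolves, which is the staticness problem described above.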

 

So, where does that leave us? Well, first off, the number of vulnerabilities identified in various software platforms and operating systems is increasing. Just take a look at some of the numbers the Tenable Research Team identified over the past 20 years, which show how dramatically the number of vulnerabilities has been increasing over time.

 

[Image: growth in the number of disclosed vulnerabilities over the past 20 years (Tenable Research)]

 

In the last three years, we’ve seen nearly three times as many vulnerabilities. When you extrapolate this across the growing number of devices in your environment, it’s easy to see how trying to fix everything at the same time becomes unmanageable.

 

Next, we may try to identify those vulnerabilities which are critical as a way to prioritize and focus our remediation efforts. Most often, what we see is organizations adopting a version of the Payment Card Industry’s guidelines, which require fixing any vulnerability with a CVSS score of 7.0 or higher (high or critical severity). If you applied that threshold to 2019, you’d discover that 56% of the vulnerabilities identified had a CVSS score of 7.0 or higher. When more than half of everything you find falls into your “focused” remediation bucket, you’re not really prioritizing at all, and you’re still not addressing what actually matters to reduce risk.

 

Merely using the CVSS score as a means to prioritize efforts simply isn’t enough. It’s only when we start to apply more detailed and context-specific criteria that we start to create a better data model for identifying what presents a real risk and what doesn’t. Even at a fairly basic level, we can see dramatic differences in the approach. For example, if we looked at which vulnerabilities had known exploits in the wild available for attackers to use, we’d find numbers that look more like this:

 

[Image: share of High and Critical vulnerabilities with known exploits available in the wild]

 

Roughly 20% of High and Critical vulnerabilities have known exploits in the wild. Since the potential for an active attack is so much higher when a known exploit is being shared and/or sold, these vulnerabilities present much more risk than those for which exploits have not yet been discovered and shared with the broader community of cyber criminals and attackers. Integrating even this one simple data point into how we view the true risk a vulnerability poses creates a much more manageable and relevant picture of your threat environment and of where remediation efforts need to be targeted.
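Here’s a minimal sketch of layering that one data point on top of a CVSS threshold. The CVE names, scores and exploit flags are made up for illustration; in practice the exploit_available field would come from a threat intelligence feed:

```python
# Illustrative triage: layer exploit availability on top of a CVSS threshold.
vulns = [
    {"cve": "CVE-A", "cvss": 9.8, "exploit_available": True},
    {"cve": "CVE-B", "cvss": 9.1, "exploit_available": False},
    {"cve": "CVE-C", "cvss": 7.5, "exploit_available": True},
    {"cve": "CVE-D", "cvss": 7.2, "exploit_available": False},
    {"cve": "CVE-E", "cvss": 8.6, "exploit_available": False},
    {"cve": "CVE-F", "cvss": 5.3, "exploit_available": False},
    {"cve": "CVE-G", "cvss": 4.3, "exploit_available": False},
]

pci_style  = [v for v in vulns if v["cvss"] >= 7.0]
exploitable = [v for v in pci_style if v["exploit_available"]]

print(f"CVSS >= 7.0 alone:  {len(pci_style)} of {len(vulns)} to fix")
print(f"Plus known exploit: {len(exploitable)} of {len(vulns)} to fix first")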

 

Effective prioritization comes from a better data model, one which incorporates more than simply the technical severity of the vulnerability itself: exploitability, the availability of exploit kits, real-world threat intelligence and much more. However you build this model, it should represent a dynamic set of criteria that accounts for how the threat landscape changes based on the way attackers are targeting these vulnerabilities. And the more data points you can use, the better your model will represent the true severity of the threat posed by a vulnerability, which in turn gives you a much better basis for deciding what to prioritize and how to drive remediation efforts.

 

To give you an example of what a threat model for vulnerabilities can look like, here’s what we’re doing in our products to calculate a more relevant threat score for the vulnerabilities on your assets.

 

Going Beyond CVSS with Vulnerability Priority Rating (VPR)

 

Conceptually, the components of the data model that are factored into creating a Vulnerability Priority Rating, or VPR, make a lot of sense when you think about it. The “trick” is being able to bring together large numbers of datasets and weight them appropriately in order to correctly reflect the threat posed in the here and now. At its core, here are some of the threat intelligence sources used to make up the VPR model (a simplified sketch of how sources like these might be combined follows the list):

 

  • CVSS: We start with the rating that’s been used in the industry the longest to establish a baseline technical severity.
  • Public Vulnerability Databases: Sources including the National Vulnerability Database and resources provided by NIST and MITRE provide additional context around the scope and technical issues posed by the vulnerability.
  • Tenable Research Team Reconnaissance: We leverage an internal team of ~100 researchers who test vulnerabilities to identify ways to exploit them, research new zero-day vulnerabilities (in 2019 alone, we identified almost 150 unique zero-days) and monitor criminal and attack activity on the dark web and in other parts of the public Internet to determine if the immediate threat posed by a vulnerability is becoming more severe or is waning.
  • Third-Party Threat Intelligence: Getting additional eyes on the behavior of cyber criminals and attackers provides more real-time context around any attack activity that is currently or about to take place. We tap into several private feeds to expand the scope of monitoring that our own research team performs.
  • Historical Trend Analysis: In addition to watching what threats are happening today, we factor in past attack activity and other historical threat patterns that may have taken advantage of a particular vulnerability. These can often be indicators of what future attacks may look like, albeit with slight tweaks.
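As a conceptual sketch only (this is not Tenable’s actual VPR algorithm; the weights, feature names and normalization are all assumptions), here’s how signals like the ones above might be blended into a single dynamic rating:

```python
# Conceptual blend of threat-intelligence signals into a single 0-10 rating.
# NOT Tenable's actual VPR math; weights and features are illustrative.
def illustrative_priority(cvss_base: float,
                          exploit_maturity: float,   # 0-1: PoC, weaponized, in kits...
                          dark_web_chatter: float,   # 0-1: from research/intel feeds
                          historical_trend: float) -> float:  # 0-1: past attack patterns
    weights = {"base": 0.4, "maturity": 0.3, "chatter": 0.2, "trend": 0.1}
    score = (weights["base"] * cvss_base
             + weights["maturity"] * 10 * exploit_maturity
             + weights["chatter"] * 10 * dark_web_chatter
             + weights["trend"] * 10 * historical_trend)
    return round(score, 1)

# Same idea as the Sodin example below: a lower base score can outrank a 9.8
# once active exploitation is factored in.
print(illustrative_priority(9.8, exploit_maturity=0.1, dark_web_chatter=0.0, historical_trend=0.1))
print(illustrative_priority(7.2, exploit_maturity=0.9, dark_web_chatter=0.8, historical_trend=0.7))
```

Because the non-CVSS inputs change as attacker behavior changes, re-running the model naturally moves vulnerabilities up and down the queue over time.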

 

Altogether, we take approximately 150 separate data points across all of the various sources and compile a score that represents the threat we’re seeing in the here and now. And because the monitoring and review process is constant, the VPR can change, dynamically adjusting to reflect any shifts in overall threat activity or the perceived likelihood of exploit. To give you an example of what that looks like, here’s the scoring trend for the vulnerability exploited by one of the more sophisticated and damaging pieces of ransomware we’ve seen: Sodin.

 

[Image: CVSS vs. VPR scoring trend for the vulnerability exploited by the Sodin ransomware, January through August 2019]

 

Note how the CVSS scores never changed, despite the variations in activity over the first eight months of 2019, as opposed to the dynamic adjustments made to the VPR, accounting for the various types and intensities of threat activity identified over time.

 

From a remediation standpoint, this creates a much better methodology for identifying which vulnerabilities should be addressed first. Looking only at the base technical severity of the CVSS score (7.2 or 7.8, depending on which version of CVSS is used), this vulnerability might be judged less of a threat and sent to the back of the line in terms of remediation priority. But when we factor in all the threat intelligence, additional data sets and research insights, we create a much more accurate representation of the risk a vulnerability poses, giving us a head start on the weaknesses most likely to be exploited by ransomware or other attacks.

 

It should be noted that this does shake up the status quo of how we’ve typically done remediation over the past two decades, as it requires us to be more dynamic in our decision making and prioritization efforts and to work toward faster response times for the vulnerabilities that present significant risk. From an overall risk reduction standpoint, however, the change in process means you can eliminate huge amounts of risk while targeting a far smaller number of truly critical vulnerabilities. In our research, we’ve seen organizations reduce the total number of vulnerabilities they need to address by 97% while achieving the same reduction in attack vectors and overall threat exposure as they would by trying to boil the ocean and remediate everything at once.

 

There’s Much More to the Risk-Based Vulnerability Management Approach

 

Prioritizing vulnerabilities with a richer level of threat intelligence data is only one part of the equation. To really move toward a risk-based approach to your vulnerability management program, you’ll need to incorporate more techniques and functionality to help you better deal with today’s environments, such as:

 

  • Asset Criticality: As mentioned above, identifying which assets are most critical to your organization lets you make the technical risk relevant to the business.
  • Expanding Vulnerability Data Collection: You’ll want to add more ways of collecting vulnerability data beyond network scanners. That includes agents, passive monitoring, connectors to cloud platforms and OT querying engines. The more data you have, the better decisions can be made about where to focus remediation efforts.
  • Relevant Metrics: RBVM programs make more use of time-based and business-context metrics as opposed to volumetric data. Think less “There are 22,000 fewer vulnerabilities” and more “We reduced overall risk to the organization by 37% last month.” (See the sketch after this list.)
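To make that last distinction concrete, here’s a tiny sketch of a volumetric metric next to a risk-based one; every number in it is an illustrative assumption:

```python
# Volumetric vs. risk-based reporting (all numbers are illustrative).
vulns_before, vulns_after = 30_000, 8_000   # raw finding counts
risk_before, risk_after = 4_200.0, 2_646.0  # summed risk scores, per the sketches above

print(f"Volumetric: {vulns_before - vulns_after:,} fewer vulnerabilities")
print(f"Risk-based: overall organizational risk down "
      f"{(risk_before - risk_after) / risk_before:.0%} this month")
```

The first number tells leadership how busy the team was; the second tells them how much safer the organization is, which is the conversation an RBVM program is built to support.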

 

It seems like a lot, but we’ve been moving toward this kind of maturity in vulnerability management for a long time. Faced with the modern threat landscape, we need to adopt these kinds of models more quickly than ever before. And if that’s not enough incentive, let me close with a finding from a November 2019 McKinsey on Risk report, in which the firm studied organizations that took a risk-based approach to dealing with threats and concluded that “…by simply reordering the security initiatives in its backlog according to the risk-based approach, [the organization] increased its projected risk reduction 7.5 times above the original program at no added cost.”

 

As you can see, adopting a Risk-Based Vulnerability Management program can make your teams more effective and efficient, reducing the overall risk in your environment and demonstrating tangible cost and resource savings over less mature, legacy approaches to vulnerability management.

 

Turns out you can teach the old dog of vulnerability management new tricks.

Nathan Wenzler
Chief Security Strategist | Tenable
Nathan Wenzler is the Chief Security Strategist at Tenable, the Cyber Exposure company. Nathan has over two decades of experience designing, implementing and managing both technical and non-technical solutions for IT and information security organizations. He has helped government agencies and Fortune 1000 companies alike build new information security programs from scratch, as well as improve and broaden existing programs with a focus on process, workflow, risk management and the personnel side of a successful security program. Nathan brings his expertise in vulnerability management and Cyber Exposure to executives and security professionals around the globe in order to help them mature their security strategy, understand their cyber risk and measurably improve their overall security posture.