Risk-Based Vulnerability Management Changes the Game
Part of Optiv's Partner Series.
In this guest post, Nathan Wenzler, Chief Security Strategist for Tenable, describes how Risk-Based Vulnerability Management begins with richer data and ends up reducing risk in a more efficient and meaningful way than legacy vulnerability management programs.
If you’ve worked in the Information Security space for more than a minute, you know there’s a constant deluge of silver bullets and do-it-all solutions from vendors promising to solve everyone’s security needs. It’s almost become a living meme at major conferences to guess what the buzzword of the year will be and to see who’s going to hype it without addressing real problems with a meaningful solution. For most InfoSec professionals this is especially problematic, since we know most organizations still struggle to perform basic cyber hygiene, which would address far more risk than the latest and greatest miracle product.
“But, wait,” you might be saying, “we can’t keep doing the same old thing because the landscape of threats and our own networks is expanding and evolving constantly!” You’d be right to say that, of course, which is why it’s important to make sure we’re evolving our fundamental security practices and programs, too.
In that spirit, let’s take a look at how one of the most fundamental and effective tools to reduce risk and improve our security posture is evolving to meet the needs of today’s organizations: vulnerability management.
Vulnerability management isn’t a new tool in the security pro’s toolbox, but the way we’ve approached it hasn’t changed much in 20 years. Even today, many organizations have workflows that amount to something like this: scan everything, dump the findings into a massive report, hand the report to the remediation teams, hope the fixes happen, then repeat next quarter.
Admittedly, most formal playbooks don’t spell the process out in such a tongue-in-cheek way, but time and time again we see it play out like this in more organizations than any of us would care to admit. When we were dealing with a handful of servers in our data centers, this was a fairly reasonable approach. But as the scope and scale of today’s modern computing environment have expanded into cloud infrastructure, mobile devices, IoT devices, ICS and SCADA systems in Operational Technology environments, web applications and much, much more, this approach simply doesn’t scale, and it no longer gets your arms around the problem of mitigating vulnerabilities.
Think of it this way: If we only scanned 100 systems and each had 10 vulnerabilities, we’d have 1,000 problems to address, which is still a lot, but manageable. But in today’s environments, you’re likely assessing thousands of assets, and with proper depth of detail, are identifying far more vulnerabilities than ever before. Sending a report to your remediation teams with hundreds of thousands of vulnerabilities to fix isn’t feasible and is a surefire way to ensure that the problems aren’t getting addressed.
Not only do we have to expand the ways we collect vulnerability information beyond network scans, but we’ve also got to prioritize remediation efforts around the vulnerabilities that present the most technical risk to our most critical assets. That requires reframing the way we approach the problem, and it’s why transitioning from a legacy vulnerability management program to a formal Risk-Based Vulnerability Management (RBVM) program elevates this critical security tool into a powerful, scalable part of your arsenal.
There are two keys to making RBVM work: understanding the real-world threat each vulnerability poses, and understanding the business criticality of the asset it sits on.
In short, we identify the most dangerous vulnerabilities on the most critical assets so we can surface a more focused, concise list of risk points to remediate. This is the fundamental difference between a risk-based VM program and a legacy VM effort: it marries the technical risk of the vulnerability itself with the non-technical business context of which assets matter most to your organization, creating a truer picture of your risk landscape.
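To make that difference concrete, here’s a minimal sketch of the idea in Python. Everything in it (the placeholder IDs, the example findings and the simple multiplication) is an illustrative assumption, not any vendor’s actual scoring algorithm:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    vuln_id: str     # placeholder vulnerability identifier
    severity: float  # technical severity, e.g. a 0-10 CVSS-style score
    asset: str       # where the vulnerability was found
    criticality: int # business criticality of that asset, 1 (lab box) to 5 (crown jewel)

def risk_score(f: Finding) -> float:
    # Marry technical risk with business context into a single rank.
    return f.severity * f.criticality

findings = [
    Finding("vuln-A", severity=9.8, asset="test-lab-vm", criticality=1),
    Finding("vuln-B", severity=7.5, asset="payment-db", criticality=5),
    Finding("vuln-C", severity=9.1, asset="intranet-wiki", criticality=2),
]

# The 7.5 on the payment database outranks the 9.8 on a disposable lab VM.
for f in sorted(findings, key=risk_score, reverse=True):
    print(f"{f.vuln_id} on {f.asset}: {risk_score(f):.1f}")
```

Even this toy version shows the reordering effect: asset context can pull a “merely high” finding on a crown-jewel system ahead of a critical finding on a throwaway box.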
Of course, if we’re going to evolve our entire program, we also have to evolve the pieces that make the program work. So, let’s discuss how changing the way we prioritize vulnerabilities can better support these efforts.
In 2005, version 1 of the Common Vulnerability Scoring System (CVSSv1) was released to create a standardized method of identifying the technical severity of vulnerabilities found in software. Over time, additional revisions to the scoring methodology and the criteria used to determine severity have been released, most recently with CVSSv3.1 (released in June 2019). CVSS represented the first step toward giving security professionals a common language for discussing the severity of a vulnerability across teams, along with a simple, quantifiable score to measure against. For years, frameworks and regulations have been built around this system, and in most organizations it’s still used as the primary tool to measure the severity of a vulnerability.
The challenge with CVSS, though, is that it doesn’t scale or adapt to the lay of the land in real-world threat scenarios. Once a score is assigned to represent the technical severity of a vulnerability, that score remains fixed going forward. More recent versions of CVSS have tried to accommodate some of the variables of modern infrastructure by adding Temporal and Environmental scores, but these are also fairly static and, frankly, aren’t commonly used in most organizations (and even when they are, they don’t accurately reflect which real-world threats are actually taking advantage of a given vulnerability).
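Even CVSS’s own Temporal metrics illustrate how coarse and static these adjustments are. Here’s a short sketch of the v3.1 temporal equation, using the multiplier tables published in the FIRST.org specification (the base score and metric choices below are made up for the example):

```python
import math

# CVSS v3.1 temporal multipliers, per the FIRST.org specification.
EXPLOIT_CODE_MATURITY = {"X": 1.0, "U": 0.91, "P": 0.94, "F": 0.97, "H": 1.0}
REMEDIATION_LEVEL = {"X": 1.0, "O": 0.95, "T": 0.96, "W": 0.97, "U": 1.0}
REPORT_CONFIDENCE = {"X": 1.0, "U": 0.92, "R": 0.96, "C": 1.0}

def temporal_score(base: float, e: str, rl: str, rc: str) -> float:
    """Temporal = roundup(Base * E * RL * RC), rounded up to one decimal."""
    raw = base * EXPLOIT_CODE_MATURITY[e] * REMEDIATION_LEVEL[rl] * REPORT_CONFIDENCE[rc]
    # round() first to sidestep floating-point noise before the spec's round-up
    return math.ceil(round(raw * 10, 6)) / 10

base = 9.8  # the base score itself never changes once assigned

# Functional exploit code exists, an official fix shipped, reports confirmed:
print(temporal_score(base, e="F", rl="O", rc="C"))  # 9.1
# Exploitation is widespread and no fix exists yet:
print(temporal_score(base, e="H", rl="U", rc="C"))  # 9.8
```

Note the range: a 9.8 bottoms out around 7.8 even with every mitigating factor applied, and nothing in the equation reacts automatically as attacker behavior changes; someone has to re-publish the metric values.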
So, where does that leave us? Well, first off, the number of vulnerabilities identified in various software platforms and operating systems is increasing. Just take a look at some of the numbers the Tenable Research Team identified over the past 20 years, which show how dramatically the number of vulnerabilities has been increasing over time.
In the last three years, we’ve seen nearly three times as many vulnerabilities. When you extrapolate this across the growing number of devices in your environment, it’s easy to see how trying to fix everything at the same time becomes unmanageable.
Next, we may try to identify the critical vulnerabilities as a way to prioritize and focus our remediation efforts. Most often, we see organizations adopt a version of the Payment Card Industry’s guidelines, which require fixing any vulnerability with a CVSS score of 7.0 or higher (high or critical severity). If you applied that threshold to 2019, you’d discover that 56% of identified vulnerabilities had a CVSS score of 7.0 or higher. When more than half the targets you identify fall into your “focused” remediation parameters, you’re still not helping yourself address what actually matters and reduce risk.
Merely using the CVSS score to prioritize efforts simply isn’t enough. Only when we apply more detailed, context-specific criteria do we start to build a data model that distinguishes what presents a real risk from what doesn’t. Even at a fairly basic level, the difference in approach is dramatic. For example, if we looked at which vulnerabilities have known exploits available in the wild for attackers to use, we’d find numbers that look more like this:
Roughly 20% of high and critical vulnerabilities have known exploits in the wild. Since the potential for an active attack is so much higher once an exploit is being shared and/or sold, these vulnerabilities present much more risk than those whose exploits have not yet been discovered and shared with the broader community of cyber criminals and attackers. Integrating even this one simple data point into how we view the true risk a vulnerability poses creates a much more manageable and relevant picture of your threat environment and of where remediation efforts should be targeted.
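In pipeline terms, that’s just one more predicate in the triage query. Here’s a toy simulation (the population size, score distribution and exploit flags are fabricated solely to mirror the rough proportions above, not real CVE data):

```python
import random

random.seed(7)

# Simulate a year's worth of findings: roughly 56% score 7.0 or higher,
# and about 20% carry a known public exploit. All numbers are synthetic.
findings = [
    {
        "cvss": random.uniform(7.0, 10.0) if random.random() < 0.56 else random.uniform(0.1, 6.9),
        "exploit": random.random() < 0.20,  # stand-in for an exploit-intel feed
    }
    for _ in range(10_000)
]

high = [f for f in findings if f["cvss"] >= 7.0]  # the PCI-style severity cut
active = [f for f in high if f["exploit"]]        # add the exploit-aware cut

print(f"all findings:    {len(findings):>6}")  # 10000
print(f"CVSS >= 7.0:     {len(high):>6}")      # still more than half
print(f"+ known exploit: {len(active):>6}")    # a queue a team can work
```

The severity cut alone leaves over half the pile in scope; adding exploit intelligence shrinks the queue by roughly another factor of five.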
Effective prioritization comes from a better data model, one that incorporates more than simply the technical severity of the vulnerability itself. It incorporates concepts such as exploitability, availability of exploit kits, real-world threat intelligence and much more. However you build this model, it should represent a dynamic set of criteria that accounts for how the threat landscape changes based on the way attackers are targeting these vulnerabilities. And the more data points you can use, the better your model will represent the true severity of the threat a vulnerability poses, which in turn gives you a much sounder basis for deciding what to prioritize and how to drive remediation efforts.
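As a structural illustration of what such a data model can look like, here’s a generic weighted blend that can be recomputed whenever the intelligence feeds update. To be clear, the factors, weights and numbers are assumptions for the sketch; this is not the VPR formula:

```python
# Illustrative weights over a handful of threat signals, each normalized 0-1.
WEIGHTS = {
    "base_severity": 0.40,      # the static technical severity
    "exploit_available": 0.25,  # public exploit code exists
    "exploit_kit": 0.15,        # bundled into commodity exploit kits
    "recent_chatter": 0.20,     # recent activity in threat-intel feeds
}

def dynamic_priority(signals):
    """Blend the current signals into a 0-10 rating; rerun as feeds change."""
    return 10 * sum(WEIGHTS[k] * signals.get(k, 0.0) for k in WEIGHTS)

# The same vulnerability, before and after exploit code goes public:
quiet = {"base_severity": 0.78, "exploit_available": 0, "exploit_kit": 0, "recent_chatter": 0.1}
loud = {"base_severity": 0.78, "exploit_available": 1, "exploit_kit": 1, "recent_chatter": 0.9}

print(f"{dynamic_priority(quiet):.1f}")  # 3.3: low priority while dormant
print(f"{dynamic_priority(loud):.1f}")   # 8.9: jumps when attackers move
```

The point isn’t the particular weights; it’s that the output moves when the threat landscape moves, which a static base score never does.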
To give you an example of what a threat model for vulnerabilities can look like, here’s what we’re doing in our own products to calculate a more relevant threat rating for the vulnerabilities on your assets.
Conceptually, the components of the data model that factor into creating a Vulnerability Priority Rating, or VPR, make a lot of sense when you think about it. The “trick” is bringing together large numbers of datasets and weighting them appropriately so they correctly reflect the threat posed in the here and now. At its core, the VPR model draws on a range of threat intelligence sources.
Altogether, we take approximately 150 separate data points across all of these sources and compile a score that represents the threat we’re seeing right now. And because there is a constant monitoring and review process, the VPR can and will dynamically adjust to reflect any shifts in overall threat activity or in the perceived likelihood of exploit. To give you an example of what that looks like, here’s the scoring trend for the vulnerability exploited by one of the more sophisticated and damaging pieces of ransomware we’ve seen: Sodin.
Note how the CVSS scores never changed, despite the variations in activity over the first eight months of 2019, as opposed to the dynamic adjustments made to the VPR, accounting for the various types and intensities of threat activity identified over time.
From a remediation standpoint, this creates a much better methodology for identifying which vulnerabilities should be addressed first. Looking only at the base technical severity of the CVSS score (7.2 or 7.8, depending on which version of CVSS is used), this vulnerability might be judged less of a threat and sent to the back of the remediation line. But when we factor in all the threat intelligence, additional data sets and research insights, we create a much more accurate representation of the risk the vulnerability poses, giving us a head start on the weaknesses most likely to be exploited by ransomware or other attacks.
It should be noted that this does shake up the status quo of how we’ve typically done remediation over the past two decades, as it requires us to be more dynamic in our decision making and prioritization efforts and to work toward faster response times for those vulnerabilities that present significant risk. However, from an overall risk reduction standpoint, the change in process means you can eliminate huge amounts of risk while targeting a far smaller number of truly critical vulnerabilities. In our research, we’ve seen organizations reduce the total number of vulnerabilities they need to address by 97% while achieving the same reduction in attack surface and overall threat exposure as trying to boil the ocean and remediate everything at once.
Prioritizing vulnerabilities with a richer level of threat intelligence data is only one part of the equation. To really move toward a risk-based approach to your vulnerability management program, you’ll need to incorporate additional techniques and functionality to help you better deal with today’s environments.
It seems like a lot, but we’ve been moving toward this kind of maturity in vulnerability management for a long time, and faced with the modern threat landscape, we need to adopt these kinds of models more quickly than ever before. And if that’s not enough incentive, let me close with a finding from a November 2019 McKinsey on Risk report, in which the firm studied organizations that took a risk-based approach to dealing with threats and concluded that “…by simply reordering the security initiatives in its backlog according to the risk-based approach, [the organization] increased its projected risk reduction 7.5 times above the original program at no added cost.”
As you can see, adopting a Risk-Based Vulnerability Management program can make your teams more effective and efficient, reducing the overall risk in your environment and demonstrating tangible cost and resource savings over less mature, legacy approaches to vulnerability management.
Turns out you can teach the old dog of vulnerability management new tricks.