
Secure SDLC Lessons Learned: #2 Assessment Toolchain

March 22, 2017

In this blog series, I discuss five secure software development lifecycle (SDLC) “lessons learned” from Optiv’s application security team’s conversations with our clients in 2016. Please read the previous post covering lesson #1, the application catalog.

Secure SDLC Lesson 2: Assessment Toolchain

Most organizations would agree that the ultimate IT goal is maintaining a fast, predictable flow of planned work (e.g. projects, scheduled changes) that achieves business goals while minimizing the impact of unplanned work (e.g. bug fixes, outages). Security assessment activities should be part of planned work, and to accomplish that, the right tools must be selected. Once selected, the tools must be properly configured and integrated into the secure SDLC program to be truly effective.


Multiple studies have shown that identifying and remediating security flaws as early as possible in the development lifecycle is the least costly option. This is often referred to as “shifting security left.” Enabling developers to run static code analysis tools from their IDEs to identify possible security weaknesses is one common way to achieve the shift. The integration of dynamic, runtime and interactive assessment tools into various pre-production stages of the SDLC is another common approach.
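To make the “shift left” concrete, the sketch below shows the core idea behind IDE-stage static analysis: walking a parsed syntax tree and flagging known-risky calls before code is ever committed. This is a minimal illustration using Python’s standard `ast` module, not a real SAST product; the rule set is an assumed, illustrative subset.

```python
import ast

# Calls commonly flagged by SAST rules (illustrative subset, not a real ruleset).
RISKY_CALLS = {"eval", "exec", "pickle.loads"}

def find_risky_calls(source: str):
    """Return (line_number, call_name) for each risky call found in the source."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            func = node.func
            if isinstance(func, ast.Name):
                name = func.id
            elif isinstance(func, ast.Attribute) and isinstance(func.value, ast.Name):
                name = f"{func.value.id}.{func.attr}"
            else:
                continue
            if name in RISKY_CALLS:
                findings.append((node.lineno, name))
    return findings

sample = "import pickle\ndata = pickle.loads(blob)\nresult = eval(user_input)\n"
print(find_risky_calls(sample))  # [(2, 'pickle.loads'), (3, 'eval')]
```

Real tools layer data-flow analysis and framework awareness on top of this pattern, which is why matching on call names alone produces the false positives discussed later.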

That said, not all tools are created equal. SAST, DAST and IAST tools support a wide variety of server-side and client-side languages and frameworks, but each covers a different subset. Some include features like incremental code scanning and tunable rules; others offer integration with build processes and defect tracking systems. Separate must-haves from nice-to-haves, and if possible avoid vendor lock-in.
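One lightweight way to structure that must-have/nice-to-have separation is a scoring matrix: must-haves are pass/fail, nice-to-haves contribute a weighted score. The criteria and weights below are hypothetical examples, not a recommended checklist.

```python
# Hypothetical selection criteria: adjust to your portfolio's languages and pipeline.
MUST_HAVES = {"supports_java", "ci_integration"}
NICE_TO_HAVES = {"incremental_scan": 3, "tunable_rules": 2, "defect_tracker_sync": 2}

def score_tool(features):
    """Return None if any must-have is missing, else the weighted nice-to-have score."""
    if not MUST_HAVES <= features:
        return None
    return sum(weight for feature, weight in NICE_TO_HAVES.items() if feature in features)

tool_a = {"supports_java", "ci_integration", "incremental_scan"}
tool_b = {"supports_java", "tunable_rules", "defect_tracker_sync"}
print(score_tool(tool_a))  # 3
print(score_tool(tool_b))  # None -- fails the ci_integration must-have
```

Treating must-haves as disqualifiers rather than heavily weighted scores keeps a tool with many minor features from masking a fundamental gap.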

Also understand the capabilities and limitations of the tools in question. Every tool has gaps, and it’s important to know where they are and how to address them. Consider conducting tool bake-offs against a representative subset of your application code to determine each tool’s suitability for your environment.
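A bake-off is easiest to judge when the test corpus contains seeded, known vulnerabilities, so each scanner’s output can be reduced to precision (how much of what it reports is real) and recall (how much of what is real it reports). A minimal sketch, with made-up finding identifiers:

```python
def bake_off_metrics(reported, known_vulns):
    """Compare a scanner's findings against a seeded ground-truth vulnerability set."""
    true_positives = len(reported & known_vulns)
    precision = true_positives / len(reported) if reported else 0.0
    recall = true_positives / len(known_vulns) if known_vulns else 0.0
    return precision, recall

# Hypothetical seeded flaws and one scanner's output against them.
known = {"sqli-login", "xss-search", "xxe-upload", "csrf-profile"}
scanner = {"sqli-login", "xss-search", "hdr-noise-1", "hdr-noise-2"}
p, r = bake_off_metrics(scanner, known)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.50 recall=0.50
```

Running the same corpus through each candidate tool makes the trade-off between noise and coverage directly comparable.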


Once tool selection is complete and capabilities and limitations are identified, the next step is tool configuration. This step is essential for minimizing the number of false positives while maximizing code coverage. Often a range of tools with overlapping capabilities is needed to achieve complete, manageable coverage and to build confidence that each tool is functioning as expected.
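Much of that configuration work in practice is maintaining a reviewed suppression list so that accepted, triaged findings stop resurfacing on every scan. Below is an assumed sketch of that pattern; the rule IDs are Bandit-style identifiers and the file paths are hypothetical.

```python
# Hypothetical suppression list, as might live in a repo-level config file.
# Each entry is a (rule_id, path) pair that a human has reviewed and accepted.
SUPPRESSIONS = {
    ("B303", "src/legacy/hash_compat.py"),  # reviewed: legacy hash shim, accepted risk
    ("B603", "scripts/build.py"),           # reviewed: subprocess use is intentional
}

def filter_findings(findings):
    """Drop findings whose (rule_id, path) pair is an accepted suppression."""
    return [f for f in findings if (f["rule"], f["path"]) not in SUPPRESSIONS]

raw = [
    {"rule": "B303", "path": "src/legacy/hash_compat.py", "severity": "medium"},
    {"rule": "B608", "path": "src/api/orders.py", "severity": "high"},
]
print(filter_findings(raw))  # only the B608 finding survives
```

Keeping suppressions in version control, with a comment justifying each one, makes the tuning auditable rather than a silent loss of coverage.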

Identifying and tuning the right tools takes time and effort. Subject matter experts are usually needed to vet and verify identified issues and to tune rulesets. Scanners may not support legacy code, and sometimes re-platforming is the less costly option; conversely, scanners can lag behind the latest available frameworks and languages. In both cases, code quality tools and custom test scripts can fill the gaps in security capability, especially for teams doing continuous integration/continuous deployment.


Once the toolchain is operational, its output must be consumed in some manner. For instance, output from security scanners embedded in test and staging environments will typically feed into defect tracking systems, while issues caught earlier in the cycle may not. Either way, consider recording the stage at which each vulnerability was identified, and by which tool. We will revisit this question when we discuss metrics.
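Recording stage and tool alongside each finding can be as simple as two extra fields on the vulnerability record. The schema below is an illustrative assumption, not a standard; the field names and sample data are invented for the example.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative finding record; field names are assumptions, not a standard schema.
@dataclass
class Finding:
    title: str
    severity: str
    tool: str      # which scanner reported it
    stage: str     # SDLC stage where it surfaced: IDE, build, staging, pen-test...
    found_on: date

findings = [
    Finding("Reflected XSS in /search", "high", "dast-scanner", "staging", date(2017, 3, 1)),
    Finding("Hardcoded credential", "medium", "sast-scanner", "build", date(2017, 3, 2)),
]

# Tallying findings per stage is the raw input for shift-left metrics.
by_stage = {}
for f in findings:
    by_stage[f.stage] = by_stage.get(f.stage, 0) + 1
print(by_stage)  # {'staging': 1, 'build': 1}
```

With those two fields captured consistently, later metrics (such as what fraction of issues are caught before staging) fall out of a simple aggregation.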

Based on security thresholds defined in the SDLC security standards, security gates should be implemented to break the build/release process when security tools identify issues at or above a given severity. Additionally, because security testing may not be wired in series with the release workflow, define a process to address issues caught by parallel scans and out-of-band penetration tests.
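The gate logic itself is small: rank severities, compare each finding against the configured threshold, and fail the build if anything meets or exceeds it. A minimal sketch, assuming a four-level severity scale; in a real CI pipeline the script would exit nonzero to break the build.

```python
SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def gate(findings, threshold="high"):
    """Return (passed, blocking_findings); passed is False if any finding
    meets or exceeds the threshold severity."""
    limit = SEVERITY_RANK[threshold]
    blocking = [f for f in findings if SEVERITY_RANK[f["severity"]] >= limit]
    return len(blocking) == 0, blocking

ok, blockers = gate(
    [{"id": "XSS-12", "severity": "medium"}, {"id": "SQLI-3", "severity": "critical"}],
    threshold="high",
)
print(ok)        # False
print(blockers)  # [{'id': 'SQLI-3', 'severity': 'critical'}]
```

Keeping the threshold a parameter lets different pipelines enforce different standards, for example blocking on “medium” for internet-facing applications only.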

The end goal is to build a reliable, trustworthy toolchain of quality automated tools that gives sufficiently wide coverage across the application portfolio. To fill the remaining gaps, manual testing and other controls will be required.

In the next post I will cover secure SDLC lesson #3, knowledge management. 


By: Shawn Asmus

Practice Manager, Application Security, CISSP, CCSP, OSCP


