Practice Manager, Application Security, CISSP, CCSP, OSCP
Shawn Asmus is a practice manager with Optiv’s application security team. In this role he specializes in strategic and advanced AppSec program services and lends technical expertise where needed. Shawn has presented at a number of national, regional and local security seminars and conferences.
Secure SDLC Lessons Learned: #2 Assessment Toolchain
In this blog series, I am discussing five secure software development lifecycle (SDLC) “lessons learned” from Optiv’s application security team’s conversations with our clients in 2016. Please read the previous posts in this series for background.
Secure SDLC Lesson 2: Assessment Toolchain
Most organizations would agree that maintaining a fast, predictable flow of planned work (e.g., projects, scheduled changes) that achieves business goals while minimizing the impact of unplanned work (e.g., bug fixes, outages) is the ultimate IT goal. Security assessment activities should be part of planned work, and to accomplish that, the right tools must be selected. Beyond selection, the tools must be properly configured and integrated into the secure SDLC program to be truly effective.
Multiple studies have shown that identifying and remediating security flaws as early as possible in the development lifecycle is the least costly option. This is often referred to as “shifting security left.” Enabling developers to run static code analysis tools from their IDEs to identify possible security weaknesses is one common way to achieve the shift. The integration of dynamic, runtime and interactive assessment tools into various pre-production stages of the SDLC is another common approach.
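To make the "shift left" idea concrete, here is a minimal, toy sketch of the kind of check an IDE plugin or pre-commit hook might run against changed source. The rule names and patterns are illustrative only; real SAST engines rely on data-flow and taint analysis rather than regular expressions.

```python
import re

# Hypothetical rule set; real tools ship hundreds of tuned checks.
RULES = {
    "python.eval": re.compile(r"\beval\s*\("),
    "sql.string-build": re.compile(r"execute\s*\(\s*[\"'].*%s"),
}

def scan_source(source: str) -> list:
    """Return (line_number, rule_id) pairs for lines matching a rule."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for rule_id, pattern in RULES.items():
            if pattern.search(line):
                findings.append((lineno, rule_id))
    return findings
```

Running a check like this at commit time surfaces the weakness minutes after it is written, rather than weeks later in a staging scan.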
That said, not all tools are created equal. SAST, DAST and IAST tools support a wide variety of server-side and client-side languages and frameworks. Some include features like incremental code scanning and tunable rules. Others offer integration with build processes and defect tracking systems. Separate must-haves from nice-to-haves, and where possible avoid vendor lock-in.
Also understand the capabilities and limitations of the tools in question. There will be gaps; get a feel for where they are and how to address them. Consider conducting tool bake-offs against a representative subset of your application code to determine each tool's suitability for your environment.
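A bake-off is easier to judge with a simple scorecard. The sketch below, with hypothetical finding identifiers, compares each tool's reported findings against a set of known flaws seeded in the benchmark code, yielding precision (how much of the output is real) and recall (how many real flaws were caught).

```python
def score_tool(reported: set, known_flaws: set) -> dict:
    """Score one scanner's results against a labeled benchmark."""
    true_pos = reported & known_flaws
    precision = len(true_pos) / len(reported) if reported else 0.0
    recall = len(true_pos) / len(known_flaws) if known_flaws else 0.0
    return {"precision": precision, "recall": recall}
```

A tool with high recall but low precision may still be worth keeping if its output can be tuned; one that misses whole vulnerability classes cannot be tuned into usefulness.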
Once tool selection is complete and capabilities/limitations are identified, the next step is tool configuration. This step is essential for minimizing false positives while maximizing code coverage. Often a range of tools with overlapping capabilities is needed to achieve complete, or at least manageable, coverage and to build confidence that each tool is functioning as expected.
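Much of that configuration work amounts to suppression and scoping rules. Most commercial scanners support this natively in their own config formats; the sketch below, with hypothetical rule and path names, shows the underlying idea of filtering known-benign findings (here, flagged secrets in test fixtures) so developers only see actionable results.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    rule_id: str
    path: str
    severity: str

# Hypothetical suppression entries, analogous to a scanner's tuning file.
SUPPRESSIONS = [
    {"rule_id": "hardcoded-secret", "path_prefix": "tests/"},
]

def apply_suppressions(findings):
    """Drop findings matched by a suppression entry; keep the rest."""
    kept = []
    for f in findings:
        suppressed = any(
            f.rule_id == s["rule_id"] and f.path.startswith(s["path_prefix"])
            for s in SUPPRESSIONS
        )
        if not suppressed:
            kept.append(f)
    return kept
```

Suppressions should be version-controlled and reviewed like code, so tuning decisions remain auditable.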
Identifying and tuning the right tools takes time and effort. Subject matter experts are usually needed to vet and verify identified issues and to tune rulesets. Scanners may not support legacy code, and sometimes re-platforming is the less costly option. On the flip side, scanners can lag behind the latest frameworks and languages. In both cases, code quality tools and custom test scripts can fill the gap in security capability, especially for teams doing continuous integration/continuous deployment.
Once the toolchain is operational, its output must be consumed in some manner. For instance, output from security scanners embedded in test and staging environments will typically feed into defect tracking systems, while security issues caught earlier in the cycle may not. Either way, record the source/stage at which each vulnerability was identified, and by what tool. We will revisit this question when we talk about metrics.
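Recording that provenance can be as simple as tagging each finding on its way into the tracker. The sketch below uses hypothetical field and label names to show a finding being enriched with its stage and tool of origin before a ticket is filed, which is exactly the data later metrics will depend on.

```python
def to_defect(finding: dict, stage: str, tool: str) -> dict:
    """Convert a raw finding into a tracker ticket, tagged with provenance."""
    return {
        "title": f"[{finding['severity'].upper()}] {finding['rule_id']}",
        "location": f"{finding['path']}:{finding['line']}",
        "labels": ["security", f"stage:{stage}", f"tool:{tool}"],
    }
```

With stage and tool captured as labels, questions like "what share of SQL injection findings escape past commit-time scanning?" become a simple tracker query.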
Based on the security thresholds defined in the SDLC security standards, implement security gates that break the build/release process when security tools identify issues at or above a given severity. Additionally, because security testing may not be wired in series with the release workflow, define a process for addressing issues caught by parallel scans and out-of-band penetration tests.
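The gate logic itself is small. This is a minimal sketch, with an assumed severity ordering and finding shape, of the pass/fail decision a CI step would make before calling something like `sys.exit(1)` to break the build.

```python
# Assumed severity ranking; align this with your SDLC security standard.
SEVERITY_ORDER = {"info": 0, "low": 1, "medium": 2, "high": 3, "critical": 4}

def gate_passes(findings, threshold: str = "high") -> bool:
    """Return False (break the build) if any finding meets the threshold."""
    limit = SEVERITY_ORDER[threshold]
    return all(SEVERITY_ORDER[f["severity"]] < limit for f in findings)
```

The threshold should come from the SDLC standard rather than being hard-coded per pipeline, so teams cannot quietly loosen the gate.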
The end goal is to build a reliable and trustworthy toolchain of quality automated tools that gives sufficiently wide coverage across the application portfolio. To fill the remaining gaps, manual testing and other controls will be required.
In the next post I will cover secure SDLC lesson #3, knowledge management.