Data Loss Prevention - The Process and Technology Overlap

By K.C. Yerrid

In Part 1 and Part 2 of this blog series, I discussed how placing an emphasis on training and documentation can help an organization realize the value of its Data Loss Prevention (DLP) investments. In this installment, I will focus on the overlap between process and technology.

Referring to the diagram below, the overlap between technology and process is the concept of automation. Unfortunately, many organizations fail to transform their DLP infrastructure from an autonomous system into an integrated part of a security intelligence platform. Most information security folks understand the power of automation to make their job duties easier; however, to effectively leverage the DLP investment, we need to look beyond DLP itself and recognize its broader applications within a security intelligence capability.

In many organizations, I see that the DLP platform is an autonomous system that exists within its own administrative boundaries. An analyst or administrator must log into the DLP application directly in order to mine information from the system.

The automation that does occur is normally not cutting-edge. Perhaps the DLP system will categorize the incident by line of business, severity or some other rational criterion. If the DLP system is not doing at least this much, the organization is really missing the mark on its investment. However, for the sake of this discussion, let's assume that some basic decision-making occurs when an incident is detected.

Ideally, the categorization is based on some pre-established threshold that optimizes the available time of the staff, while simultaneously minimizing the risk to the organization. As in the case of business continuity planning efforts, it is important for the business to determine an acceptable level of loss for an incident.

The same process should apply to DLP incidents. If we look at the trends, companies that make the front page are losing thousands of records per incident. This is not to suggest that a single record (or 10 records, for that matter) is unimportant. All I am suggesting is that there comes a point where the Law of Diminishing Returns comes into play, and it may not be cost-effective to launch an investigation for every single incident that comes across the DLP solution.

If you are struggling to determine the magic number, a good starting point is to compute some descriptive statistics on your existing incidents to determine the mean, median and mode of matches per incident.
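As a minimal sketch, assuming the incidents have been exported from the DLP console to a CSV file with a match count per incident (the file name and the "match_count" column are hypothetical, not any particular vendor's export format), the calculation might look like this:

```python
import csv
import statistics

def match_count_stats(csv_path="dlp_incidents.csv"):
    """Return the mean, median and mode of matches per incident."""
    # Read the hypothetical export; one row per incident.
    with open(csv_path, newline="") as f:
        matches = [int(row["match_count"]) for row in csv.DictReader(f)]
    return {
        "mean": statistics.mean(matches),
        "median": statistics.median(matches),
        "mode": statistics.mode(matches),
    }

if __name__ == "__main__":
    print(match_count_stats())
```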

Using the 80/20 rule, consider auto-closing incidents with fewer matches than the derived "magic number." Those are the incidents that are not financially worth pursuing as a course of initial triage. An alternative approach is to multiply the cost of a data breach - $188 per record in 2012 - by the number of matches triggered in the DLP console (Ponemon Institute, 2013). Whatever method you choose, stick with it.
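To make the triage rule concrete, here is a minimal sketch of both approaches. The magic number and the cost threshold shown are purely illustrative; they would need to be derived from your own statistics and your own acceptable-loss figure.

```python
COST_PER_RECORD = 188   # USD per record, 2012 figure (Ponemon Institute, 2013)
MAGIC_NUMBER = 25       # hypothetical match-count threshold from your own data
COST_THRESHOLD = 5000   # hypothetical acceptable-loss figure in USD

def triage(incident):
    """Return 'auto-close' or 'investigate' for a DLP incident dict."""
    matches = incident["match_count"]
    estimated_cost = matches * COST_PER_RECORD
    # Either rule works on its own; pick one and stick with it.
    if matches < MAGIC_NUMBER or estimated_cost < COST_THRESHOLD:
        return "auto-close"
    return "investigate"

print(triage({"id": "INC-1042", "match_count": 3}))    # -> auto-close
print(triage({"id": "INC-1043", "match_count": 500}))  # -> investigate
```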

My point here is to debunk the expectation that your DLP investment can, and will, stop every incident it triggers. Trust me: the staff will thank you if they do not have to chase down an alert every time someone in sales purchases a book on Amazon.com using a personal credit card.

Clearly, each organization is different, and today's "magic number" will need to be adjusted periodically. However, by setting an upper threshold on the number of incidents that require manual intervention, one can operate with fewer staff, and significant incidents are much less likely to fall through the cracks.

Once internal automation is in place, the organization should also consider external automation opportunities.

Is the organization sending incident data to a SIEM? Would it be valuable to be able to correlate the incident with other events? For instance, only the SIEM would be in a position to tell when an account password was changed and correlate that information with exfiltration activity under the compromised account. It is the difference between spotting a malicious insider and spotting a malicious outsider.
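As a rough illustration of that correlation, the sketch below pairs password-change events with DLP exfiltration incidents on the same account within a time window. The event fields and the 24-hour window are assumptions for illustration, not a prescription for any particular SIEM's rule syntax.

```python
from datetime import datetime, timedelta

WINDOW = timedelta(hours=24)  # hypothetical correlation window

def correlate(password_changes, dlp_incidents):
    """Yield (change, incident) pairs on the same account within WINDOW."""
    for incident in dlp_incidents:
        for change in password_changes:
            same_account = change["account"] == incident["account"]
            delta = incident["time"] - change["time"]
            if same_account and timedelta(0) <= delta <= WINDOW:
                yield change, incident

# Toy events; in practice these would come from directory logs and the DLP feed.
password_changes = [{"account": "jdoe", "time": datetime(2013, 6, 1, 9, 0)}]
dlp_incidents = [{"account": "jdoe", "time": datetime(2013, 6, 1, 14, 30),
                  "match_count": 1200}]

for change, incident in correlate(password_changes, dlp_incidents):
    print("Possible exfiltration under a recently changed account:",
          incident["account"])
```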

Take it a step further. When it comes to insider threat detection, the DLP solution is only one control in a defense-in-depth strategy. Statistics bear out that a malicious insider will exhibit certain behavioral patterns before acting out. Therefore, if an organization takes the time to integrate performance reviews, personnel actions or other essential Human Resources functions with both SIEM and DLP, all of the components are in place to detect and monitor insider threat activities. 

Of course, this can easily be extended to physical security events as well, whereby anomalies such as badge swipes during off-hours, large data transfers near negative HR events and unusual printer usage can be triangulated.
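A toy example of that triangulation might score a user across several signals and flag the combination. The signal names, weights and alert threshold below are entirely hypothetical; the point is simply that any one signal is weak, while several together warrant a look.

```python
# Hypothetical signals and weights; tune these against your own environment.
SIGNAL_WEIGHTS = {
    "off_hours_badge_swipe": 1,
    "negative_hr_event": 2,
    "large_data_transfer": 3,
    "heavy_printer_usage": 1,
}
ALERT_THRESHOLD = 4  # illustrative

def insider_risk_score(observed_signals):
    """Sum the weights of the signals observed for a single user."""
    return sum(SIGNAL_WEIGHTS.get(signal, 0) for signal in observed_signals)

signals = ["off_hours_badge_swipe", "negative_hr_event", "large_data_transfer"]
score = insider_risk_score(signals)
if score >= ALERT_THRESHOLD:
    print(f"Insider-threat review recommended (score={score})")
```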

At this point, you may be wondering where I am going with all of these recommendations. Some may suggest that the recommendations are great in theory but not very realistic in practice because of the associated resource costs. In my experience, organizations sometimes fall into a pattern of complacency where they are content with throwing people at the problem, creating an unsustainable cycle of pain. With automation, we seek to break out of that frame of thinking by improving the signal-to-noise ratio of the DLP program.

Ultimately, the end goal is a solid defense-in-depth strategy that uses myriad countermeasures to slow an attack and minimize the damage the organization suffers as a result of a data loss incident. As my colleague Dr. Eric Cole suggests, "Prevention is ideal, but detection is a must" (Cole, 2010). Automation extends Data Loss Prevention into Data Loss Prevention and Detection.

As we look forward to the final installment of this blog series, consider the value that can be gained by integrating your DLP investments with the other components of your security fabric. What other opportunities exist to provide greater visibility into security operations? Got an idea that I have not covered? Give me your feedback. Let's build a better mousetrap together.


Works Cited

Cole, E. (2010). Keynote address. Charlotte, NC.

Ponemon Institute, LLC. (2013). 2013 Cost of Data Breach Study: Global Analysis. Traverse City: Ponemon Institute.