The Future of Forensics

January 06, 2015

I have wanted to write this article for a couple of years now. I have been talking about the problems and limitations of enterprise forensics for about 10 years, and those problems become more relevant with each passing day.

The more I look at the “big data” problem, including the proliferation of terabyte and larger hard drives at the endpoint, massive centralized data repositories, and the logistics of collecting and processing all of that data centrally, the more evident it becomes that the future of computer forensics must be based on four primary principles:

  • Deduplication of data prior to searching, collection or other invasive functions (a minimal sketch follows this list)
  • Distributed processing at each endpoint with smart agents that are capable of decoding, decrypting, interpreting and searching all data types
  • Cloud-based agent instruction and correlation of processed data indexes
  • Centralized viewing of data and results from all distributed data sources
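
To make the first principle concrete, here is a minimal sketch, assuming hash-based deduplication before collection. The `known_hashes` set and all function names are my own illustrative assumptions, not any product's API; a real agent would consult a central hash index or a known-file list (such as NSRL) rather than an in-memory set.

```python
# Minimal sketch (assumptions labeled): hash each file and collect only
# content the central index has not already seen. "known_hashes" stands in
# for that index (e.g., previously collected hashes or an NSRL-style list).
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash in chunks so multi-gigabyte evidence files don't exhaust RAM."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def files_worth_collecting(root: Path, known_hashes: set[str]):
    """Yield (path, hash) only for content not already known centrally."""
    for path in root.rglob("*"):
        if not path.is_file():
            continue
        try:
            file_hash = sha256_of(path)
        except OSError:
            continue  # unreadable file: skip it rather than abort the sweep
        if file_hash not in known_hashes:
            known_hashes.add(file_hash)
            yield path, file_hash
```

The design point is simply that hashing happens at the endpoint, so duplicate content never crosses the WAN in the first place.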

If you look at the ever-expanding scope of data being created and disseminated, it is clear that traditional, centralized computer forensic methods do not scale. I have been doing enterprise forensics since 2003, when slow WAN speeds made even 40GB hard drives too much for the technology to handle, and the user experience wasn't very good.

Fast forward to today. It is true that WAN speeds have improved (which theoretically should help), but the size of hard drives and data repositories has grown even faster. In the end, the user experience of trying to view data over a WAN isn’t any better now than it was then. In fact, it is probably worse. You just can’t reasonably collect, process and view the quantity of data that exists in modern environments using the current centralized approach employed by most enterprise forensic tools. It times out and fails.

To combat this, there are workarounds (really more hacks than solutions). You can deploy a forensic examiner (physical or virtual) to each physical location so the investigation runs at LAN speeds, or you can use some kind of remote, phone-home collection that brings all of the data back to a central location where it can be searched. Neither approach is reasonable for every situation, and both are almost always clunky because they require multiple attempts to collect data. They seldom produce a comprehensive data set, since they generally preclude obtaining data from small offices and remote users.

Considering the increased capabilities of modern endpoints, it is now reasonable to push more of the workload down to the local machine and send only the metadata associated with the search results back to a centralized location. That metadata could then be indexed, queried and viewed almost instantaneously from a central console in the cloud.
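
As a rough illustration of what "metadata only" might look like, here is a hedged sketch of an endpoint-side search that returns hit records rather than file content. Every name and record field here is hypothetical; the point is that the upstream payload is a small record per hit, never the files themselves.

```python
# Hypothetical sketch: search locally, ship only metadata about the hits.
import time
from pathlib import Path

def search_and_describe(root: Path, term: bytes) -> list[dict]:
    """Return one metadata record per file that contains `term`."""
    records = []
    for path in root.rglob("*"):
        if not path.is_file():
            continue
        try:
            data = path.read_bytes()  # fine for a sketch; a real agent would stream
        except OSError:
            continue
        offset = data.find(term)
        if offset == -1:
            continue
        stat = path.stat()
        records.append({
            "path": str(path),
            "size": stat.st_size,
            "modified": time.strftime("%Y-%m-%dT%H:%M:%SZ",
                                      time.gmtime(stat.st_mtime)),
            "first_hit_offset": offset,
        })
    return records  # serialized (e.g., as JSON) and sent to the central index
```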

At this point in the conversation, the logical question is: if native data that exists only on the endpoint needs to be viewed, how do you get to it? The answer: it could be requested by the interface in the cloud and provided by the endpoints. The endpoints would need to check in at rapid intervals for instructions while their data was being reviewed, and they could serve up the requested data for viewing in the central console. Most live preview functionality works similarly to this now, except that the console reaches out to the endpoint instead of the other way around. Either direction works if the endpoints check in frequently enough.
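
A minimal sketch of that check-in pattern follows, assuming a hypothetical console API; the URL, endpoint ID, and the shape of the "tasks" response are all invented for illustration, and only the polling pattern itself is the point.

```python
# Hypothetical check-in loop: poll the console for instructions, serve
# requested files back. All identifiers below are placeholders.
import time
import requests  # the widely used third-party HTTP library

CONSOLE = "https://console.example.com/api"  # placeholder URL
ENDPOINT_ID = "endpoint-1234"                # placeholder identity

def check_in_loop(interval_seconds: int = 5) -> None:
    while True:
        resp = requests.get(f"{CONSOLE}/instructions/{ENDPOINT_ID}", timeout=10)
        for task in resp.json().get("tasks", []):
            if task.get("type") == "fetch_file":
                # Stream the requested file back to the console for preview.
                with open(task["path"], "rb") as f:
                    requests.post(f"{CONSOLE}/results/{task['id']}",
                                  data=f, timeout=60)
        time.sleep(interval_seconds)  # a short interval keeps previews snappy
```

Because the endpoint initiates every connection, this pattern also sidesteps the firewall and NAT headaches of having a central console reach into remote networks.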

Considering my background in forensic software design, there is a temptation to turn this post into a series of feature requests and architecture diagrams, but I probably shouldn’t go any deeper than this. I am happy to work with any forensic software company and whiteboard out an architecture diagram and set of features for a system such as this. I would love to see someone make it and to be a part of it. It would make my life in enterprise forensics a lot easier.
