EDR and Blending In: How Attackers Avoid Getting Caught
PART 2 OF A SERIES.
In the previous article, we discussed techniques attackers use to bypass endpoint detection and response (EDR) tools. However, circumventing EDR’s memory hooks isn’t the only hurdle attackers must clear to avoid detection. EDRs still forward a tremendous amount of information, and this information, along with other network-based controls, can generate detection events. For example, an attacker can remove the hooks an EDR loaded into their malicious process, but every event before the binary’s execution and the subsequent unhooking will still be logged. The binary “aw2r1941g.exE” running from a user’s TEMP folder may not trigger any EDR-based alerts, but when reviewed by an analyst it stands out as something to investigate. Even with its hooks removed, an EDR can still provide security teams this telemetry, because unhooking only takes effect once the application executes.
At this stage attackers begin to focus on blending in, taking advantage of the noise and activity commonly recorded by sensors. Attackers are rarely detected on benign events; it’s the events that stand out as suspicious that get them caught. Attackers understand that their loader (i.e., the mechanism that delivers shellcode into an endpoint’s memory) may not be the reason an alert was generated. It could simply be how they downloaded it, how they executed it or even the process name itself standing out.
A great illustration is bitsadmin.exe, a well-known living-off-the-land binary (LOLBIN). LOLBINs are default Windows applications, found on all Windows systems, that attackers can abuse to download and execute malicious payloads. Bitsadmin.exe is not inherently malicious and is often used by system administrators for legitimate reasons. So how do defenders catch its abuse? It’s often detected not by EDR protections, but rather by system event-logging mechanisms. By reviewing the event, detection can still occur based on questions such as:
- Is this normal use for that user?
- Is the downloaded file coming from a trusted known source?
- What type of file is being downloaded?
Figure 1: Detection Alert – Catching Bitsadmin Downloading a File Remotely
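As an illustration of how those triage questions might be automated, here’s a minimal Go sketch. The user allowlist, trusted hosts and risky extensions are hypothetical placeholder values; a real team would replace them with its own baseline:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// suspiciousBitsadminEvent applies the three triage questions to a single
// bitsadmin download event. All baseline values below are illustrative.
func suspiciousBitsadminEvent(user, sourceURL, destPath string) bool {
	adminUsers := map[string]bool{"svc-patching": true, "it-admin": true}
	trustedHosts := []string{"download.windowsupdate.com", "internal.corp.example"}
	riskyExts := map[string]bool{".exe": true, ".dll": true, ".ps1": true}

	// Is this normal use for that user?
	if !adminUsers[user] {
		return true
	}
	// Is the downloaded file coming from a trusted, known source?
	trusted := false
	for _, h := range trustedHosts {
		if strings.Contains(sourceURL, h) {
			trusted = true
		}
	}
	if !trusted {
		return true
	}
	// What type of file is being downloaded?
	return riskyExts[strings.ToLower(filepath.Ext(destPath))]
}

func main() {
	fmt.Println(suspiciousBitsadminEvent("jsmith",
		"http://203.0.113.9/payload",
		`C:\Users\jsmith\AppData\Local\Temp\aw2r1941g.exe`))
}
```

Even a crude filter like this shows why the questions work: any one abnormal answer is enough to surface the event for review.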
The triage process outlined here is extremely hard to scale in medium or large organizations. As a result, companies rely on baselining and user-behavior analytics to spot abnormalities and drill into suspicious activity, extending beyond user behavior into what a process looks like, to help filter sensor data into something understandable and manageable. Even in medium-sized organizations, the sheer amount of telemetry gathered from security products can be overwhelming.
This is where the term “indicators of compromise” (IoC) really comes into play. Indicators of compromise are artifacts that identify the presence of malicious activity on a system or network. They act as breadcrumbs marking an attacker’s actions. IoC examples include a process hollowing another process, numerous failed authentications from a single source or an email enticing users to open an attached file. These breadcrumbs help defenders piece together and trace back an attacker’s activity, since most of the techniques and standard operating procedures attackers use to achieve their goals are well documented. The indicators defenders collect can reveal what an attacker has done or was trying to do.
Figure 2: Behavior Indicator Map 
Indicators of compromise have helped change the landscape for defenders; however, attackers have adapted, knowing that if they blend in well enough, the few breadcrumbs they leave stand less chance of being detected (or will only be detected much later). We’ll refer to these as “indicators of abnormality.”
Indicators of abnormality are events or actions that don’t necessarily stand out as intruder behavior, but that have triggered some event (often low severity) that looks odd when reviewed. This behavior isn’t always malicious, but it warrants some form of review or investigation. Indicators of abnormality tend to sit in the “blind spots” of a security team’s defense and monitoring controls and are frequently detected and reviewed too late, through no fault of the defenders but owing to a systemic problem rooted in the fundamental attacker-vs.-defender asymmetry. An attacker only needs to succeed once, whereas a defender must worry about all possible vectors. As a result, blue team attention will always be split, focusing on immediate threats while more anomalous events are handled later. Examples include users logging into the network outside normal business hours, an Excel process spawning right after a user logs in or users egressing data at a higher-than-normal rate over a short period of time.
Knowing this lets an attacker remove any “clear compromise” indicators and focus on blending into “abnormal behavior,” reducing the chance of detection. This stage of attacker tradecraft often relies less on the technical techniques described in the previous blog and more on tactics designed to fool a human analyst by blending into the surroundings.
So, what are some of the techniques attackers can employ?
Below you can see a binary with a verified “Microsoft Corporation” signature. Since it’s signed by a trusted authority like Microsoft, the file will likely draw less scrutiny. A code signing certificate is a digital certificate that gives publishers a way to prove their identity, assuring consumers that the software comes from a recognized and trusted source. Code signing also ensures that the software has not been tampered with since it was signed.
Figure 3: Code Signing Certificate Example
Many tools, such as Google Safe Browsing, Microsoft’s SmartScreen and even antivirus and EDR products, require software to be signed or they’re flagged as untrusted (and in some cases, execution is prevented entirely). If attackers can compromise a code signing certificate, they can sign virtually any malware, increasing the chances that security products will blindly trust it.
Legitimate code signing certificates are next to impossible to compromise, as companies invest heavily in protecting them, and those that are compromised don’t stay usable for long. Certificates can be purchased, but they’re quite expensive and easily flagged. As a result, many security products have begun maintaining their own whitelists of acceptable certificates.
An easier technique is to create a fake code signing certificate. These can be quite effective, as many endpoint security products lack the capability to fully vet and verify every application’s certificate at run time (without holding the application up, which can hurt the business). Attackers know this, so all they need to do is fake enough values to blend in.
What does this require? At a high level, digital signature validation relies upon the following:
- Integrity validation — Does the hash of the file match the signed hash in the signature? If not, the integrity of the file has been compromised and it should not be trusted.
- Certificate chain validation — Was each certificate in the chain properly issued by its parent?
- Certificate validity check — If the digital signature is not timestamped, is each certificate in the chain within its stated validity time frame? If the signature is timestamped, validate the timestamping counter-signature chain instead.
- Revocation check — Are any of the certificates in the chain revoked or explicitly untrusted by an administrator?
- Root CA validation — Is the root certificate in the signer chain a trusted certificate?
Some of these attributes can be spoofed, although it’s difficult. The important ones are root CA validation, certificate chain validation, integrity validation and the certificate validity check. Several of these values can be acquired and set fairly easily with a wide range of open-source tools and TLS/SSL libraries; in fact, the OpenSSL library itself can be used to create certificates.
File attribute writing
Similar to metadata in an Office document, file attribute values can give investigators valuable information when a file is reviewed, both in memory and on disk. If these values stand out, they can help defenders catch attackers. In many cases blank values invite further investigation and suspicion, since legitimate binaries and DLLs typically have populated file attributes.
Figure 4: Malicious File’s Attributes
Because of this, attackers can modify a file’s attributes to make it look like part of the Windows operating system. One of the easiest ways to do this is by embedding resource files in the compilation process: resource files, when compiled along with a payload, modify the attribute portions of the compiled output, allowing an attacker to spoof legitimate Windows values and making it harder to determine whether the payload is genuine.
.syso files are embedded resources that can be included when an application is compiled. They dictate the attributes, and sometimes functionality, of the compiled application, and in this case attackers can use them to spoof a legitimate Windows application’s attributes. Not all programming languages can use resource files natively, so this approach is especially valuable for languages such as Go or C#. Attackers can harvest the attributes of a file from a valid Windows system using the “.VersionInfo” property exposed by PowerShell’s Get-Item cmdlet.
Figure 5: Notepad File’s Attributes Extracted
With a list of attribute values in hand, an attacker can create a resource file with matching attribute names and corresponding values.
Figure 6: Notepad File’s Attributes Extracted
Embedded resources can be any file type. For this article, we’ll focus primarily on Golang. There are many useful packages, but for now we’ll use the goversioninfo package, which handles the creation of .syso files for Windows file properties. With this package and the extracted file attributes of trusted programs found on any system, we can programmatically generate .syso files.
Figure 7: Sample list of File Attributes
This, combined with tools such as OpenSSL and osslsigncode, lets us create a library that not only contains .syso files, but also generates code signing certificates and signs DLLs or binaries to help blend into an environment. This library of techniques integrates fully with the tactics discussed in part 1 of this series, creating a set of tools for unhooking and bypassing EDR products while remaining undetected.
How can you detect these techniques?
Identifying these attacker tactics isn’t simple, because the spoofed values are taken from legitimate sources. Whitelisting applications by hash can work, but it also creates problems: there are thousands of DLLs and binaries a company may need to operate, which can make normal blue team operations cumbersome as patches or new product versions are released. If not handled properly, this can be detrimental to the business.
Instead, proactive hunting for indicators of anomalous behavior can build a detailed map of a network that helps identify these types of techniques. Blue teams can review network connections, full file hashes and on-disk paths, and registry keys for common applications and DLLs. This baseline of “normal” process activity helps analysts identify unusual processes or activities, surfacing the key indicators of these techniques:
- The location from where files are running.
- Attackers often use folders such as AppData, ProgramData, Temp and other temporary storage locations to drop files onto systems. System and installed applications typically live under C:\Windows and C:\Program Files, and both locations require elevated privileges to write to. By baselining applications, blue teams can spot anomalous or malicious applications masquerading as legitimate system binaries while running from unusual locations.
- Processes that call out to the Internet.
- While many applications legitimately call Internet-based resources, these destinations should be thoroughly baselined, since normal traffic to them will have been observed routinely. Filtering known destinations out to look for new, abnormal ones helps blue teams cut through the sheer volume of traffic and focus their attention on reviewing new or unknown sites.
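The location-based indicator above can be sketched as a simple Go filter. The trusted and risky path lists are illustrative placeholders, not a complete baseline:

```go
package main

import (
	"fmt"
	"strings"
)

// flagUnusualPath marks processes running outside the directories where
// system or installed software normally lives. Baseline lists below are
// illustrative; real deployments would derive them from inventory data.
func flagUnusualPath(imagePath string) bool {
	p := strings.ToLower(imagePath)
	for _, trusted := range []string{`c:\windows\`, `c:\program files\`, `c:\program files (x86)\`} {
		if strings.HasPrefix(p, trusted) {
			return false // expected location for system/installed software
		}
	}
	for _, risky := range []string{`\appdata\`, `\programdata\`, `\temp\`} {
		if strings.Contains(p, risky) {
			return true // classic drop location: flag for review
		}
	}
	return true // unknown location: still worth a look
}

func main() {
	fmt.Println(flagUnusualPath(`C:\Users\jsmith\AppData\Local\Temp\aw2r1941g.exe`))
	fmt.Println(flagUnusualPath(`C:\Windows\System32\notepad.exe`))
}
```

A rule this coarse generates false positives on its own; in practice it would be combined with the network-destination baseline above to prioritize what analysts review first.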
These are just two major ways to identify these attacker techniques. EDRs can gather some of this information but should not be relied upon as the sole source of telemetry for identifying intruder behavior.
We hope our EDR series proves useful for you and your organization. We will also be releasing a series of tools highlighting these techniques. Optiv’s code projects will be found on GitHub.
Copyright © 2021 Optiv Security Inc. All rights reserved.