ATT&CK Series: Defense Evasion
May 14, 2019
Access Token Manipulation, Masquerading, BITS Jobs
Many breaches reported today share a common theme: the attackers gained access to systems long before the incident was discovered. Forensic evidence often tells a tale of how the attacker took the “low and slow” approach, using native operating system networking tools and evading detection by working behind the scenes to cover their tracks and eliminate the indicators of compromise that would alert the company’s security team that an intrusion has occurred.
In this post, we will look at several techniques described in ATT&CK’s Defense Evasion Tactic while focusing on ways attackers seek to hide while moving within a network and exfiltrating sensitive information.
Every day, people interact with different tools to complete tasks related to their job. When a person is assigned a task, they assume a level of ownership for that job until it is completed. Ownership is associated with access tokens in Windows environments: objects that describe the identity and privileges of the user account interacting with a particular process running on the system.
Windows users can modify access tokens to make it appear as if the running process belongs to someone else. This effectively changes the security context (or ownership) of the new token. For example, an administrator might log into a system as a standard user but run tools with administrative privileges using built-in operating system commands such as “runas.”
An adversary can take advantage of access tokens to perform malicious actions on behalf of another user and evade detection. Windows ships with built-in Application Programming Interface (API) functions that allow users to copy access tokens from existing processes, a technique often called “token stealing.”
There are several methods that an attacker could use to manipulate access tokens on a Windows system.
An attacker who has compromised the system, through exploitation of remote credentialed access or a similar method, executes commands on the local host to manipulate the primary access token created by the Windows security subsystem, effectively impersonating or duplicating an existing token. This is the most common way attackers gain initial control over a running process on a system, as the duplicate token assumes the security context of the user who was logged onto the machine at the time. The method is most useful when that user account is a local administrator on the affected system. A common practice in businesses today is to reuse the same credentials for local administrator accounts on workstations, which eases the burden on IT staff when troubleshooting software problems or occurs as the result of golden-image host deployment.
Alternatively, an attacker might elect to create a new process, which is then paired with a new access token. This token assumes the security context of an existing user account, with all of the associated rights and privileges. The method is particularly useful when the attacker is looking for a way to establish persistence on a system and the original process that was abused has a short lifespan, such as a Windows service that runs software updates and closes when complete. The attacker would simply review the processes running on the system, choose a more stable process owned by the same logged-in user, such as explorer.exe, and create a new process under that token.
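As a toy illustration (not attack code), the selection logic described above can be sketched as a filter over a process list. Everything here is a labeled assumption: the process names, the synthetic data, and the pick_stable_process helper are hypothetical and touch no real tokens.

```python
# Toy model (synthetic data only) of how an attacker might choose a
# long-lived process whose token to duplicate for persistence.

SHORT_LIVED = {"update_svc.exe"}  # assumed: exits as soon as its work is done

def pick_stable_process(processes, target_user):
    """Return the first long-lived process owned by target_user, if any."""
    for name, owner in processes:
        if owner == target_user and name not in SHORT_LIVED:
            return name
    return None

procs = [
    ("update_svc.exe", "CORP\\jsmith"),          # short-lived: poor persistence
    ("explorer.exe",   "CORP\\jsmith"),          # stable, user-owned shell
    ("svchost.exe",    "NT AUTHORITY\\SYSTEM"),  # different security context
]

print(pick_stable_process(procs, "CORP\\jsmith"))  # explorer.exe
```

The same preference for stable, user-owned processes is why explorer.exe is such a frequent target: it runs for the entire logon session in the context of the interactive user.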
Microsoft has tightly woven the use of access tokens into the Windows security subsystem, and as of this writing, there is no way to disable them. The good news is that for any of the above attack methods to succeed, the attacker must have local administrator-level access. This can be controlled by restricting user accounts to the minimum level of privilege required to do their jobs.
Additionally, there are Group Policy settings in Windows that can be modified to limit permissions and ensure that local users and groups cannot arbitrarily create tokens.
Lastly, IT security staff should monitor command-line activity in Windows systems for suspicious behavior, specifically focusing on the use of tools such as “runas.” If an administrator account is being used at an odd time of day to launch several copies of notepad.exe or another seemingly innocuous tool, this might be an indication of compromise.
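A minimal sketch of that kind of monitoring, assuming command-line events have already been collected as (timestamp, user, command) tuples; the field layout, sample data, and business-hours window are illustrative assumptions, not a product feature:

```python
from datetime import datetime

BUSINESS_HOURS = range(8, 18)  # assumed policy window: 08:00-17:59

def flag_suspicious_runas(events):
    """Flag runas invocations that occur outside business hours."""
    hits = []
    for ts, user, cmdline in events:
        hour = datetime.fromisoformat(ts).hour
        if "runas" in cmdline.lower() and hour not in BUSINESS_HOURS:
            hits.append((ts, user, cmdline))
    return hits

events = [
    ("2019-05-14T10:12:00", "CORP\\admin1", "runas /user:administrator cmd.exe"),
    ("2019-05-14T03:41:00", "CORP\\admin1", "runas /user:administrator notepad.exe"),
]

for hit in flag_suspicious_runas(events):
    print(hit)  # only the 03:41 event is flagged
```

In practice the time window would come from an organizational baseline, and the rule would be one of several correlated signals rather than an alert on its own.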
A masquerading attack uses a fake identity or stolen user account credentials to gain unauthorized access to sensitive information. Systems that do not have a well-protected authorization process could potentially open up an entire network to compromise. This attack can be launched from somewhere within the organization or from the outside while connected to a public network.
One method adversaries use to avoid detection is placing an executable file in a trusted directory with the name of a legitimate program. When the operating system looks for the program, it executes the malicious file instead. The resulting action could range from establishing a remote command-and-control session on the system to exfiltrating data.
Windows systems are vulnerable to malicious files being executed from non-standard directories when the attacker uses a well-known filename such as explorer.exe to launch the attack. Linux systems are similarly vulnerable; however, abuse of benign programs happens after they are executed. For example, files stored in the /bin directory, such as rsyncd, are typically considered trusted.
A common strategy to thwart masquerade attacks is to create security rules that avoid exclusions based on a file name or location. Additional file and folder permissions should be assigned to protect sensitive directories where system files are stored. Requiring signed certificates on Windows binaries can help establish nonrepudiation and prevent unsigned versions from being executed on the system.
A second strategy is for IT administrators to monitor networked systems for files with known names in uncommon locations, and to consider using whitelisting tools for known legitimate files or implementing software restriction policies that prevent files from executing within a certain directory.
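The “known name, uncommon location” check above can be sketched as a simple path comparison. The expected-location map below is a small assumed sample, not a complete inventory of Windows binaries:

```python
import ntpath

# Assumed sample of well-known binaries and where they normally live.
EXPECTED_DIRS = {
    "explorer.exe": r"c:\windows",
    "svchost.exe":  r"c:\windows\system32",
}

def is_masquerade_candidate(image_path):
    """True if a well-known filename is running from an unexpected directory."""
    directory, name = ntpath.split(image_path.lower())
    expected = EXPECTED_DIRS.get(name)
    return expected is not None and directory != expected

print(is_masquerade_candidate(r"C:\Users\Public\explorer.exe"))  # True
print(is_masquerade_candidate(r"C:\Windows\explorer.exe"))       # False
```

A production version would pull the expected paths from a baseline of the golden image rather than a hard-coded dictionary, and would treat a hit as a lead for investigation rather than proof of compromise.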
One of the less commonly known file transfer utilities found in modern versions of Windows is the Windows Background Intelligent Transfer Service (BITS). BITS jobs typically run in the background, using idle bandwidth to transfer data for software updaters and messaging apps without interfering with other processes running on the system. BITS jobs can be managed using PowerShell or the BITSAdmin command-line utility.
As BITS jobs are executed in the background, they provide the perfect cover for an attacker to download and execute malicious code on a networked system. Additionally, BITS jobs do not leave a large footprint on the system as task-related information is self-contained within the BITS application database. Host-based firewalls also permit BITS traffic by default. This provides the perfect environment for covert data exfiltration as well since BITS allows for file uploads in addition to its other features.
Unfortunately, mitigating the abuse of the BITS service is not as simple as disabling BITS functionality on the host system as legitimate applications rely on this service to download software patches and other updates.
A better solution would be to control access to the BITS interface to prevent malware from hitting the system in the first place. IT administrators should review host firewall rules and other network security controls to allow only legitimate BITS traffic. In addition, Windows records BITS activity in its event logs, which tie specific BITSAdmin commands to the user who executed them.
There are multiple ways an attacker can hide while attempting to access a network and move laterally within it. Windows systems often unintentionally add to the trouble, helpfully providing a flood of log data that can take days to parse when correlating malicious activity. In addition, firewalls and other network security controls may allow traffic that an opportunistic attacker can take advantage of. The often-complicated solution to these problems is knowing where to focus efforts to reduce the attack surface available to a potential intruder. Fortunately, mitigation guidance and best practices are available from sources such as Microsoft’s TechNet and the MITRE ATT&CK website. This series will continue covering various ATT&CK techniques and tactics used today, to help provide guidance to readers on both the risks to networks and available mitigation strategies.
Read more in our ATT&CK series. Here's a review of related posts on this important topic: