ATT&CK’s Initial Access Adversarial Tactic
October 05, 2018
By far the best way to prevent a security incident is to prevent the initial access from being obtained in the first place. This is why organizations spend large portions of their security and IT budgets on key security technologies such as firewalls, antivirus and exploitation prevention software, application whitelisting, and vulnerability scanning tools. All of these devices and software work together to harden the infrastructure in an attempt to prevent intrusion.
In this post I will drill into ATT&CK’s Initial Access Adversarial Tactic, hitting on what I consider to be the most prevalent and impactful techniques that attackers use to gain a foothold in an organization. This list does not cover every technique attackers can use, and even the entire ATT&CK matrix may not reflect the entire attack surface. However, these are techniques my peers and I have used repeatedly across multiple penetration tests, impacting many different organizations.
The world has changed significantly from when I first started exploring computer security in the late ’90s and early 2000s. In those days it seemed like only a few organizations had firewalls protecting their environments from would-be attackers. It was common to come across organizations running services that should not have been present on the Internet. It’s important to note that the Internet was developed with the intention of connecting and sharing; as a result, it later became clear that the Internet shared too much!
Firewalls are now the de facto first step in designing a network, meaning organizations present a much smaller attack surface. With this change, attackers are increasingly scrutinizing publicly accessible Internet applications for exploitation, rather than attempting to abuse network protocols. These applications provide an Internet presence for organizations, and more often than not are the means by which end users become consumers of goods and services, e.g., signing into an application to interact with or submit data. In today’s world, more and more software that used to run on the desktop is being developed as web applications for ease of use and access.
As mentioned, exposing applications to the Internet is standard practice for organizations in today’s interconnected world. Because each organization is unique, it is not uncommon for organizations to hire programmers to develop custom software to provide desired services. The problem arises when these organizations fail to implement a software development life cycle (SDLC) or hire developers who have not been trained in secure coding. Even when an organization follows best practices such as an SDLC and developer training, custom application code is often focused on usability, not security, providing attack vectors for malicious actors.
Additionally, off-the-shelf software that already has the capabilities an organization requires may be widely deployed across many organizations, presenting attackers with a more valuable target. This gives attackers incentive to review source code, reverse engineer, or fuzz applications in an attempt to identify a vulnerability that could afford them access into multiple organizations.
If organizations wish to deploy publicly accessible applications to the Internet, steps should be taken to ensure the security of not only the data stored on the system, but also the users who visit the application and other systems the application may be able to communicate with.
Application Isolation is the process of limiting the access privileges the application has to the base operating system where it is hosted. Running the application as a non-administrative user can reduce the impact if the application is exploited through a vulnerability.
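As a minimal illustration of this principle, a service that must start as root (say, to bind a privileged port) can permanently drop to an unprivileged account before handling any untrusted input. Here is a sketch in Python using only the standard library; the uid/gid values in a real deployment would come from a dedicated service account:

```python
import os

def drop_privileges(uid: int, gid: int) -> None:
    """Permanently drop root privileges to the given non-root uid/gid.

    Call this after the application has bound any privileged ports,
    but before it processes any untrusted input.
    """
    if uid == 0 or gid == 0:
        raise ValueError("refusing to 'drop' privileges to root")
    os.setgroups([])   # clear supplementary groups inherited from root
    os.setgid(gid)     # set the group first, while we still have permission
    os.setuid(uid)     # then the user; after this the drop cannot be undone

def running_as_root() -> bool:
    """A simple deployment check: the service should not run as uid 0."""
    return os.geteuid() == 0
```

The ordering matters: the group must be changed before the user, because once `setuid` succeeds the process no longer has the privilege to call `setgid`.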
Network Segmentation restricts the communication of the application with other systems in the environment. Generally, applications are hosted within a Demilitarized Zone (DMZ); a DMZ is a segmented network that has controlled access to the internal network and generally allows limited inbound traffic from the Internet. By segmenting machines from each other, if the application is exploited the attacker may not have a means to pivot and attack other systems. It is important to also consider other systems in the DMZ as a potential attack surface.
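The segmentation policy itself is enforced at the firewall, not in application code, but as an illustration of what "limited communication" means in practice, here is a sketch of an egress allow-list for a hypothetical DMZ web host. The subnets are invented for the example:

```python
import ipaddress

# Hypothetical policy: this DMZ web host may only reach these internal services.
ALLOWED_EGRESS = [
    ipaddress.ip_network("10.0.5.0/24"),   # database tier
    ipaddress.ip_network("10.0.9.10/32"),  # log collector
]

def egress_allowed(dest_ip: str) -> bool:
    """Return True if a connection from the DMZ host to dest_ip is permitted."""
    addr = ipaddress.ip_address(dest_ip)
    return any(addr in net for net in ALLOWED_EGRESS)
```

If an attacker compromises the web host, any attempt to pivot to a system outside those two destinations is simply dropped, which is the whole point of the segment.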
Secure Development Life Cycle is the process of testing and releasing code in a controlled manner to ensure that security vulnerabilities are identified and remediated, and that future releases of the application follow the same vetting process so that new vulnerabilities are not introduced. Generally, code should be retested by source code analyzers, as well as a penetration tester, after each major update.
Web Application Vulnerability Scanning allows organizations to assess the current security posture of the application, potentially identifying security vulnerabilities before a malicious actor does. Scanning the application regularly can identify a newly introduced vulnerability much more quickly, giving security staff the ability to remediate.
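A full web application scan requires a dedicated tool, but even a lightweight self-check can catch regressions between scans. As one illustrative sketch (the header list here is a common baseline of my choosing, not an exhaustive standard), this checks an HTTP response for missing security headers:

```python
def missing_security_headers(headers: dict) -> list:
    """Return the common security headers absent from an HTTP response.

    headers: a mapping of response header names to values.
    """
    expected = [
        "Content-Security-Policy",
        "Strict-Transport-Security",
        "X-Content-Type-Options",
        "X-Frame-Options",
    ]
    present = {name.lower() for name in headers}
    return [h for h in expected if h.lower() not in present]
```

Running a check like this on every deploy will not find injection flaws, but it will flag the kind of configuration drift that regular scanning is meant to catch.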
Communications have changed dramatically over the last 30 years. Previously, organizational phone lines, voice mail systems, and fax machines were the primary methods of communication. That was replaced with the global adoption of the Internet. Now, instead of scanning and faxing a message page by page, organizations can share entire libraries of information with each other with the simple click of a button. It’s easy to say that email has become the most convenient method of communication, both within and between organizations.
Email brings a whole gamut of existing problems with it, and actually introduces more vulnerabilities than older technologies. While email users still suffer from the same old social engineering techniques used over the phone, they now provide a direct conduit for would-be attackers to bypass organizational firewalls. Today, organizations simply cannot exist without email, and its widespread deployment makes it a high-value access vector for attackers. Attackers may attempt to get users to open a malicious link or an attached file in order to gain access to technology or sensitive data such as user credentials. With this level of access, attackers can move to other techniques, such as T1078 – Valid Accounts, to compromise organizational data.
It should be noted that even the best-deployed technical defenses against email phishing may not fully protect an organization. Regular Security Awareness Training is by far one of the most important things organizations can do to prevent an attacker from successfully spear phishing a user. Additionally, organizations should implement a variety of controls, including Spam Filtering, Egress and Category Filtering, Antivirus and Anti-Exploitation Software, and System Hardening, to reduce the risk in the event that a user does fall for a phishing attack.
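Email authentication is one concrete control that supports spam filtering; DMARC is my example here, not something called out in the list above. A published DMARC policy is just a DNS TXT record of semicolon-separated tags, and a sketch of parsing one into its policy fields looks like this:

```python
def parse_dmarc(record: str) -> dict:
    """Parse a DMARC TXT record string into its tag/value pairs.

    Example record: "v=DMARC1; p=quarantine; rua=mailto:dmarc@example.com"
    """
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags
```

The `p` tag is the interesting one for defenders: `none` only monitors, while `quarantine` or `reject` tells receiving mail servers what to do with messages that fail SPF/DKIM alignment, cutting down on spoofed mail that impersonates your domain.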
With the growing connectivity of the world, it is not uncommon for an organization to have vendors that access its environment through a Virtual Private Network (VPN) or private network circuit. In this case, an organization’s attack surface boundary doesn’t just include its own systems, but all of the systems of vendors or third-party partners who are granted remote access to its network.
In some cases, organizations may have deployed all the correct attack surface hardening controls, making it very difficult to gain a foothold in the environment. However, through a security breach at a connected vendor, an attacker may be able to leverage access to the vendor’s environment or data and breach another company’s assets, bypassing the security controls put in place. One prime current example of this is the ongoing series of attacks on the US power grid. Attackers were unable to directly breach electric utility companies, so they instead attacked vendors who provide computer services to those utilities, resulting in unauthorized access to critical systems.
Connections to and from outside vendors should be reviewed to ensure that they are as secure as the Internet perimeter security controls implemented at the organization. Network Segmentation of vendor-controlled systems and Access Control on vendor network tunnels should be implemented. Organizations can also monitor normal vendor user activity, such as which systems are typically accessed, and implement User Behavior Anomaly (UBA) or Network Level Anomaly (NLA) detections. Additionally, validating that vendors follow the same compliance requirements as your organization can decrease the risk to an organization’s environment.
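A UBA detection of the kind described above can be reduced to a very simple idea: build a baseline of which systems each vendor account normally touches, then flag anything outside it. Here is a toy sketch of that idea (the account and system names are invented for the example):

```python
from collections import defaultdict

def build_baseline(events):
    """Build a per-account baseline from historical vendor activity.

    events: iterable of (account, system) pairs observed during a learning period.
    """
    baseline = defaultdict(set)
    for account, system in events:
        baseline[account].add(system)
    return baseline

def flag_anomalies(baseline, events):
    """Return (account, system) pairs where an account touched a system
    outside its historical baseline."""
    return [(a, s) for a, s in events if s not in baseline.get(a, set())]
```

Real UBA products weigh many more signals (time of day, volume, geography), but the core of the alert is exactly this set difference: a vendor account doing something it has never done before.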
One of the most difficult types of attacks to detect are those that blend in with normal user behavior. Often these attacks leverage valid credentials. There are a variety of ways credentials can be compromised in the first place, including password reuse, DNS poisoning, email phishing, and poor password selection.
An attacker leveraging a valid account to access sensitive and confidential data may not generate any security alerts, as the activity may be observed as normal user behavior. Attackers often use this to their advantage to access company resources such as email and VPNs, as well as employee workstations. The reuse of valid passwords obtained through phishing or T1110 – Brute Force has become a favorite for attackers due to its high level of success and value.
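Brute-force activity like the T1110 attempts mentioned above usually shows up in authentication logs as bursts of failures before any success. As a toy illustration of the detection idea (the window and threshold values are arbitrary choices, not a standard), this flags accounts with many failed logins inside a short span:

```python
from collections import defaultdict

def brute_force_suspects(failures, window_seconds=300, threshold=10):
    """Flag accounts showing brute-force-like bursts of failed logins.

    failures: iterable of (timestamp, account) pairs for failed logins,
    with timestamps in seconds. An account is flagged if it has at least
    `threshold` failures within any `window_seconds` span.
    """
    by_account = defaultdict(list)
    for ts, account in failures:
        by_account[account].append(ts)

    suspects = set()
    for account, times in by_account.items():
        times.sort()
        lo = 0
        for hi in range(len(times)):
            # Slide the window so it spans at most window_seconds.
            while times[hi] - times[lo] > window_seconds:
                lo += 1
            if hi - lo + 1 >= threshold:
                suspects.add(account)
                break
    return suspects
```

The same sliding-window logic also catches the quieter "password spray" variant if you group the failures by source address instead of by account.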
Organizations should first focus on providing Security Awareness Training to employees, covering topics such as password reuse and how to construct a strong password that is resistant to brute-force techniques. Organizations should also implement mitigations against techniques such as T1003 – Credential Dumping to ensure that credentials remain secret and secure. Furthermore, implementing Multi-Factor Authentication on critical systems is highly recommended, so that even if credentials are obtained, the potential impact is reduced. Similar to monitoring standard user behavior in T1199 – Trusted Relationship, leveraging User Behavior Anomaly detection can alert organizations when accounts are performing actions out of the ordinary.
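To make the Multi-Factor Authentication recommendation concrete, here is a minimal standard-library sketch of how a TOTP second factor is computed. This is the RFC 6238 scheme behind most authenticator apps, built on RFC 4226 HOTP; a production system should use a vetted library rather than this sketch:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over the counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                   # low nibble picks the offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def totp(secret: bytes, at: float = None, step: int = 30) -> str:
    """RFC 6238 TOTP: HOTP computed over the current 30-second time step."""
    at = time.time() if at is None else at
    return hotp(secret, int(at // step))
```

Even if an attacker phishes or brute-forces the password, they still need the shared secret on the user's device to produce the current six-digit code, which is exactly the impact reduction described above.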
After covering these four primary Initial Access techniques, it should be apparent that a combination of technology and education is required to secure and harden environments. Programmers must have secure coding habits, and users need security awareness knowledge; then, in the event that these educational controls fail, properly implemented and configured processes or technology can potentially prevent an attack from being successful, or at least afford the organization an opportunity to respond in a timely fashion. By implementing Initial Access controls, organizations are attempting to “keep the bad guys out.” However, one should always consider other tactics attackers may use in the event that these controls fail. This series will continue to cover each of the ATT&CK tactics, providing knowledge on the dangers of each and some of the most critical techniques.
Read more in Optiv’s ATT&CK series. Here's a review of related posts on this critical topic:
- ATT&CK Intro - September 2018
- ATT&CK Initial Access - October 2018
- ATT&CK Privilege Escalation - November 2018
- ATT&CK Discovery - March 2019
- ATT&CK Persistence - April 2019
- ATT&CK Credential Access - April 2019
- ATT&CK Execution - May 2019
- ATT&CK Defense Evasion - May 2019
- ATT&CK Lateral Movement Techniques - June 2019
- ATT&CK Exfiltration - July 2019
- ATT&CK Series: Command and Control - August 2019
- ATT&CK Series: Collection Tactics - September 2019
- ATT&CK Series: Impact - September 2019