Host OS Risks

March 23, 2020

Gaining Visibility into NIST SP 800-190, Part Seven

In part six of this series, we described how native AWS tools and third-party solutions can address container risks identified in section 3.4 of the NIST SP 800-190 Application Container Security Guide. In this post we’ll explore sections 3.5 and 4.5 of the guide: Host OS Risks and Host OS Countermeasures. We will again leverage our lab environment, which utilizes AWS EKS and Palo Alto Networks Prisma Cloud (formerly Twistlock).

The host OS is the Linux operating system that the worker nodes and control plane components run on. From an exploitation perspective, the goal of NIST’s guidance is twofold: reduce the potential for a cluster compromise originating from a host OS vulnerability, and limit an attacker’s ability to pivot to other parts of the Kubernetes cluster if they do manage to gain access.

Optiv CI Pipeline Lab

In the CI Pipeline lab, we’re employing Amazon Linux 2 (AL2) as the host OS for Kubernetes components. This is one of the default choices for EKS and is well supported when it comes to tracking security updates. A complete list of current advisories that might impact AL2 can be found at AWS.

3.5.1 Large Attack Surface
Any remotely accessible service is part of the attack surface. When operating as part of a Kubernetes cluster, the host Linux OS only needs to support K8S services and services used to administer the cluster, such as SSH. Any other service enabled by the default operating system installation is carried risk that contributes nothing to cluster operation and should be disabled.

The Cloud Native Computing Foundation and CIS publish hardening guides that cover the minimally required services for Kubernetes and mechanisms to harden hosts within the cluster. Additionally, Linux distributions designed to operate as cluster hosts ship with a reduced set of default services compared to a general-purpose Linux distribution.
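As a quick illustration, enabled services and listening sockets on a node can be enumerated and trimmed with standard systemd tooling. The commands below are a sketch; the service shown being disabled is just an example, and what's actually safe to remove will vary by distribution and cluster role.

```shell
# List every service enabled at boot -- anything not needed for
# Kubernetes or cluster administration is a candidate for removal.
systemctl list-unit-files --type=service --state=enabled

# Show all listening TCP sockets and the processes behind them.
ss -tlnp

# Example only: disable and stop a service the cluster doesn't need
# (postfix is a placeholder; verify impact before disabling anything).
sudo systemctl disable --now postfix
```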

3.5.1 Example Risks

  • A worker node may be configured with more running services than are required to function
  • Unnecessary local services may provide a privilege escalation path for an attacker

4.5.1 Countermeasures

  • Utilize a container-specific OS such as CoreOS
  • Refer to the CIS Kubernetes Hardening guide for techniques to lock down a node


AWS
AWS Inspector includes a CIS Benchmark for Amazon Linux 2 in addition to benchmarks for CentOS, RHEL and Ubuntu. Inspector also provides a Network Reachability report, which attempts to identify services on a local host that would be accessible from the Internet, along with their listener status. While this report can be helpful, it’s always recommended to validate firewall and network filtering rules with a port scanning tool, such as Nmap, from a non-AWS location.
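To sanity-check what the Network Reachability report claims, a scan from outside AWS only takes one command. The target address below is a placeholder, and the port list is just a plausible set for a K8S node (SSH, API server, kubelet, NodePort range):

```shell
# From a non-AWS host: verify that only the intended ports answer.
# 203.0.113.10 is a placeholder for a node's public IP address.
nmap -Pn -p 22,6443,10250,30000-32767 203.0.113.10
```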

HostOS Image 1

HostOS Image 2

Prisma Cloud
Prisma uses the CIS benchmark for Distribution Independent Linux versus the targeted benchmarks that Inspector provides. This can be an advantage if you’re not using one of the distributions covered by Inspector.

HostOS Image 3

3.5.2 Shared Kernel
Kernel syscall filtering, such as seccomp, and LSMs such as AppArmor can provide more granular access control to kernel elements. But in this case NIST is addressing the risk of simply running non-container or mixed-use workloads (e.g. a DNS server) on the same kernel instances as your Kubernetes cluster. While this should not be a concern outside of an internal proof of concept or test environment, it’s still a valid reminder to check for low-hanging fruit when it comes to risks like these.
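On the syscall-filtering point, recent Kubernetes versions can apply the container runtime's default seccomp profile per pod via the `securityContext` field (older clusters used a seccomp annotation instead). A minimal sketch, with pod and image names as placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-example        # placeholder name
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault      # filter syscalls with the runtime's default profile
  containers:
  - name: app
    image: nginx:1.17           # placeholder image
    securityContext:
      allowPrivilegeEscalation: false
```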

3.5.2 Example Risks

  • Direct inter-object attack surface due to the shared-kernel nature of container operations
  • High risk if mixed workloads are deployed on the same kernel instance

4.5.2 Countermeasures

  • Keep K8S instances dedicated to Kubernetes, do not mix workloads


AWS
Amazon does not directly address workload or server use analysis as part of its security profiling. While Inspector will alert you to vulnerabilities in all present components or available services, it’s still up to the user to perform the analysis.

Prisma Cloud
Prisma Cloud also doesn’t contain a specific feature targeted at this requirement, but the idea of dedicated systems is a core element in generally accepted security best practices.

3.5.3 Host OS Component Vulnerabilities
Linux distributions, even container-specific minimized distributions, will contain components or leverage kernels that will eventually require patching due to vulnerabilities or other factors. While minimizing the attack surface will reduce the number of components to track, you will still need to monitor and deploy security updates for your host OS environment.

3.5.3 Example Risks

  • Host OS components and kernel versions can contain vulnerabilities
  • Poor patch and configuration management practices can introduce vulnerabilities to the host OS

4.5.3 Countermeasures

  • Host OS components must be kept up to date and maintained, as with container components
  • Application dependencies should always reside within the container itself and not the host OS


AWS
Inspector will check for known vulnerabilities (CVEs) that have an associated ALAS (Amazon Linux Security Advisory) identifier. While CVE is the universal default for tagging published vulnerabilities, you may defer to ALAS identifiers in an AWS environment. In terms of which standard to track, the preferred choice is whichever route enables more rapid patch deployment with your environment’s tooling.
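On an AL2 host itself, ALAS-tagged updates can be listed and applied with yum's security tooling. A sketch; the advisory ID shown is a placeholder:

```shell
# List pending security updates with their ALAS identifiers.
yum updateinfo list security

# Show detail for a specific advisory (the ALAS ID here is a placeholder).
yum updateinfo info ALAS-2020-1371

# Apply only the updates flagged as security fixes.
sudo yum update --security
```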

HostOS Image 4

HostOS Image 5

Prisma Cloud
Prisma reviews the host OS for known vulnerabilities every 24 hours by default and can also be set to do so at a user-defined frequency. Vulnerability data for other infrastructure components is provided via the Intelligence Stream service. In the screenshot below, Prisma references the vulnerabilities by their ALAS identifier, which will also have an accompanying CVE.

HostOS Image 6

3.5.4 Improper User Access Rights
In most cases, Linux administration tasks can be conducted via an orchestration layer, but in the cases where direct command shell access is required, the process should be strictly controlled and audited. Ideally, any direct interaction is logged and subsequently reviewed for unusual behavior that may be an indicator of compromise.

3.5.4 Example Risks

  • SSH access to the host OS can enable a variety of attacks due to compromised accounts
  • Allowing direct user access comes with the requirement to manage and monitor that access, increasing costs

4.5.4 Countermeasures

  • Whenever possible, manage Kubernetes components via an orchestration solution versus direct command shell access
  • When shell access is required, all interactions should be logged and reviewed


AWS
AWS has multiple native mechanisms for monitoring attempted SSH access as well as recording user activity once a session is established. A basic approach is to use CloudWatch to monitor a log stream for failed SSH connections, which may be an indicator of an attack.
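As a sketch of what such a CloudWatch Logs metric filter keys on, the matching logic amounts to recognizing sshd failure lines. The log lines and regex below are illustrative sample data, not output from the lab environment:

```python
import re

# Typical sshd log lines (sample data for illustration; real lines would
# come from /var/log/secure shipped to CloudWatch Logs by the agent).
SAMPLE_LOG = [
    "Apr  1 12:00:01 ip-10-0-1-5 sshd[1042]: Failed password for invalid user admin from 203.0.113.9 port 51234 ssh2",
    "Apr  1 12:00:05 ip-10-0-1-5 sshd[1043]: Accepted publickey for ec2-user from 198.51.100.7 port 50022 ssh2",
    "Apr  1 12:00:09 ip-10-0-1-5 sshd[1050]: Failed password for root from 203.0.113.9 port 51240 ssh2",
]

# Same intent as a metric filter pattern like '"Failed password"':
# flag lines that indicate a failed SSH authentication attempt.
FAILED_SSH = re.compile(r"sshd\[\d+\]: Failed password for (invalid user )?(\S+) from (\S+)")

def failed_logins(lines):
    """Return (user, source_ip) tuples for failed SSH login attempts."""
    hits = []
    for line in lines:
        m = FAILED_SSH.search(line)
        if m:
            hits.append((m.group(2), m.group(3)))
    return hits

print(failed_logins(SAMPLE_LOG))
# -> [('admin', '203.0.113.9'), ('root', '203.0.113.9')]
```

In CloudWatch, the equivalent filter would feed a metric with an alarm on any nonzero count from an unexpected source.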

When it comes to direct SSH access to K8S cluster components, it’s recommended to leverage a bastion server in the control plane to consolidate monitoring and make it easier to spot a rogue connection. AWS provides configuration files and instructions for setting up a bastion host for this purpose.

Prisma Cloud
As part of its monitoring functionality, Prisma can log all SSH sessions, sudo and other commands run in an interactive session, and any Docker commands that induce a state change. If required, Docker read-only commands can be logged as well, and audit events can be redirected to an external monitoring solution such as Splunk or Sumo Logic.

3.5.5 Host OS File System Tampering
A container should not generally need to mount local file systems from a K8S worker host. This type of access can expose the host to file system tampering that could damage other containers or parts of the cluster. Additionally, sensitive data can be exposed to an attacker via read-only access even if a set of restricted permissions is in place. When containers do need to access storage, it should be via a specific volume dedicated to this task, not part of the host OS operating volume.
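A pod spec can enforce both points directly: a read-only root filesystem plus a dedicated volume for the one path a container legitimately writes. A minimal sketch, with pod, image, and mount path as placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: readonly-example            # placeholder name
spec:
  containers:
  - name: app
    image: nginx:1.17               # placeholder image
    securityContext:
      readOnlyRootFilesystem: true  # block writes to the container's root FS
    volumeMounts:
    - name: scratch
      mountPath: /var/cache/app     # the only writable path for the container
  volumes:
  - name: scratch
    emptyDir: {}                    # dedicated scratch space, not a hostPath mount
```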

3.5.5 Example Risks

  • An attacker may discover sensitive information with access to the host file system
  • Disruption of the host OS file system can lead to disruption of other K8S components

4.5.5 Countermeasures

  • Restrict container access to the host local file system
  • Assign dedicated storage volumes to containers when needed, to ensure that storage use does not interfere with host OS operations


AWS
Amazon Inspector includes CIS benchmarks as rule templates, and these contain checks for file system permissions, which should be part of a security review. In the example below, the Inspector report shows multiple failures for not providing the separate file system partitions that would be expected in a hardened system.

HostOS Image 7

Prisma Cloud
Prisma alerts by default when containers mount sensitive local host directories. The runtime models also determine where a container should write to the file system when required, and will enforce those controls. In addition, there’s an option for custom runtime rules to allow or deny write operations to defined directories. Like Inspector, Prisma leverages a CIS ruleset during a host audit that includes file system checks.

HostOS Image 8

The NIST SP 800-190 Application Container Security Guide does an outstanding job detailing the risks that new cloud-native container architectures pose to the organization, and we strongly encourage all security practitioners to review its contents.

If you still have questions, please drop us a line. We’ll be happy to help you get the answers you need.

Read more in Optiv’s Gaining Visibility into NIST SP 800-190 series. Here's a review of related posts:


By: John Bock

Senior Research Scientist | Optiv

Related Posts

  • Container Risks (February 20, 2020) – Container technologies allow developers to assemble code and associated dependencies into a single package or container image.
  • Orchestrator Risks (January 28, 2020) – The health and security of the orchestration technology cluster is critically important and should not be understated.
  • Registry Risks (January 21, 2020) – Part 4 in the Gaining Visibility into NIST SP 800-190 series explores Registry Risks and Registry Countermeasures.
