Announcements

Policy Based Governance in Trusted Container Platform

NIST NCCoE

Virtualization and containerization significantly benefit the efficiency, adaptability, and scalability of workloads. However, workloads may be hosted in an environment that shares a pool of physical platforms in a data center or multi-tenant cloud. This raises security concerns about whether workloads are running on trustworthy platforms, in terms of the platform's integrity, its locality and other metadata, and its ability to establish a root of trust. Other concerns include the confidentiality of workload images and the protection of their keys. These concerns are especially important when dealing with regulated or sensitive workloads and data.

We believe that the foundation of any data center or edge computing security strategy should be securing the platform on which data and workloads will be executed and accessed. The physical platform is the first layer of any layered security approach and provides the initial protections that help ensure higher-layer security controls can be trusted. This post presents the technologies used to secure the platform and how to use them.

This blog presents an innovative technology solution with policy-based governance to automate the process of mitigating these security concerns for containers (as illustrated in the figure below). Policies can be defined to 1) ensure workloads in the cluster run only on trusted physical platforms; 2) report untrusted platforms to a management alert dashboard; 3) encrypt workload images before they are uploaded; and 4) decrypt and launch images only on trusted platforms that meet the appropriate metadata requirements (e.g., location, asset tag).

In this post, we start by describing the building blocks of a trusted container platform and the technologies used, including Intel® Security Libraries for Data Center (ISecL-DC), Red Hat OpenShift, IBM Cloud Pak for Multicloud Management, and IBM Encrypted OCI Container Images. We then show the architecture diagram with all the components of the demo, followed by a demonstration of each feature. Finally, we demonstrate the implementation in support of the policy described previously. This is a joint project among NIST, IBM, Red Hat, and Intel. The objective of the blog is to share early development of a prototype that leverages open-source software components and commercial off-the-shelf technology. This blog is the first in a series sharing the research conducted by the team.

Elements of a Trusted Container Platform

A high-assurance cloud should be able to secure and govern the use of container workloads with higher precision: tying a workload to a physical entity and, subsequently, to higher-order logical entities. To do so, we leverage the following technologies:

Hardware Root of Trust

  • Hardware-based security techniques can help mitigate these threats by establishing and maintaining platform trust—an assurance in the integrity of the underlying platform configuration, including hardware, firmware, and software. By providing this assurance, security administrators can gain a level of visibility and control over where access to sensitive workloads and data is permitted. Platform security technologies that establish platform trust can provide notification or even self-correction of detected integrity failures.

Workload placement/orchestration

  • Platform information and verified firmware/configuration measurements retained within an attestation service can be used for policy enforcement in a variety of use cases. One example is orchestration scheduling. Cloud orchestrators, such as Kubernetes and OpenStack, provide the ability to label server nodes in their databases with key-value attributes. The attestation service can publish trust and informational attributes to orchestrator databases for use in workload scheduling decisions. In addition, the orchestration system should provide visibility into the attestation state of the machines.
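
The scheduling decision described above can be sketched as a simple filter over the trust labels an attestation service publishes into the orchestrator's node database. This is a minimal illustrative sketch: the label keys ("trusted", "asset-tag/region") are hypothetical, not the actual ISecL-DC label names.

```python
# Hypothetical sketch: filter candidate nodes by trust labels that an
# attestation service has published into the orchestrator's node database.
# Label keys and values here are illustrative only.

def schedulable_nodes(nodes, required_labels):
    """Return names of nodes whose labels satisfy every required key/value pair."""
    return [
        node["name"]
        for node in nodes
        if all(node["labels"].get(k) == v for k, v in required_labels.items())
    ]

nodes = [
    {"name": "worker-0", "labels": {"trusted": "true", "asset-tag/region": "EU"}},
    {"name": "worker-1", "labels": {"trusted": "false", "asset-tag/region": "EU"}},
    {"name": "worker-2", "labels": {"trusted": "true", "asset-tag/region": "US"}},
]

# Only worker-0 is both trusted and in the required region.
print(schedulable_nodes(nodes, {"trusted": "true", "asset-tag/region": "EU"}))
```

In a real cluster the same effect is achieved with node labels and scheduling constraints (e.g., a Kubernetes nodeSelector) rather than application code.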

Workload encryption

  • Consumers who place their workloads in the cloud or at the edge are typically forced to accept that their workloads are secured by their service providers, without insight into what security mechanisms are in place. The ability for end users to encrypt their workload images provides at-rest cryptographic isolation to help protect consumer data and intellectual property. When the runtime node service receives a launch request, it can detect that the image is encrypted and request the decryption key. This request is passed through an attestation service so that an internal trust evaluation of the platform can be performed. The key request is then forwarded to the key broker with proof that the platform has been attested. The key broker verifies the attested platform report and releases the key back to the cloud service provider and node runtime services. At that point the node runtime can decrypt the image and proceed with normal workload orchestration. The disk encryption kernel subsystem can provide at-rest encryption for the workload on the platform.
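
The key broker's release decision above can be sketched as follows. This is a hypothetical simplification: the report fields, policy format, and key store are illustrative, and a real broker would also cryptographically verify the attestation report's signature before trusting its contents.

```python
# Hypothetical sketch of a key broker's release decision: the key is
# returned only when the platform attested as trusted AND its asset tags
# satisfy the policy bound to the key. All field names are illustrative.

def release_key(attestation_report, key_policy, key_store):
    """Release a key only for a trusted, policy-matching platform."""
    if not attestation_report.get("trusted"):
        return None  # platform failed attestation: withhold the key
    tags = attestation_report.get("asset_tags", {})
    # every tag required by the key's policy must match the attested tags
    if any(tags.get(k) != v for k, v in key_policy["required_tags"].items()):
        return None
    return key_store[key_policy["key_id"]]

key_store = {"img-key-1": b"\x00" * 32}
policy = {"key_id": "img-key-1", "required_tags": {"region": "EU"}}

released = release_key(
    {"trusted": True, "asset_tags": {"region": "EU"}}, policy, key_store)
withheld = release_key(
    {"trusted": True, "asset_tags": {"region": "US"}}, policy, key_store)
```

Here `released` holds the key, while `withheld` is `None` because the attested region does not match the policy.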

Technologies

Before jumping into details of architecture and processes that establish the Trusted Container Platform, let us go through the list of technologies we are using and see how they map onto the required building blocks we've established.

Hardware Root of Trust Technologies

Intel® Security Libraries for Data Center (ISecL-DC) consists of a set of building blocks that discover, attest, and utilize hardware security features (e.g., Intel® Trusted Execution Technology (Intel® TXT), Intel® Boot Guard, Intel® Software Guard Extensions (Intel® SGX)) to help enable critical cloud security and confidential computing use cases. The building blocks provide a consistent set of APIs for easy integration with cloud management software and with security monitoring and enforcement tools. ISecL-DC provides middleware that integrates platform security features with cloud orchestration and services.

Workload placement/orchestration

Red Hat OpenShift is a CNCF-certified Kubernetes distribution, providing an open hybrid cloud platform with enterprise-class resiliency. Red Hat OpenShift offers automated installation, upgrades, and lifecycle management throughout the container stack—the operating system, Kubernetes and cluster services, and applications—on any cloud. The platform offers developer- and operations-centric tools that enable application development, deployment, scaling, and lifecycle maintenance for long-term innovation. Red Hat OpenShift focuses on security at every level of the container stack and throughout the application lifecycle. Red Hat OpenShift runs on Red Hat Enterprise Linux (RHEL) CoreOS, a container-optimized operating system. IBM Cloud Pak components, which run exclusively on OpenShift clusters, leverage the security capabilities built into OpenShift and jump-start the journey to building cloud-native business applications.

The multicloud management technology in IBM Cloud Pak for Multicloud Management / Red Hat Advanced Cluster Management for Kubernetes enhances the security lifecycle of your hybrid cloud environments. Enterprises must meet internal standards for software engineering, secure engineering, resiliency, security, and regulatory compliance for workloads hosted on hybrid clouds. Teams that provide enterprise cloud platforms, as well as application business units that run their business applications on such platforms, can use this governance capability to gain visibility and drive remediation of various security and configuration issues to help meet such enterprise standards.

IBM Cloud Pak for Multicloud Management and Red Hat Advanced Cluster Management are two offerings that provide similar multicloud management capabilities and are interchangeable in this prototype. For our blog series, we are using IBM Cloud Pak for Multicloud Management.

Workload encryption

Encrypted Container Images is a capability introduced by IBM Research into container build tools and runtimes such as skopeo, buildah, containerd, and OpenShift that allows the encryption and decryption of container images. It is based on the OCI container standard and ensures the confidentiality of container images from the moment they leave the pipeline until they are run on a trusted compute node with access to the decryption keys. In the event of a registry compromise, the confidentiality of the container images stays intact, and we can cryptographically associate trust with images.

Architecture

Let's see how we can put these technologies together to form an example trusted container platform. Here is an overview of the architecture, followed by a description of how the technologies map onto the components.

Overview

  • ISecL-DC Server (left) contains the key broker service, attestation service, and utilities for attesting the hardware root of trust and host measurements
  • There are two clusters: a managed cluster (middle) and a management cluster (right).
  • The managed cluster is the cluster in which our trusted workloads will run. We can imagine a multiplicity of managed clusters governed by the Multi Cloud Manager (MCM).
  • The management cluster contains the control plane for Multi Cloud Manager (MCM), and DevOps related tooling.
  • For our setup, we have two OpenShift clusters: a management cluster and a managed cluster. The management cluster runs on VMware with vSAN on three bare-metal servers. The managed cluster runs on KVM, but with a hybrid install that places worker nodes on both KVM guest VMs and a bare-metal server.

Attestation of bare metal nodes

  • Each bare-metal node has a Trusted Platform Module (TPM) and runs a boot chain, from the bootloader up through the OS stack, that is measured using Intel TXT as the hardware root of trust for measurement.
  • These measurements are collected by the ISecL-DC trust agent on the nodes.
  • The node trust status is verified via the ISecL-DC attestation service and then updated by the attestation hub into each OpenShift cluster.

Key Management

  • The Key Broker (within ISecL-DC services group in diagram) manages keys for all clusters and helps ensure that they can only be accessed by attested trusted nodes. It is designed to require trust attestation in order to release keys. 

Encryption/Decryption

  • OpenShift contains a pipeline that will encrypt container images with the help of the skopeo tool.
  • The decryption counterpart is CRI-O, the OpenShift container runtime, which is automatically deployed on each worker node.
  • The encryption/decryption tools skopeo and CRI-O use a plugin to talk to ISecL-DC services in order to securely exchange encryption/decryption keys.


Orchestration / Multi Cloud Management and Policy Enforcement

  • Each managed cluster contains an MCM Klusterlet, which ensures that each managed cluster adheres to the policy in place.
  • Policies are created in the MCM Hub, and these policies are propagated to each managed cluster, where they are enforced.
  • We have three policies in place:
  • Ensure all nodes in the cluster(s) are trusted,
  • Ensure user container workloads are encrypted, and
  • Ensure that a DevOps pipeline is in place to enforce building applications with an encryption policy.
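
The three policies above can be sketched as compliance checks that a policy controller evaluates against cluster state and surfaces as violations on the management dashboard. This is a hypothetical simplification: the state fields below are illustrative and not the actual MCM/RHACM policy schema.

```python
# Hypothetical sketch of policy evaluation: each policy is a predicate over
# cluster state; any failing predicate becomes a reportable violation.
# Field names are illustrative, not the MCM policy schema.

POLICIES = {
    "all-nodes-trusted":   lambda c: all(n["trusted"] for n in c["nodes"]),
    "workloads-encrypted": lambda c: all(w["encrypted"] for w in c["workloads"]),
    "pipeline-present":    lambda c: c["devops_pipeline"],
}

def evaluate(cluster):
    """Map each policy name to its compliance result for this cluster."""
    return {name: check(cluster) for name, check in POLICIES.items()}

cluster = {
    "nodes": [{"name": "worker-0", "trusted": True},
              {"name": "worker-1", "trusted": False}],   # untrusted node
    "workloads": [{"name": "app", "encrypted": True}],
    "devops_pipeline": True,
}

# Collect the names of non-compliant policies for the alert dashboard.
violations = [name for name, ok in evaluate(cluster).items() if not ok]
```

With the untrusted `worker-1` present, only the "all-nodes-trusted" policy is violated; the controller would then raise an event and drive remediation.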

Processes of a Trusted Container Platform

Now that we've established the technologies that form the building blocks, let us see how we have used them to achieve the desired policy put forward at the start of this post.

Ensure nodes in Container Platform are trusted

To help ensure a secure and trusted runtime environment, we want to assert that our clusters run only trusted nodes (rooted in a hardware root of trust, i.e., Intel TXT and TPM). Node platform attestation lets a cluster know whether each node is in a trusted or untrusted state, so the cluster can schedule workloads on trusted nodes when required and provide mitigation if a node becomes untrusted. ISecL-DC platform attestation and cluster integration includes three components: a trust agent deployed on each node, a verification service, and an integration hub. Using hardware (Intel TXT or Intel Boot Guard) as the core root of trust for measurement to establish the chain of measurement for each component, the trust agent responds to the verification service's TPM quote request by reporting the host manifest. The verification service verifies the measurements against a database of trusted measurements and asserts whether the node is trusted. The integration hub retrieves the node trust status, including node asset tags, and pushes them to the orchestration system as labels.
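
The verification step can be sketched as comparing the measurements in a host's quoted manifest against a database of known-good values. This is a hypothetical illustration: the component names, inputs, and the flat dictionary structure are stand-ins for the real measurement manifest.

```python
# Hypothetical sketch of measurement verification: a host is trusted only
# if every expected component's measurement matches the known-good value.
# Component names and measured inputs are illustrative.

import hashlib

def measure(data: bytes) -> str:
    """Stand-in for a boot-time measurement (here, a SHA-256 digest)."""
    return hashlib.sha256(data).hexdigest()

# Known-good measurements recorded for this platform class.
GOOD_MEASUREMENTS = {
    "bios":       measure(b"bios-v1.2"),
    "bootloader": measure(b"grub-2.06"),
    "os-kernel":  measure(b"kernel-5.x"),
}

def verify_host(manifest: dict) -> bool:
    """Assert trust only when every expected component matches exactly."""
    return all(manifest.get(comp) == good
               for comp, good in GOOD_MEASUREMENTS.items())

good_host = dict(GOOD_MEASUREMENTS)
tampered_host = dict(GOOD_MEASUREMENTS,
                     bootloader=measure(b"evil-loader"))  # modified component
```

A single mismatched component (the tampered bootloader here) is enough to mark the node untrusted, which the integration hub would then reflect in the cluster's labels.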

This helps ensure that the compute nodes in the cluster are trusted and that their asset tags (e.g., geolocation) are known, and that if, for any reason, the integrity of a node is compromised, it is shut down and removed from the secure environment. Therefore, all nodes running in the cluster should be attested and trusted at all times. The trust status of the nodes is defined as a policy and is enforced through MCM. If a node loses its trust status through a failure to attest, an event is created in MCM and the node is removed from the cluster.


Ensure container workloads are encrypted and secure

The process of ensuring that container workloads are secure is twofold. First, we need a secure process for building container workloads: a DevSecOps process in which we validate the security of the container workload. We use OpenShift's secure pipelines as a gate to ensure proper vulnerability management, and we add to the process by using the skopeo/buildah encryption capabilities to encrypt the workloads. During the encryption process, the Key Broker (an ISecL-DC component) is responsible for creating the key according to the policy put forward. An example policy could be that a container workload may only run on a trusted and attested node, or something more specific, like requiring an asset tag of "region:EU", which is bound to the hardware TPM as part of the ISecL-DC capabilities. Once this is done, the encrypted image is hosted in a container registry.

To tie things together, we need to ensure that our labeled workloads run only in the secure environment. When the OpenShift container runtime downloads the container image from the registry, it detects that it is encrypted and reaches out to the ISecL-DC Key Broker to obtain the decryption key. At this point, ISecL-DC attests the node and ensures that it is trusted and has the required asset tags configured in the policy by the DevSecOps pipeline. Only if this holds is the key released and the container image decrypted and run. This acts as an additional layer of protection in case the workload is accidentally or maliciously run on an untrusted node.

Conclusion

In this blog post, we've laid out some of the requirements of a Trusted Container Platform that supports regulated or sensitive workloads. We've highlighted the technologies we used to construct such a prototype platform and shown how they can work in concert to deliver this capability. In the next blog post, we will detail the architecture as well as the steps to set up a Trusted Container Platform like the one shown in this post.


References

Advancing container image security with encrypted container images

https://developer.ibm.com/technologies/containers/articles/advancing-image-security-encrypted-container-images/

Intel® Security Libraries for Data Center (ISecL-DC)

https://01.org/intel-secl

Red Hat OpenShift

https://www.openshift.com/

Hardware-Enabled Security for Server Platforms: Enabling a Layered Approach to Platform Security for Cloud and Edge Computing Use Cases

https://nvlpubs.nist.gov/nistpubs/CSWP/NIST.CSWP.04282020-draft.pdf

IBM Cloud Pak for Multicloud Management

https://www.ibm.com/cloud/cloud-pak-for-management

Red Hat Advanced Cluster Management

https://www.redhat.com/en/technologies/management/advanced-cluster-management


Authors

IBM: Brandon Lum, Harmeet Singh 

Intel: Haidong Xia, Tim Knoll

NIST: Michael Bartock 

Red Hat: Yu Cao