CEO Geniusee Software
In December 2018, Kubernetes, the world's most popular container orchestration system, saw the first major security vulnerability to hit the project's ecosystem. CVE-2018-1002105 allowed attackers to compromise clusters via the Kubernetes API server, making it possible to execute malicious code, install malware, and more.
Earlier that year, a misconfigured Kubernetes dashboard led to cryptocurrency-mining software being installed on Tesla's infrastructure. Attackers took advantage of the fact that one of the Kubernetes dashboards was not password-protected, which let them access a pod whose service account had access to Tesla's larger AWS infrastructure.
Organizations that are speeding up the adoption of containers and container orchestration also need to take deliberate steps to protect this critical part of their infrastructure. Below are nine Kubernetes security best practices based on customer data. Follow them to better protect your infrastructure.
Each quarterly Kubernetes release contains not only bug fixes but also new security features. To take advantage of them, we recommend running the latest stable version.
Upgrades and support can be harder to deal with than the new features a release offers, so plan upgrades at least once a quarter. Using a managed Kubernetes provider can significantly simplify upgrades.
Use RBAC (Role-Based Access Control) to control who can access the Kubernetes API and what rights they have. RBAC is usually enabled by default as of Kubernetes 1.6 (later for some providers), but if you have upgraded since then and never changed the configuration, you should double-check your settings.
However, enabling RBAC is not enough; it still needs to be used effectively. In general, cluster-wide rights should be avoided in favor of rights scoped to particular namespaces. Avoid giving anyone cluster-admin privileges, even for debugging; it is much safer to grant rights only when necessary and only for as long as they are needed.
If an application requires access to the Kubernetes API, create a separate service account for it and give it the minimum set of rights required for each use case. This approach is much better than granting excessive privileges to the namespace's default account.
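As a sketch of this least-privilege approach, a dedicated service account can be bound to a namespaced Role that only allows reading pods (all names here — `my-app`, `my-namespace`, `pod-reader` — are illustrative):

```yaml
# Hypothetical example: a service account for an app that only needs
# to read pods in its own namespace.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app
  namespace: my-namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: my-namespace
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: my-namespace
subjects:
- kind: ServiceAccount
  name: my-app
  namespace: my-namespace
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Because the binding is a RoleBinding rather than a ClusterRoleBinding, the rights stay confined to one namespace.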
Creating separate namespaces is important as the first level of component isolation. It is much easier to adjust security settings — for example, network policies — when different types of workloads are deployed in separate namespaces.
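For example, isolating a workload type might start with nothing more than its own namespace (the name `payments` is illustrative), against which namespace-scoped RBAC rules and network policies can then be defined:

```yaml
# Hypothetical dedicated namespace for a sensitive workload type.
apiVersion: v1
kind: Namespace
metadata:
  name: payments
  labels:
    team: payments
```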
A good practice for limiting the potential consequences of a compromise is to run workloads with sensitive data on a dedicated set of machines. This reduces the risk of a less secure application reaching the sensitive application through a shared container runtime or host. For example, the kubelet on a compromised node usually has access to the contents of secrets only if they are mounted into pods scheduled on that node. If important secrets can be found on many nodes across the cluster, an attacker has more opportunities to steal them.
Separation can be done using node pools (in the cloud or on-premises), as well as Kubernetes mechanisms such as namespaces, taints, and tolerations.
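A minimal sketch of the taint/toleration approach, assuming a node has been tainted and labeled for sensitive workloads (the node name, key, and image are illustrative):

```yaml
# Assumed one-time node setup (names are hypothetical):
#   kubectl taint nodes sensitive-node-1 workload=sensitive:NoSchedule
#   kubectl label nodes sensitive-node-1 workload=sensitive
# Only pods that tolerate the taint can be scheduled onto that node.
apiVersion: v1
kind: Pod
metadata:
  name: sensitive-app
spec:
  tolerations:
  - key: "workload"
    operator: "Equal"
    value: "sensitive"
    effect: "NoSchedule"
  nodeSelector:
    workload: sensitive   # pin the pod to the dedicated node(s)
  containers:
  - name: app
    image: example/sensitive-app:1.0
```

The taint keeps ordinary workloads off the dedicated nodes, while the nodeSelector keeps the sensitive workload on them.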
Sensitive metadata, such as kubelet administrative credentials, can be stolen or used maliciously to escalate privileges in a cluster. For example, a recent finding in Shopify's bug bounty program showed in detail how a user could escalate privileges by obtaining metadata from the cloud provider using specially crafted data supplied to one of the microservices.
The GKE metadata concealment feature changes the cluster deployment mechanism in a way that avoids this problem, and we recommend using it until a permanent solution is implemented.
Network Policies let you control network access into and out of containerized applications. To use them, you need a networking provider that supports this resource; for managed Kubernetes providers such as Google Kubernetes Engine (GKE), support may need to be enabled explicitly.
Once everything is ready, start with simple default network policies, such as blocking traffic from other namespaces by default.
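Such a default can be sketched with a policy that selects every pod in a namespace and only admits ingress from pods in that same namespace (the namespace name is illustrative):

```yaml
# Applies to all pods in my-namespace; an empty podSelector in "from"
# matches pods in the policy's own namespace, so cross-namespace
# ingress is denied by default.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-from-other-namespaces
  namespace: my-namespace
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}
```

More permissive policies can then be layered on top for the specific flows each application needs.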
Pod Security Policy sets the default values used to start workloads in the cluster. Consider defining a policy and enabling the Pod Security Policy admission controller: the instructions for these steps vary depending on the cloud provider or deployment model used.
To start, you might want to drop the NET_RAW capability in containers to protect yourself from certain types of spoofing attacks.
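A minimal sketch of such a policy is below; note that PodSecurityPolicy was later deprecated and removed (in Kubernetes 1.25) in favor of Pod Security admission, so this applies to the Kubernetes versions contemporary with this article:

```yaml
# Hypothetical minimal PodSecurityPolicy that forces NET_RAW to be dropped.
# The remaining fields are required by the API and are left permissive here.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: drop-net-raw
spec:
  requiredDropCapabilities:
  - NET_RAW
  seLinux:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
```

The policy only takes effect once the PodSecurityPolicy admission controller is enabled and the policy is made usable (via RBAC) by the relevant service accounts.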
Beyond the cluster itself, you can also take steps to improve the security of the hosts it runs on.
Make sure audit logging is enabled and that you monitor the logs for unusual or unwanted API calls, especially authorization failures; such entries will have a message with the "Forbidden" status. Authorization failures can mean that an attacker is trying to use stolen credentials.
Managed solution providers (including GKE) provide access to this data in their interfaces and can help you set up notifications in case of authorization failures.
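For self-managed clusters, audit logging is configured through an audit policy passed to the API server; a minimal sketch (assuming you control the `--audit-policy-file` and `--audit-log-path` flags, which is generally not the case on managed offerings):

```yaml
# Minimal audit policy: record request metadata for every call.
# "Forbidden" responses from failed authorization will appear in this log.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata
```

Real policies usually add rules to log request bodies for sensitive resources and to drop high-volume, low-value entries.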
Looking to the future
Follow these guidelines for a more secure Kubernetes cluster. Remember that even after a cluster is configured securely, you still need to ensure security in the other aspects of configuring and operating containers. To improve the security of your technology stack, look into tools that provide a central system for managing deployed containers and that continuously monitor and protect containers and cloud-native applications.