Access control is a critical part of securing your Kubernetes and CI/CD tooling. In this article, I'll explain the role access control plays in a cloud-native CI/CD pipeline and show how to set up authentication for Amazon EKS clusters using AWS Identity and Access Management (IAM).
Kubernetes is the world's leading container orchestration platform thanks to its comprehensive API and developer-friendly features. It lets you build scalable, reliable applications that run both on-premises and in public clouds, and it can deploy and manage hundreds of instances across a data center or cloud environment.
In a Kubernetes environment, application development and deployment processes demand a high degree of automation. That's why continuous integration (CI) and continuous deployment (CD) have adapted to the cloud-native world, making it possible to build, test, and release applications with minimal human intervention.
The CI/CD tools that make up your pipeline can pull the latest changes from a source code repository and replace the manual steps of compiling, testing, validating, and deploying to a Kubernetes cluster. This requires integrating with a container registry, a configuration manager (typically Helm), and several cluster environments (for dev, test, and production).
It is important to set up access control rules that govern all access to the CI/CD pipeline. It should be easy and immediately clear who has access, when, and how. Record, monitor, and manage access to all pipeline components and resources, whether role-based, time-based, or task-based. This can prevent unauthorized changes from reaching production and limit the damage a compromised account can cause.
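To make role-based access concrete, here's a minimal sketch of scoping a pipeline's deploy step with Kubernetes RBAC. The names (ci-deployer, the demo namespace, and the role and binding names) are hypothetical placeholders, not from this article:

# Hypothetical example: give a CI service account only what a deploy job needs.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ci-deployer
  namespace: demo
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ci-deploy-role
  namespace: demo
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ci-deploy-binding
  namespace: demo
subjects:
  - kind: ServiceAccount
    name: ci-deployer
    namespace: demo
roleRef:
  kind: Role
  name: ci-deploy-role
  apiGroup: rbac.authorization.k8s.io
EOF

Because the Role is namespaced, a leaked ci-deployer token can't touch anything outside the demo namespace.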
Perform regular audits to discover duplicate system or service accounts, or accounts belonging to former employees that have not been revoked. Make sure there is strong authentication for all users, with regular password rotation. Machine identity and authentication are also important to secure non-human access to containers and Kubernetes clusters.
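On the AWS side, one way to support such audits is IAM's credential report, which lists the age of every user's password and access keys. These are standard AWS CLI commands:

# Ask IAM to generate a credential report for the account.
aws iam generate-credential-report
# Once ready, download it; the Content field is base64-encoded CSV.
aws iam get-credential-report --query Content --output text | base64 -d

Stale entries in the report are good candidates for the duplicate or orphaned accounts mentioned above.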
Amazon Web Services (AWS) provides a managed Kubernetes service called Amazon Elastic Kubernetes Service (Amazon EKS). EKS aims to make it easy for organizations to run Kubernetes in the AWS cloud and on-premises.
Compatibility
Kubernetes is an open-source platform that enables organizations to automate the deployment, management, and scaling of containerized applications. Since Amazon EKS is certified Kubernetes-conformant, applications that already run on upstream Kubernetes are compatible with EKS.
Automation
EKS automatically manages the scalability and availability of the Kubernetes control plane, which is responsible for keeping applications available, scheduling containers, storing cluster data, and performing other core tasks.
Cloud services
EKS enables organizations to run Kubernetes applications on compute services like AWS Fargate and Amazon Elastic Compute Cloud (Amazon EC2). Organizations can leverage the performance, reliability, availability, and scalability of the AWS infrastructure and use integrations with AWS security and networking services such as IAM, Amazon VPC, and Elastic Load Balancing.
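For instance, a single eksctl command can create an EKS cluster whose pods run on Fargate. The cluster name and region below are placeholders matching the rest of this article:

# Placeholder values; substitute your own cluster name and region.
eksctl create cluster --name demo-cluster --region demo-region-code --fargate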
Amazon EKS employs IAM to establish authentication for Kubernetes clusters while relying on native Kubernetes RBAC for authorization. IAM is used only to authenticate valid IAM entities; the native Kubernetes RBAC system manages all permissions for interacting with your EKS cluster's Kubernetes API.
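This pairing is what the standard kubeconfig setup for EKS wires up: aws eks update-kubeconfig configures kubectl to fetch a short-lived token from IAM for each request, and the cluster then maps that IAM identity to Kubernetes RBAC groups:

# Configure kubectl to authenticate to the cluster through IAM.
aws eks update-kubeconfig --name demo-cluster --region demo-region-code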
Here is how this works:
When an Amazon EKS cluster is created, the IAM user or role that created it automatically receives system:masters permissions. These permissions grant unrestricted access to the cluster through the Kubernetes API. The user or role gets these permissions in the cluster's RBAC configuration on the Amazon EKS control plane, but the mapping doesn't appear in any configuration you can view, such as the aws-auth ConfigMap.
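system:masters is so powerful because upstream Kubernetes binds that group to the cluster-admin ClusterRole by default. You can inspect the binding yourself:

# Show the default binding that grants system:masters full cluster access.
kubectl get clusterrolebinding cluster-admin -o yaml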
The instructions and code below are based on the official Amazon EKS documentation.
To verify if you can grant an IAM user or role access to an Amazon EKS cluster:
Run the following command to see which credentials kubectl uses to access the cluster:
cat <path-to-kubeconfig>
Replace <path-to-kubeconfig> with the path to the kubeconfig file if the default path isn't used.
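To see which IAM identity those credentials resolve to, you can also run a standard AWS CLI call:

# Returns the account, user ID, and ARN of the active IAM identity.
aws sts get-caller-identity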
To add the required mappings to the aws-auth ConfigMap, first view the existing identity mappings:
eksctl get iamidentitymapping --cluster demo-cluster --region=demo-region-code
Then create a new identity mapping for the IAM role:
eksctl create iamidentitymapping \
--cluster demo-cluster \
--region=demo-region-code \
--arn arn:aws:iam::demo-account-id:role/demo-role \
--group demo-access-group \
--no-duplicate-arns
Replace demo-access-group with the group specified in your Kubernetes role binding or cluster role binding.
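If such a binding doesn't exist yet, here's a hedged sketch that grants demo-access-group cluster-wide read-only access using the built-in view ClusterRole; the binding name is a placeholder:

# Hypothetical example: bind demo-access-group to the built-in "view" role.
cat <<'EOF' | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: demo-access-group-view
subjects:
  - kind: Group
    name: demo-access-group
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
EOF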
Verify that the new mapping appears:
eksctl get iamidentitymapping --cluster demo-cluster --region=demo-region-code
To apply the modified aws-auth ConfigMap to the cluster, first check whether it already exists:
kubectl describe configmap -n kube-system aws-auth
If the command returns Error from server (NotFound): configmaps "aws-auth" not found, continue with the following steps.
Download the ConfigMap template, then open it in an editor; it looks like this:
curl -o aws-auth-cm.yaml https://s3.us-west-2.amazonaws.com/amazon-eks/cloudformation/2020-10-29/aws-auth-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <ARN of demo instance role>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
Replace <ARN of demo instance role> with the Amazon Resource Name (ARN) of the IAM role associated with the nodes. You can find this value in the AWS CloudFormation stack outputs. Save the file afterward, and make sure not to modify any other parts of it.
Apply the configuration:
kubectl apply -f aws-auth-cm.yaml
kubectl get nodes --watch
Wait for the nodes to reach the Ready status.
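As a final sanity check, you can impersonate the mapped group to confirm its permissions. kubectl's built-in impersonation flags make this possible (you need impersonation rights yourself, and demo-user is a placeholder):

# Check whether the mapped group can read pods, via impersonation.
kubectl auth can-i get pods --as demo-user --as-group demo-access-group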
In this article, I explained the importance of setting up robust authentication for your EKS clusters and showed how to achieve it with EKS and AWS IAM. The primary steps are:
1. Check which credentials kubectl uses to access the cluster.
2. Map the IAM user or role to a Kubernetes group in the aws-auth ConfigMap with eksctl.
3. If the aws-auth ConfigMap doesn't exist yet, download it, set the node role ARN, apply it with kubectl, and wait for the nodes to become Ready.
I hope this will be useful as you level up your EKS security strategy.