Amazon EKS Authentication and Authorization Process
Take a look at the step-by-step overview of how EKS interacts with Kubernetes to secure your containers.
Containers are one of the most important concepts in cloud computing. In fact, they have completely reshaped the way that many of us think about and approach virtualization. Containers behave much like virtual machines (VMs), but they are far more flexible and lightweight than a full-blown VM. Because they are so lightweight and flexible, containers have enabled us to take entirely new approaches to application architecture.
The purpose of Kubernetes is to provide a platform that can automate the deployment and management of containerized applications at scale. Google Kubernetes Engine is probably its best-known managed implementation, but other supporting platforms are available. For a platform to be considered a conformant Kubernetes implementation, it must respect the Cloud Native Computing Foundation (CNCF) K8s conformance requirements. Amazon EKS is one of the latest container orchestration systems on the market to achieve this on AWS. Being open source, Kubernetes is very versatile, and there are few restrictions on where and how it can be used. Released in 2018, Amazon EKS takes care of launching and managing the master nodes (the control plane) of a Kube cluster on behalf of developers.
Kubernetes is, at its core, an HTTP REST API; its endpoint is known as the API Server, which runs on the Kubernetes cluster master nodes. Requests from both outside and inside the cluster, including communication between cluster components, happen through API calls to the API Server. Access to this API must, therefore, be secured by client authentication.
Kubernetes Authentication/Authorization Overview
Kubernetes supports several authentication modules that can be used by the API server; the available authentication methods are described in the Kubernetes documentation. These include, but aren’t limited to:
- X509 client certificates
- Service account tokens
- OpenID Connect tokens
- Webhook token authentication
- Authenticating proxy
Multiple authentication modules can be specified. In that case, each one is tried in sequence until one of them succeeds.
Generally speaking, when the API server receives a request, it passes it to the authenticator module. If the module can authenticate the request, it maps it to a “subject” (a user, group, or service account). Once the request is authenticated as coming from a specific identity, that request has to be authorized.
A request must include the username of the requester, the action, and the object affected by the action (among other attributes). All the request attributes are evaluated in accordance with all the set authorization policies before the Kubernetes authorization module allows or denies that request.
Kubernetes supports multiple authorization modules. RBAC (Role-Based Access Control) is the one enabled by default in most K8s implementations though.
By using the RBAC API, we define rules that represent a set of permissions (which are purely additive; there are no deny rules, as permissions are denied by default). These rules include the actions (verbs) to permit, such as get, list, create, update, delete, etc.; the resources these actions apply to (like ConfigMaps, Pods, Secrets, etc.); and the API group containing those resources. We typically call this set of rules a “Role,” and there are two definable types:
- Role: if the permissions are defined within a namespace.
- ClusterRole: if they are cluster-scoped.
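As a sketch, a namespaced Role granting read-only access to Pods might look like the following manifest (the name pod-reader and the default namespace are illustrative, not taken from the article):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default    # Roles are namespaced
  name: pod-reader      # illustrative name
rules:
- apiGroups: [""]       # "" refers to the core API group, where Pods live
  resources: ["pods"]
  verbs: ["get", "list", "watch"]   # purely additive permissions
```

A ClusterRole has the same rules structure but no namespace in its metadata.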
A Role binding grants the permissions defined in a role to a list of subjects (users, groups, or service accounts). These permissions can be granted within a namespace with a RoleBinding object or cluster-wide with a ClusterRoleBinding.
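For example, a RoleBinding that grants a hypothetical pod-reader role to a developers group could be sketched like this (all names are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods        # illustrative name
  namespace: default     # grants the permissions only within this namespace
subjects:
- kind: Group
  name: developers       # the K8s group receiving the permissions
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader       # an existing Role in the same namespace (illustrative)
  apiGroup: rbac.authorization.k8s.io
```

A ClusterRoleBinding has the same shape, minus the namespace, and references a ClusterRole.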
AWS EKS Authentication/Authorization Overview
Amazon EKS uses one specific authentication method, an implementation of a webhook token authentication to authenticate Kube API requests. This webhook service is implemented by an open source tool called AWS IAM Authenticator, which has both client and server sides.
In short, the client sends a token (which includes the AWS IAM identity—user or role—making the API call) which is verified on the server-side by the webhook service.
Server-Side EKS Authentication
Authentication from the webhook service is based on the AWS IAM identity. It first verifies whether the IAM identity is a valid one within the AWS IAM service; then, the webhook service queries a ConfigMap called aws-auth to check if the IAM identity corresponds to a valid user in the cluster. This means that, in this ConfigMap, we have to add the IAM identities we want to grant access to the cluster, mapping these identities to K8s subjects (users or groups).
Once the identity has been authenticated, the authorization in EKS is done with RBAC in the standard Kubernetes way. We’ll go into greater depth about RBAC on Kube in the next article in this sequence.
The image below summarizes this process:
[Image: EKS server-side authentication process (Weibel, 2019)]
When we create an EKS cluster, the aws-auth ConfigMap is not automatically created. We are responsible for downloading, updating, and deploying this object in the cluster; a template is available in the AWS EKS documentation.
Initially, it looks like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn:
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
This initial setup maps a specific role ARN (Amazon Resource Name)—the one attached to the worker nodes—to a cluster user and groups. These will, by default, have predefined permissions that allow these subjects to perform specific K8s API calls, allowing the EKS worker nodes to join the cluster this way.
This will take effect once we deploy the ConfigMap, enabling the AWS IAM Authenticator webhook service to validate this IAM identity (IAM Role) against the AWS IAM service and define the cluster user which this identity is mapped to for the authorization step.
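Deploying (or later updating) the ConfigMap is done in the standard Kubernetes way; for example, assuming the manifest was saved as aws-auth-cm.yaml:

```
$ kubectl apply -f aws-auth-cm.yaml
```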
We should update this ConfigMap to add additional cluster users, following the configuration format specification and these AWS guidelines:
  mapUsers: |
    - userarn: arn:aws:iam:::user/admin
      username: admin
      groups:
        - system:masters
    - userarn: arn:aws:iam:::user/john
      username: john
      groups:
        - developers
In the example above, system:masters is a pre-defined group which has the cluster-admin role attached. The RBAC authorizer will then allow full access (admin rights) to the cluster to users belonging to that group.
The IAM identity that launched the EKS cluster is not listed in this ConfigMap but is automatically authenticated by the AWS IAM Authenticator webhook at cluster creation time and mapped to the system:masters group. This is why, after creating the EKS cluster, that identity is the only one allowed to access the cluster without any further configuration, until the ConfigMap is updated and deployed.
Client-Side EKS Authentication
As mentioned above, for the AWS IAM Authenticator webhook service to validate an identity, it requires an IAM identity to be sent from the client side in a bearer token.
The central piece of the client-side authentication process is the Kubernetes client library, which wraps HTTP requests (Kubernetes API calls) into functions that can be called from code, allowing programmatic access to Kubernetes. The official Kubernetes client library is written in Go, but the community maintains many other libraries written in different programming languages.
The client library gets the cluster information from the kubeconfig file, typically located at ~/.kube/config by default. It’s a YAML-formatted file that basically contains three main sections:
- Clusters: For every cluster listed in this section to which we need access, we specify the API server endpoint and the cluster CA certificate (used to validate the identity of the API server).
- Users: This section defines the credentials (e.g. private key and certificate, user/password, etc.) for authenticating against the API server. For EKS clusters, this section must be in a very specific format that implements a credentials plugin feature described below.
- Contexts: In this section, we specify a cluster we want to access, the user (credentials) to access it with, and, optionally, a namespace inside the cluster. We name each context so we can easily reference this group of settings from the command line and make it our default.
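Putting the three sections together, a minimal kubeconfig skeleton might look roughly like this (all names are illustrative, and the endpoint and certificate values are placeholders):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: my-eks-cluster                      # illustrative name
  cluster:
    server: https://example.eks.amazonaws.com   # API server endpoint (placeholder)
    certificate-authority-data: BASE64-CA-DATA  # cluster CA certificate (placeholder)
users:
- name: my-eks-user                         # illustrative name
  user: {}                                  # for EKS, this holds the exec section described below
contexts:
- name: my-context                          # illustrative name
  context:
    cluster: my-eks-cluster
    user: my-eks-user
    namespace: default
current-context: my-context                 # the default context for kubectl
```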
In EKS, we can easily get the kubeconfig file for interacting with a specific EKS cluster by running the following command (provided we have the proper IAM permissions):
$ aws eks update-kubeconfig --name ClusterName [flags]
Credentials Plugin Feature
As mentioned above, the user section in the kubeconfig file must have a specific format for interacting with EKS clusters:
users:
- name:
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - ""
        #- "-r"
        #- ""
      #env:
        # - name: AWS_PROFILE
        #   value: ""
The key property here is exec. It allows us to specify an external command (aws-iam-authenticator for EKS) which will generate and return an identity (in token form) to use when authenticating against the Kubernetes API Server.
This property (exec) is implemented by a feature of the Go client library called the credentials plugin. It was first introduced in Kubernetes v1.10 (in alpha) and has been in beta since v1.11.
If using another client library, it is important to make sure that the library implements this feature, because the library is responsible for reading the kubeconfig file and it must understand the exec property. The client library executes the external command (aws-iam-authenticator) and reads its output (which is printed in a specific JSON format), which includes the bearer token containing the IAM identity that has to be passed to the API server when making a request.
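As a sketch, the JSON that aws-iam-authenticator prints follows the credentials plugin's ExecCredential format, roughly like this (the token and timestamp values below are illustrative):

```json
{
  "apiVersion": "client.authentication.k8s.io/v1alpha1",
  "kind": "ExecCredential",
  "status": {
    "expirationTimestamp": "2019-01-01T12:00:00Z",
    "token": "k8s-aws-v1.EXAMPLE-BASE64-ENCODED-PAYLOAD"
  }
}
```

The client library reads the status.token value and sends it as a bearer token with the Kubernetes API request.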
The IAM identity sent to the API server is the first one found in the AWS credential chain configured on the host. The AWS credential chain is the sequence of locations checked by the AWS CLI and AWS SDKs when looking for credentials (an access key) to sign AWS requests; it is explained further in the AWS documentation. The AWS credentials are checked in the following order:
- Environment variables
- Shared credentials file
- The IAM role, if the app is running on an EC2 instance
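For example, in a local environment the identity is typically provided by the shared credentials file; a sketch with placeholder values (never commit real keys) looks like this:

```ini
# ~/.aws/credentials (placeholder values, for illustration only)
[default]
aws_access_key_id = AKIAEXAMPLEKEYID
aws_secret_access_key = exampleSecretAccessKey
```

Environment variables such as AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, if set, take precedence over this file.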
The following picture summarizes the client-side authentication process:
[Image: EKS client-side authentication process (Weibel, 2019)]
Summary
In conclusion, for interacting with an EKS cluster we need:
- An IAM identity with proper permissions to make AWS EKS API calls. This identity has to be configured via AWS access keys if we are working on a local environment (it typically will be our own AWS user) or—if we are running an App that has to interact with EKS—that identity will be a role assumed by an EC2 instance.
- The kubeconfig file, containing the information needed to interact with an EKS cluster.
- The aws-iam-authenticator binary installed—to be executed by the K8s client library to get the AWS IAM identity and pass it in a token form to the webhook authenticator service (the server side of aws-iam-authenticator).
- The kubectl binary installed if working from a local environment or the K8s client library imported as a dependency in the application if making programmatic access to Kubernetes.
- The AWS IAM identity (user or role) added to the aws-auth configMap and mapped to a cluster user/group.
- Proper Role or ClusterRole (if not using pre-existing roles like cluster-admin) and RoleBinding (or ClusterRoleBinding) objects defined and deployed for allowing the cluster user to make specific K8s API calls.
Image References:
Weibel, D. (2019). Kubernetes Client Authentication on Amazon EKS – ITNEXT. Retrieved from https://itnext.io/how-does-client-authentication-work-on-amazon-eks-c4f2b90d943b
Published at DZone with permission of Juan Ignacio Giro.