Getting Started with Amazon EKS and Configuring Karpenter for Autoscaling
Container technologies are not a new concept; they have been around since the early days of Linux, and Docker brought them into the mainstream in 2013. Kubernetes is open-source software that lets you deploy containerized applications at scale while providing portability and extensibility, so workloads can be scaled seamlessly without downtime. There is one big downside, though: taking full advantage of it requires time spent setting up and maintaining nodes for the control plane (master) and worker pools, which isn't ideal when you are trying to move fast.
Amazon EKS
Kubernetes has become easier with Amazon EKS, a managed service that takes care of your Kubernetes control plane for you. You don't need to stand up or maintain your own control plane, because EKS handles everything from provisioning and upgrading to patching it when necessary.
Benefits of Amazon EKS
- When managing your Kubernetes cluster, you no longer have to worry about maintaining the control plane. Amazon EKS runs it for you across multiple AWS Availability Zones, so you can focus on application deployment instead.
- Kubernetes security is taken one step further with Amazon EKS, as the service sets up an encrypted communication channel between worker nodes and its Kubernetes endpoint.
- Amazon EKS is a managed service for your Kubernetes environment that enables applications to be deployed and updated with great ease.
Amazon EKS workflow
If you want to set up your own Amazon EKS cluster, this guide will help you get started. Setting one up is quite simple and can be done via the AWS Management Console or AWS CLI.
- Create an Amazon EKS cluster in the AWS Management Console or AWS CLI.
- Next, create a node group with worker nodes under the Amazon EKS cluster you created in the previous step.
- After your cluster and nodes are ready, configure the kubectl tool on your local machine to communicate with your Amazon EKS cluster.
- Now you can deploy your application to the Amazon EKS cluster from your local machine; a sketch of the corresponding CLI commands follows this list.
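As a rough sketch of the same workflow with the AWS CLI, the commands below cover the four steps above. The cluster name, role ARNs, subnet and security group IDs, and region are placeholders you would replace with your own values.

```bash
# 1. Create the EKS cluster (placeholder role ARN and VPC IDs)
aws eks create-cluster \
  --name demo-cluster \
  --role-arn arn:aws:iam::111122223333:role/eks-cluster-role \
  --resources-vpc-config subnetIds=subnet-aaaa,subnet-bbbb,securityGroupIds=sg-cccc

# 2. Create a managed node group once the cluster is ACTIVE
aws eks create-nodegroup \
  --cluster-name demo-cluster \
  --nodegroup-name demo-nodes \
  --node-role arn:aws:iam::111122223333:role/eks-node-role \
  --subnets subnet-aaaa subnet-bbbb \
  --scaling-config minSize=1,maxSize=3,desiredSize=2

# 3. Point kubectl on your local machine at the new cluster
aws eks update-kubeconfig --name demo-cluster --region us-east-1

# 4. Deploy your application manifest
kubectl apply -f deployment.yaml
```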
Best practices for Amazon Elastic Kubernetes Service (EKS)
Maintain AWS EKS Role
Create an IAM role for the EKS service (the cluster service role); it is used when you create an EKS cluster.
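For illustration, here is a hedged sketch of creating that cluster role with the AWS CLI; the role name is a placeholder.

```bash
# Trust policy letting the EKS service assume the role
cat > eks-cluster-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "eks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

aws iam create-role \
  --role-name eks-cluster-role \
  --assume-role-policy-document file://eks-cluster-trust.json

# Attach the managed policy EKS needs to manage the cluster
aws iam attach-role-policy \
  --role-name eks-cluster-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
```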
Maintain Roles for Node Instances
Create an IAM role with the permissions worker nodes require; it is used when new nodes are created and join the cluster.
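Similarly, a sketch of a node instance role (the role name is a placeholder); the three managed policies attached below are the ones worker nodes typically need.

```bash
# Trust policy letting EC2 instances assume the role
cat > eks-node-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

aws iam create-role \
  --role-name eks-node-role \
  --assume-role-policy-document file://eks-node-trust.json

# Permissions for joining the cluster, pod networking, and pulling images
for policy in AmazonEKSWorkerNodePolicy AmazonEKS_CNI_Policy AmazonEC2ContainerRegistryReadOnly; do
  aws iam attach-role-policy \
    --role-name eks-node-role \
    --policy-arn arn:aws:iam::aws:policy/$policy
done
```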
AWS-Auth ConfigMap
Use the aws-auth ConfigMap to manage cluster access for IAM users and roles.
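As a minimal sketch, the aws-auth ConfigMap lives in the kube-system namespace and maps IAM identities to Kubernetes groups; the role and user ARNs below are placeholders.

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    # Allow worker nodes using this role to join the cluster (placeholder ARN)
    - rolearn: arn:aws:iam::111122223333:role/eks-node-role
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  mapUsers: |
    # Grant an IAM user cluster-admin access (placeholder ARN)
    - userarn: arn:aws:iam::111122223333:user/dev-admin
      username: dev-admin
      groups:
        - system:masters
EOF
```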
EKS Security Groups
Ensure that your AWS EKS security groups are configured to allow incoming traffic only on TCP port 443.
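One way to audit this is to list the ingress rules of the cluster security group and confirm that only TCP 443 is open; the security group ID here is a placeholder.

```bash
# List ingress rules (protocol and port range) for the cluster security group
aws ec2 describe-security-groups \
  --group-ids sg-0123456789abcdef0 \
  --query 'SecurityGroups[*].IpPermissions[*].[IpProtocol,FromPort,ToPort]' \
  --output table
```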
EKS Cluster Logging
We must ensure that control plane logging (API, audit, authenticator, controller manager, and scheduler logs) is enabled for our clusters.
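A hedged example of turning on all five control plane log types with the AWS CLI; the cluster name is a placeholder.

```bash
aws eks update-cluster-config \
  --name demo-cluster \
  --logging '{"clusterLogging":[{"types":["api","audit","authenticator","controllerManager","scheduler"],"enabled":true}]}'
```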
Avoid Publicly Accessible Cluster Endpoints
Ensure that the AWS EKS cluster endpoint is not exposed on the internet, where it would be vulnerable to security risks.
Setting up an EKS cluster inside a virtual private cloud (VPC) with a private endpoint is a secure way to prevent public access.
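As an illustrative sketch (cluster name is a placeholder), public access to the API endpoint can be disabled and private access enabled with:

```bash
aws eks update-cluster-config \
  --name demo-cluster \
  --resources-vpc-config endpointPublicAccess=false,endpointPrivateAccess=true
```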
Kubernetes Autoscaling
Autoscaling is a fantastic way to ensure that your resources are always available, no matter the demand. When autoscaling is enabled, it will automatically launch additional pods or nodes without manual intervention if it detects higher-than-expected usage. This means less time wasted scaling up and down by hand.
Kubernetes autoscaling can be split into two categories: pod level and node level.
Pod Level Autoscaling
Horizontal Pod Autoscaling allows you to scale out your application when traffic increases so that it doesn't become overwhelmed. This is useful if you run a popular site or service but don't want your infrastructure packed with extra servers that sit mostly idle. In simple terms, it dynamically increases or decreases the number of pods based on traffic.
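For example, assuming a Deployment named my-app already exists and the metrics server is installed in the cluster, a Horizontal Pod Autoscaler can be created with a one-liner:

```bash
# Keep average CPU around 70%, scaling between 2 and 10 replicas
kubectl autoscale deployment my-app --cpu-percent=70 --min=2 --max=10

# Inspect the autoscaler and its current metrics
kubectl get hpa my-app
```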
Vertical Pod Autoscaling scales your deployment up by resizing pods vertically within the cluster: it reconciles pod sizes (CPU and memory requests) based on current usage and the desired target, then adjusts them to match.
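Here is a minimal sketch of a VerticalPodAutoscaler object, assuming the Vertical Pod Autoscaler components are installed in the cluster and a Deployment named my-app exists; both names are placeholders.

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  updatePolicy:
    # "Auto" lets the VPA evict and recreate pods with updated resource requests
    updateMode: "Auto"
EOF
```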
Node Level Autoscaling
Node Level autoscaling solves the issue of matching capacity to your needs by automatically increasing or decreasing the number of nodes based on node resource utilization.
Karpenter
When we set up the autoscaler in EKS, we always get the same type of node (the instance type is fixed when the node group is created). Even when a higher-configuration node is required to run a new pod, we end up with multiple similar nodes.
Karpenter is an automated solution that creates and removes nodes in response to incoming pod and resource requests. It reads the resource requirements and scheduling constraints of pending pods and launches instances that fit them. Karpenter's goal is to provide easy access to computing power without you having to manage individual node groups and sizes.
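To give a flavor of how that looks, here is a hedged sketch of a Karpenter Provisioner using the older v1alpha5 API (newer Karpenter releases replace this with NodePool and EC2NodeClass resources, and move the inline provider block to a separate node template); the discovery tag value is a placeholder for your cluster name.

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  # Let Karpenter pick any on-demand instance type that fits the pending pods
  requirements:
    - key: karpenter.sh/capacity-type
      operator: In
      values: ["on-demand"]
  # Cap the total capacity this provisioner may launch
  limits:
    resources:
      cpu: "100"
  # Discover subnets and security groups tagged for this cluster (placeholder tag value;
  # newer versions configure this via a providerRef / node template instead)
  provider:
    subnetSelector:
      karpenter.sh/discovery: demo-cluster
    securityGroupSelector:
      karpenter.sh/discovery: demo-cluster
  # Remove nodes that have been empty for 30 seconds
  ttlSecondsAfterEmpty: 30
EOF
```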
To wrap up, I hope you now understand what Amazon EKS is and some of the best practices we need to follow to create a cluster. Keep an eye here for a hands-on blog about Karpenter, and check out the article about Kubernetes Event-Driven Autoscaling (KEDA).