In our previous article, we walked through step-by-step instructions on how to create an Amazon EKS cluster using the AWS CLI. In this article, we will look at how to add worker nodes to an Amazon EKS cluster using the CLI. Let us get started.
Amazon EKS Worker Node – Overview
What is an EKS Worker Node?
Amazon Elastic Kubernetes Service (Amazon EKS) worker nodes are the underlying compute resources that run your containerized applications within a Kubernetes cluster managed by EKS. Worker nodes are EC2 instances (virtual machines) that are part of a node group associated with your EKS cluster. These worker nodes are responsible for running your containerized workloads, such as Docker containers, and ensuring that they are highly available and scalable.
EC2 Instances:
Worker nodes are EC2 instances that run in your Amazon Web Services (AWS) account. You can choose the instance type, size, and configuration for these nodes based on the requirements of your applications.
Kubernetes Nodes:
In the context of Kubernetes, worker nodes are also referred to as “nodes” or “minions.” These nodes are part of the Kubernetes cluster and are responsible for executing the tasks assigned by the Kubernetes control plane, such as running containers and managing networking.
What is a node group in AWS EKS?
Worker nodes are organized into node groups, which are logical groups of nodes with similar configurations. You can create and manage multiple node groups within an EKS cluster, each with its own instance type, scaling settings, and other parameters. This allows you to have different types of worker nodes for different workloads within the same cluster.
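For example, a single eksctl manifest can define several node groups, each with its own instance type and size, so that different workloads land on appropriately sized nodes. The snippet below is an illustrative sketch (the group names and instance types are hypothetical, not from this tutorial):

```yaml
# Illustrative fragment of an eksctl ClusterConfig:
# two node groups with different instance types in one cluster.
nodeGroups:
  - name: general-workloads     # hypothetical name
    instanceType: t3.medium
    desiredCapacity: 2
  - name: memory-intensive      # hypothetical name
    instanceType: r5.large
    desiredCapacity: 1
```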
EKS Auto Scaling:
EKS worker nodes can be configured to auto-scale based on the workload demands. You can set minimum and maximum node counts for each node group, and EKS will automatically adjust the number of nodes to meet your application’s requirements.
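In an eksctl manifest, these bounds are expressed with minSize and maxSize on the node group. If you plan to run the Kubernetes Cluster Autoscaler, eksctl can also attach the required IAM addon policy. A sketch with illustrative values:

```yaml
# Illustrative fragment: scaling bounds for a node group.
nodeGroups:
  - name: scalable-group        # hypothetical name
    instanceType: t3.medium
    minSize: 1                  # never scale below 1 node
    maxSize: 5                  # never scale above 5 nodes
    desiredCapacity: 2
    iam:
      withAddonPolicies:
        autoScaler: true        # IAM permissions used by the Cluster Autoscaler
```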
Prerequisites to Add a Worker Node
- Existing EKS cluster
- kubectl CLI
- eksctl command line utility
- AWS CLI
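You can quickly confirm that the required tools are installed and on your PATH before proceeding (the exact version output will vary on your machine):

```shell
# Verify each prerequisite is installed
aws --version
kubectl version --client
eksctl version
```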
Steps to Create EKS Cluster Worker Node Using AWS CLI
1. Here is the existing EKS cluster.
uxpro-$ ./kubectl cluster-info
Kubernetes control plane is running at https://000A54E6ED9884B975DCA6CBA8AB042D.sk1.us-east-2.eks.amazonaws.com
CoreDNS is running at https://000A54E6ED9884B975DCA6CBA8AB042D.sk1.us-east-2.eks.amazonaws.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
uxpro-$
2. If you do not have the eksctl command line utility, install it using Homebrew on macOS. For Linux and Windows, refer to the official eksctl documentation.
uxpro-$ brew install weaveworks/tap/eksctl
Running `brew update --auto-update`...
==> Homebrew’s analytics have entirely moved to our InfluxDB instance in the EU.
We gather less data than before and have destroyed all Google Analytics data:
https://docs.brew.sh/Analytics
Please reconsider re-enabling analytics to help our volunteer maintainers with:
brew analytics on
Installing from the API is now the default behaviour!
You can save space and time by running:
brew untap homebrew/core
brew untap homebrew/cask
==> Downloading https://formulae.brew.sh/api/formula.jws.json
######################################################################100.0%
==> Downloading https://formulae.brew.sh/api/cask.jws.json
######################################################################100.0%
==> Auto-updated Homebrew!
Updated 2 taps (ngrok/ngrok and fluxcd/tap).
==> New Formulae
fluxcd/tap/flux@0.41
3. Create a YAML manifest like the one below to define the node group.
uxpro-$ cat nodegroup.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: uabdreks1
  region: us-east-2
vpc:
  id: "vpc-axxxxx"
  securityGroup: "sg-06ad35a69c1b8f717" # this is the ControlPlaneSecurityGroup
  subnets:
    public:
      public1:
        id: "subnet-703b403c"
      public2:
        id: "subnet-aff822c4"
      public3:
        id: "subnet-09ccdc73"
nodeGroups:
  - name: BDRnodegroup
    instanceType: t2.micro
    minSize: 1
    maxSize: 3
    desiredCapacity: 2
uxpro-$
4. Use the eksctl command to create the required node group programmatically from the manifest.
uxpro-$ eksctl create nodegroup -f nodegroup.yaml
2023-09-18 22:46:44 [ℹ] will use version 1.27 for new nodegroup(s) based on control plane version
2023-09-18 22:46:45 [!] no eksctl-managed CloudFormation stacks found for “uabdreks1”, will attempt to create nodegroup(s) on non eksctl-managed cluster
2023-09-18 22:46:51 [ℹ] nodegroup “BDRnodegroup” will use “ami-09d80224b8fe20ffb” [AmazonLinux2/1.27]
2023-09-18 22:46:53 [ℹ] 1 nodegroup (BDRnodegroup) was included (based on the include/exclude rules)
2023-09-18 22:46:53 [ℹ] will create a CloudFormation stack for each of 1 nodegroups in cluster “uabdreks1”
2023-09-18 22:46:53 [ℹ] 1 task: { 1 task: { 1 task: { create nodegroup “BDRnodegroup” } } }
2023-09-18 22:46:53 [ℹ] building nodegroup stack “eksctl-uabdreks1-nodegroup-BDRnodegroup”
2023-09-18 22:46:55 [ℹ] deploying stack “eksctl-uabdreks1-nodegroup-BDRnodegroup”
2023-09-18 22:46:55 [ℹ] waiting for CloudFormation stack “eksctl-uabdreks1-nodegroup-BDRnodegroup”
2023-09-18 22:47:26 [ℹ] waiting for CloudFormation stack “eksctl-uabdreks1-nodegroup-BDRnodegroup”
2023-09-18 22:48:17 [ℹ] waiting for CloudFormation stack “eksctl-uabdreks1-nodegroup-BDRnodegroup”
2023-09-18 22:49:56 [ℹ] waiting for CloudFormation stack “eksctl-uabdreks1-nodegroup-BDRnodegroup”
2023-09-18 22:50:43 [ℹ] waiting for CloudFormation stack “eksctl-uabdreks1-nodegroup-BDRnodegroup”
2023-09-18 22:50:43 [ℹ] no tasks
2023-09-18 22:50:44 [ℹ] adding identity “arn:aws:iam::476227053747:role/eksctl-uabdreks1-nodegroup-BDRnod-NodeInstanceRole-1HPBSYDDPB7R3” to auth ConfigMap
2023-09-18 22:50:45 [ℹ] nodegroup “BDRnodegroup” has 1 node(s)
2023-09-18 22:50:45 [ℹ] node “ip-172-31-10-3.us-east-2.compute.internal” is not ready
2023-09-18 22:50:45 [ℹ] waiting for at least 1 node(s) to become ready in “BDRnodegroup”
2023-09-18 22:51:17 [ℹ] nodegroup “BDRnodegroup” has 2 node(s)
2023-09-18 22:51:17 [ℹ] node “ip-172-31-10-3.us-east-2.compute.internal” is ready
2023-09-18 22:51:17 [ℹ] node “ip-172-31-29-43.us-east-2.compute.internal” is not ready
2023-09-18 22:51:17 [✔] created 1 nodegroup(s) in cluster “uabdreks1”
2023-09-18 22:51:17 [✔] created 0 managed nodegroup(s) in cluster “uabdreks1”
2023-09-18 22:51:19 [ℹ] checking security group configuration for all nodegroups
2023-09-18 22:51:19 [ℹ] all nodegroups have up-to-date cloudformation templates
uxpro-$
5. Check the nodegroup status using the following command.
uxpro-$ eksctl get nodegroup --cluster uabdreks1
CLUSTER    NODEGROUP     STATUS           CREATED               MIN SIZE  MAX SIZE  DESIRED CAPACITY  INSTANCE TYPE  IMAGE ID               ASG NAME                                                         TYPE
uabdreks1  BDRnodegroup  CREATE_COMPLETE  2023-09-18T17:16:55Z  1         3         2                 t2.micro       ami-09d80224b8fe20ffb  eksctl-uabdreks1-nodegroup-BDRnodegroup-NodeGroup-10GH16XFN9OTE  unmanaged
uxpro-$
6. Use kubectl to list the nodes.
uxpro-$ ./kubectl get nodes
NAME                                         STATUS   ROLES    AGE     VERSION
ip-172-31-10-3.us-east-2.compute.internal    Ready    <none>   4m47s   v1.27.4-eks-8ccc7ba
ip-172-31-29-43.us-east-2.compute.internal   Ready    <none>   4m43s   v1.27.4-eks-8ccc7ba
uxpro-$
7. You can also filter the nodes by node group name using the label selector option, like below.
uxpro-$ ./kubectl get nodes -l alpha.eksctl.io/nodegroup-name=BDRnodegroup
NAME                                         STATUS   ROLES    AGE     VERSION
ip-172-31-10-3.us-east-2.compute.internal    Ready    <none>   9m10s   v1.27.4-eks-8ccc7ba
ip-172-31-29-43.us-east-2.compute.internal   Ready    <none>   9m6s    v1.27.4-eks-8ccc7ba
uxpro-$
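If you later need to resize the node group, eksctl can scale it in place, for example:

```shell
# Scale the node group to 3 nodes (must stay within the min/max bounds)
eksctl scale nodegroup --cluster uabdreks1 --name BDRnodegroup --nodes 3
```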
Create a managed node group for your EKS cluster:
To avoid managing the EC2 instances yourself, AWS also offers managed node groups. In this mode, EKS takes care of the worker nodes, including node provisioning, scaling, and updates; you don't need to manage the underlying EC2 instances directly.
To use this feature, you just need to change the key in the manifest from "nodeGroups:" to "managedNodeGroups:".
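A minimal managed node group manifest would look like this, reusing the cluster metadata from the example above (the group name is illustrative):

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: uabdreks1
  region: us-east-2
managedNodeGroups:          # EKS provisions and manages these nodes for you
  - name: managed-group     # hypothetical name
    instanceType: t2.micro
    minSize: 1
    maxSize: 3
    desiredCapacity: 2
```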
Conclusion
In this article, we added EKS worker nodes using the eksctl command line tool. Worker nodes are tightly integrated with the EKS control plane, which allows them to communicate with and receive instructions from the Kubernetes control plane components. This integration ensures that your containerized applications are properly orchestrated and managed. We are now ready to deploy containerized applications on the EKS cluster.
Read more on AWS:
AWS for Beginners – Top 7 Commands for Managing Amazon S3 Buckets and Objects with AWS CLI – Part 75
AWS for Beginners: How to Create & Manage EC2 Instances using AWS CLI – Part 67
AWS for Beginners: How to Set Up AWS CLI and SDK on CentOS – Part 64