Create EKS Cluster & Node Groups
Step-00: Introduction
- Understand the EKS core objects
  - Control Plane
  - Worker Nodes & Node Groups
  - Fargate Profiles
  - VPC
- Create EKS Cluster
- Associate EKS Cluster to IAM OIDC Provider
- Create EKS Node Groups
- Verify Cluster, Node Groups, EC2 Instances, IAM Policies, and IAM Roles
Step-01: Create EKS Cluster using eksctl
- It will take 15 to 20 minutes to create the Cluster Control Plane
# Create Cluster
eksctl create cluster --name=eksdemo1 \
--region=us-east-1 \
--zones=us-east-1a,us-east-1b \
--without-nodegroup
# Get List of clusters
eksctl get cluster
Step-02: Create & Associate IAM OIDC Provider for our EKS Cluster
- To enable and use AWS IAM roles for Kubernetes service accounts on our EKS cluster, we must create and associate an OIDC identity provider.
- To do so with `eksctl`, we can use the command below. Use the latest `eksctl` version (as of this writing, `0.21.0`).
# Template
eksctl utils associate-iam-oidc-provider \
--region region-code \
--cluster <cluster-name> \
--approve
# Replace with region & cluster name
eksctl utils associate-iam-oidc-provider \
--region us-east-1 \
--cluster eksdemo1 \
--approve
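To confirm the association worked, one optional check is to list the account's IAM OIDC providers. A minimal sketch, assuming the AWS CLI is configured for the same account:
# List IAM OIDC providers; the cluster's issuer should appear here
aws iam list-open-id-connect-providers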
Step-03: Create EC2 Keypair
- Create a new EC2 key pair with the name `kube-demo`
- We will use this key pair when creating the EKS node group
- It will help us log in to the EKS worker nodes from a terminal (a CLI sketch follows this list)
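If you prefer the CLI to the EC2 console, a minimal sketch, assuming the AWS CLI is configured for us-east-1:
# Create the key pair and save the private key locally
aws ec2 create-key-pair \
--key-name kube-demo \
--region us-east-1 \
--query 'KeyMaterial' \
--output text > kube-demo.pem
# Restrict permissions so ssh accepts the key file
chmod 400 kube-demo.pem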
Step-04: Create Node Group with additional Add-Ons in Public Subnets
- These add-ons will automatically create the respective IAM policies and attach them to our node group role.
# Create Public Node Group
eksctl create nodegroup --cluster=eksdemo1 \
--region=us-east-1 \
--name=eksdemo1-ng-public1 \
--node-type=t3.medium \
--nodes=2 \
--nodes-min=2 \
--nodes-max=4 \
--node-volume-size=20 \
--ssh-access \
--ssh-public-key=kube-demo \
--managed \
--asg-access \
--external-dns-access \
--full-ecr-access \
--appmesh-access \
--alb-ingress-access
Step-05: Verify Cluster & Nodes
Verify NodeGroup subnets to confirm EC2 Instances are in Public Subnets
- Verify the node group subnet to ensure it was created in public subnets (a CLI alternative follows this list)
- Go to Services -> EKS -> eksdemo1 -> eksdemo1-ng-public1
- Click on the associated subnet in the Details tab
- Click on the Route Table tab
- We should see the internet route via the Internet Gateway (0.0.0.0/0 -> igw-xxxxxxxx)
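An equivalent check via the AWS CLI; the subnet ID is a placeholder you must copy from the node group details:
# Show the route table associated with the node group subnet
aws ec2 describe-route-tables \
--filters Name=association.subnet-id,Values=<subnet-id> \
--region us-east-1
# Look for DestinationCidrBlock 0.0.0.0/0 with a GatewayId starting with igw-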
Verify Cluster, NodeGroup in EKS Management Console
- Go to Services -> Elastic Kubernetes Service -> eksdemo1
List Worker Nodes
# List EKS clusters
eksctl get cluster
# List NodeGroups in a cluster
eksctl get nodegroup --cluster=<clusterName>
# List Nodes in current kubernetes cluster
kubectl get nodes -o wide
# Our kubectl context should be automatically changed to new cluster
kubectl config view --minify
Verify Worker Node IAM Role and list of Policies
- Go to Services -> EC2 -> Worker Nodes
- Click on the IAM role associated with the EC2 worker nodes (a CLI sketch follows)
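A hedged CLI equivalent; the role name is a placeholder, since eksctl generates it, so copy the actual name from the EC2 instance details:
# List managed policies attached to the worker node instance role
aws iam list-attached-role-policies \
--role-name <NodeInstanceRoleName>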
Verify Security Group Associated to Worker Nodes
- Go to Services -> EC2 -> Worker Nodes
- Click on the security group associated with the EC2 instance that contains `remote` in its name (a CLI sketch follows)
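A CLI sketch for locating that security group, assuming eksctl's default naming (the SSH access group name contains "remote"):
# Find the worker node security group with "remote" in the name
aws ec2 describe-security-groups \
--filters Name=group-name,Values='*remote*' \
--region us-east-1 \
--query 'SecurityGroups[].[GroupId,GroupName]' \
--output table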
Verify CloudFormation Stacks
- Verify Control Plane Stack & Events
- Verify NodeGroup Stack & Events (a CLI sketch follows)
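A hedged CLI sketch; the stack names below follow eksctl's usual naming convention and may differ in your account:
# Inspect the control plane and node group stacks and their events
aws cloudformation describe-stack-events --stack-name eksctl-eksdemo1-cluster
aws cloudformation describe-stack-events --stack-name eksctl-eksdemo1-nodegroup-eksdemo1-ng-public1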
Login to Worker Node using Keypair kube-demo
- Log in to the worker node
# For macOS, Linux, or Windows 10
ssh -i kube-demo.pem ec2-user@<Public-IP-of-Worker-Node>
# For Windows 7
Use PuTTY
Step-06: Update Worker Nodes Security Group to allow all traffic
- We need to allow `All Traffic` on the worker node security group, as shown below
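This can be done in the EC2 console (Security Groups -> Edit inbound rules) or via the AWS CLI. A minimal sketch; the group ID is a placeholder for the `remote` group found earlier, and opening all traffic to 0.0.0.0/0 is acceptable only for a demo cluster:
# Allow all inbound traffic on the worker node security group (demo only)
aws ec2 authorize-security-group-ingress \
--group-id <sg-id-of-remote-group> \
--protocol all \
--cidr 0.0.0.0/0 \
--region us-east-1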
Additional References
- https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html
- https://docs.aws.amazon.com/eks/latest/userguide/create-service-account-iam-policy-and-role.html
Delete EKS Cluster & Node Groups
Step-01: Delete Node Group
- We can delete a node group separately using `eksctl delete nodegroup`, as shown below
# List EKS Clusters
eksctl get cluster
# Capture Node Group name
eksctl get nodegroup --cluster=<clusterName>
eksctl get nodegroup --cluster=eksdemo1
# Delete Node Group
eksctl delete nodegroup --cluster=<clusterName> --name=<nodegroupName>
eksctl delete nodegroup --cluster=eksdemo1 --name=eksdemo1-ng-public1
Step-02: Delete Cluster
- We can delete the cluster using `eksctl delete cluster`
# Delete Cluster
eksctl delete cluster <clusterName>
eksctl delete cluster eksdemo1
Important Notes
Note-1: Rollback any Security Group Changes
- When we create an EKS cluster using `eksctl`, it creates the worker node security group with only port 22 access.
- As we progress through the course, we will create many NodePort Services to access and test our applications via the browser.
- During this process, we need to add an additional rule to this automatically created security group, allowing access to the applications we have deployed.
- The point to understand here is that when we delete the cluster using `eksctl`, its core components should be in their original state, which means rolling back the security group change before deleting the cluster (a rollback sketch follows this list).
- That way, the cluster gets deleted without any issues. Otherwise the delete can fail, and we would have to check the CloudFormation events and manually delete a few resources; in short, we would have to go to many places for deletions.
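A hedged rollback sketch mirroring the rule added in Step-06 (same placeholder group ID):
# Remove the all-traffic rule before deleting the cluster
aws ec2 revoke-security-group-ingress \
--group-id <sg-id-of-remote-group> \
--protocol all \
--cidr 0.0.0.0/0 \
--region us-east-1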
Note-2: Rollback any EC2 Worker Node Instance Role - Policy changes
- In the EBS Storage section with the EBS CSI Driver, we will add a custom policy to the worker node IAM role.
- Before deleting the cluster, roll back that change first (a detach sketch follows).
- This way we don't face any issues during cluster deletion.
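A hedged sketch; both names are placeholders for values created later in the course:
# Detach the custom policy from the worker node role before deleting the cluster
aws iam detach-role-policy \
--role-name <NodeInstanceRoleName> \
--policy-arn <CustomPolicyArn>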