Kubernetes: Cluster setup on AWS using Kubeadm

Gvsridhar
Dec 24, 2020 · 4 min read

Having my own K8s cluster helped me practice for the CKAD exam. This article shows how to quickly set up a multi-node Kubernetes cluster on AWS EC2 instances using kubeadm. Creating a Kubernetes cluster with kubeadm gives better insight into several low-level components.

The steps described in this article can be used to spin up a K8s cluster on any Linux servers or VMs as well.

AWS Infrastructure setup:

I am not covering EC2 instance setup in this article.

  1. 3 AWS Amazon Linux 2 EC2 instances
  2. Allow all communication between EC2 instances
  3. Passwordless SSH between EC2 instances
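Passwordless SSH (item 3) can be sketched as below; the key filename and the target addresses are placeholders for illustration, not values from this article:

```shell
# Generate a key pair without a passphrase (run on the first instance).
mkdir -p "$HOME/.ssh"
ssh-keygen -t rsa -b 4096 -N "" -f "$HOME/.ssh/id_rsa_k8s" -q

# Append the public key to authorized_keys on the other two instances,
# e.g. with ssh-copy-id (replace the placeholder addresses):
#   ssh-copy-id -i "$HOME/.ssh/id_rsa_k8s.pub" ec2-user@<node2-private-ip>
#   ssh-copy-id -i "$HOME/.ssh/id_rsa_k8s.pub" ec2-user@<node3-private-ip>
```

Alternatively, reuse the EC2 key pair assigned at launch by copying its private key to each instance.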

Important: Do not change the hostnames set by AWS

Kubernetes cluster set up

WARNING: This setup is only for learning and can't be used for production workloads.
STEP 1: Install Docker and Kubernetes components
Run the following commands as root on all 3 EC2 instances:

As root, run the below commands:

#Disable firewall
systemctl stop firewalld
systemctl disable firewalld

#Disable swap
swapoff -a

#Disable SELinux
sed -i 's/enforcing/disabled/g' /etc/selinux/config

#Kernel parameters for bridged traffic
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1
EOF
sysctl net.bridge.bridge-nf-call-iptables=1
sysctl net.ipv4.ip_forward=1
sysctl --system
echo "1" > /proc/sys/net/ipv4/ip_forward

#Reboot to apply the SELinux change
reboot

#Kubernetes repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF
yum update -y
yum install -y docker kubeadm kubectl kubelet --disableexcludes=kubernetes
#Start and enable docker and kubelet
systemctl enable docker && systemctl start docker
systemctl enable kubelet && systemctl start kubelet

Note: The kubelet installed alongside the kubeadm package will crash-loop until kubeadm init is run. Check the kubelet status again after kubeadm init finishes.

STEP 2: Initialize the cluster by running kubeadm init on the master node.

Save the output somewhere, as we need it later to add the worker nodes to the cluster.
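The init command itself is not reproduced in this article; a minimal sketch, assuming the Calico plugin installed in Step 4 (whose manifests default to the 192.168.0.0/16 pod CIDR):

```shell
# Run as root on the master node only.
kubeadm init --pod-network-cidr=192.168.0.0/16
```

On success, kubeadm prints a kubeadm join command containing a token and a CA certificate hash; that is the line to save for Step 5.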

STEP 3: To start using your cluster, you need to run the following as a regular user:

As ec2-user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

STEP 4: Install Pod network plugin

We are using Calico for this setup.
kubectl apply -f https://docs.projectcalico.org/v3.11/manifests/calico.yaml

At this point, the node status will change to Ready in the output of
kubectl get nodes

STEP 5: Join nodes to cluster

On each worker node, run (as root) the kubeadm join command saved from the output of Step 2.
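The join command in the kubeadm init output has roughly this shape (all values below are placeholders):

```shell
# Run as root on each worker node; substitute the real values
# printed by kubeadm init.
kubeadm join <master-private-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```

If the token has expired (they are valid for 24 hours by default), running kubeadm token create --print-join-command on the master prints a fresh join command.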

STEP 6: Check the node status
Once the worker nodes have joined, check the node status from the k8s master node.
kubectl get nodes

STEP 7: Setup Metrics server

To check node and pod metrics such as CPU and memory usage, we need the metrics server, which is not installed as part of the above setup.

wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

In the components.yaml file, edit the metrics-server Deployment section to disable the certificate check (the last line below, --kubelet-insecure-tls, is the one added to args):

    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --kubelet-insecure-tls

kubectl create -f components.yaml
Now, you can get node/pod metrics
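For example, once the metrics-server pod is running (it can take a minute or two before metrics are reported):

```shell
kubectl top nodes
kubectl top pods --all-namespaces
```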

STEP 8: Check pod status
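The article does not list a command for this step; the usual check is:

```shell
kubectl get pods --all-namespaces -o wide
```

All pods, including the Calico and metrics-server pods, should reach the Running state.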

Cluster cleanup

Refer https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/

On the master node
kubectl drain <worker-node-hostname> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>
On all nodes
rm -rf /etc/cni/net.d
rm -rf /etc/cni /etc/kubernetes /var/lib/dockershim /var/lib/etcd /var/lib/kubelet /var/run/kubernetes ~/.kube/*
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -t raw -F && iptables -X
systemctl restart docker
Clean up the control plane
kubeadm reset

Give it a try and respond here if you run into any issues. Happy to assist!
