🏷️ Create and manage Kubernetes clusters using kubeadm

Create a k8s cluster

  1. Initialize the first control plane.

ᐅ sudo kubeadm version -o short
v1.33.1

kubeadm-config.yaml

apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
networking:
  podSubnet: "10.244.0.0/24"
kubernetesVersion: "v1.33.1"
controlPlaneEndpoint: "kube-guisam:6443"

Important

Do not forget to create a DNS record (or an /etc/hosts entry) pointing to the control-plane IP address on all cluster nodes.

ᐅ awk 'END{print}' /etc/hosts
192.168.94.73 kube-guisam
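
If the record is missing on a node, you can append it there, reusing the control-plane IP shown above:

ᐅ echo "192.168.94.73 kube-guisam" | sudo tee -a /etc/hosts
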
ᐅ sudo kubeadm init \
  --config kubeadm-config.yaml \
  --upload-certs \
  --node-name=cp-01 \
  | tee kubeadm_init.out
  2. Get the kubeconfig and check the cluster.

ᐅ grep -A3 "mkdir" kubeadm_init.out
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
ᐅ kubectl cluster-info
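
A quick sanity check at this point is to confirm the node and the kube-system pods show up (the exact output will vary with your setup):

ᐅ kubectl get nodes
ᐅ kubectl get pods -n kube-system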

Note

ᐅ echo 'alias k="kubectl"' >> .zshrc
ᐅ echo 'source <(kubectl completion zsh)' >> .zshrc
ᐅ source !$
  3. Install the Cilium CNI.

Render the chart to check it, then install with Helm.

ᐅ helm repo add cilium https://helm.cilium.io/
ᐅ helm template cilium cilium/cilium \
--version 1.18.4 --set hubble.relay.enabled=true \
-n kube-system
ᐅ helm upgrade --install cilium cilium/cilium \
--version 1.18.4 --set hubble.relay.enabled=true \
-n kube-system

Or use the cilium CLI.

ᐅ cilium install --version 1.18.4
ᐅ cilium status
ᐅ cilium hubble enable
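
Either way, you can wait for the datapath to come up and check the agent pods (the k8s-app=cilium label assumes the default chart values):

ᐅ cilium status --wait
ᐅ kubectl -n kube-system get pods -l k8s-app=cilium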
  4. Check the kubeadm configuration.

ᐅ sudo kubeadm config print init-defaults
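
The configuration the cluster is actually running with is also stored in the kubeadm-config ConfigMap, which is handy to compare against the defaults:

ᐅ kubectl -n kube-system get configmap kubeadm-config -o yaml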

Add a node

On the new worker node, make sure the control-plane record is present.

ᐅ awk 'END{print}' /etc/hosts
192.168.94.73 kube-guisam

Create a token on cp-01 and print the join command.

ᐅ sudo kubeadm token create --print-join-command
ᐅ sudo kubeadm token list
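
If you only kept the token, the CA certificate hash can be recomputed on the control plane with the usual openssl pipeline:

ᐅ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
  openssl rsa -pubin -outform der 2>/dev/null | \
  openssl dgst -sha256 -hex | sed 's/^.* //'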

Run the printed join command on the worker node.

ᐅ sudo kubeadm join kube-guisam:6443 --token qx1q5g.602u9umwlk0nzj6x \
        --discovery-token-ca-cert-hash sha256:dc2e085b243729484598ed8073627e468d34453207053d752adcc84b83595c3d \
      --node-name=kube-worker-01
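
Back on cp-01, the new node should appear and become Ready once Cilium is running on it:

ᐅ k get no -w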

Remove a node

On a control-plane node.

ᐅ k get no
ᐅ k drain kube-worker-01 --ignore-daemonsets --delete-emptydir-data

On the removed node.

ᐅ sudo kubeadm reset
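
kubeadm reset does not clean up the CNI configuration, so remove it by hand if the node will be re-provisioned (the reset output reminds you of this):

ᐅ sudo rm -rf /etc/cni/net.d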

On a control-plane node.

ᐅ k delete node kube-worker-01

Create a worker label

ᐅ k label nodes kube-worker-01 node-role.kubernetes.io/worker=
ᐅ k get no
NAME             STATUS   ROLES           AGE   VERSION
cp-01       Ready    control-plane   25h   v1.34.1
kube-worker-01   Ready    worker          25h   v1.34.1
ᐅ k get node -l node-role.kubernetes.io/worker -o name
node/kube-worker-01
ᐅ k get node -l node-role.kubernetes.io/control-plane -o name
node/cp-01
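
The label can be removed again with the trailing-dash syntax:

ᐅ k label node kube-worker-01 node-role.kubernetes.io/worker-
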
ᐅ k get all -l component -n kube-system
NAME                                     READY   STATUS    RESTARTS        AGE
pod/etcd-cp-01                      1/1     Running   7 (6h50m ago)   7d3h
pod/kube-apiserver-cp-01            1/1     Running   7 (6h50m ago)   7d3h
pod/kube-controller-manager-cp-01   1/1     Running   6 (6h50m ago)   7d3h
pod/kube-scheduler-cp-01            1/1     Running   7 (6h50m ago)   7d3h
ᐅ tree -L 1 --noreport /etc/kubernetes/manifests
/etc/kubernetes/manifests
├── etcd.yaml
├── kube-apiserver.yaml
├── kube-controller-manager.yaml
└── kube-scheduler.yaml
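
These are static pod manifests: the kubelet watches this directory, so moving a file out stops the corresponding pod and putting it back recreates it. A quick (briefly disruptive) illustration with the scheduler:

ᐅ sudo mv /etc/kubernetes/manifests/kube-scheduler.yaml /tmp/
ᐅ k -n kube-system get pods        # kube-scheduler-cp-01 goes away after a moment
ᐅ sudo mv /tmp/kube-scheduler.yaml /etc/kubernetes/manifests/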

CiliumLoadBalancerIPPool

ᐅ cat <<EOF > ippools.yaml
apiVersion: "cilium.io/v2"
kind: CiliumLoadBalancerIPPool
metadata:
  name: "blue-pool"
spec:
  blocks:
  - start: "172.16.20.100"
    stop: "172.16.20.200"
EOF

ᐅ k apply -f ippools.yaml
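
To check that the pool is accepted and actually hands out addresses, list the pools and create a throwaway LoadBalancer Service (the test-lb name and nginx image are just placeholders):

ᐅ k get ciliumloadbalancerippools
ᐅ k create deployment test-lb --image=nginx
ᐅ k expose deployment test-lb --port=80 --type=LoadBalancer
ᐅ k get svc test-lb   # EXTERNAL-IP should be assigned from 172.16.20.100-200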