🏷️ Configure Pod admission and scheduling (limits, node affinity, etc.)#
Resource requests and limits#
kb create deployment toto --image=nginx --dry-run=client -o yaml > toto.yaml
vi toto.yaml
diff -u toto.yaml.before toto.yaml
--- toto.yaml.before 2025-10-07 14:13:25.102734727 +0200
+++ toto.yaml 2025-10-07 14:10:17.541875699 +0200
@@ -18,5 +18,11 @@
       containers:
       - image: nginx
         name: nginx
-        resources: {}
+        resources:
+          limits:
+            cpu: "0.5"
+            memory: "256Mi"
+          requests:
+            cpu: "0.25"
+            memory: "64Mi"
 status: {}
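A quick check (not part of the original run, just one way to verify): apply the manifest and read back the container resources and the resulting QoS class.
k apply -f toto.yaml
k get po -l app=toto -o jsonpath='{.items[0].spec.containers[0].resources}'
k get po -l app=toto -o jsonpath='{.items[0].status.qosClass}'
Since the requests differ from the limits, the expected QoS class here is Burstable.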
LimitRange#
Create the namespace.
kb create ns low
Create a LimitRange in the namespace.
kb create -f low-limit-range.yaml -n low
apiVersion: v1
kind: LimitRange
metadata:
  name: low-limit-range
spec:
  limits:
  - default:
      cpu: "1"
      memory: "500Mi"
    defaultRequest:
      cpu: "0.5"
      memory: "100Mi"
    type: Container
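To see the defaults being injected (the pod name limits-test is arbitrary), run a pod without any resources in the namespace and read them back:
k run limits-test --image=nginx -n low
k get po limits-test -n low -o jsonpath='{.spec.containers[0].resources}'
k describe limitrange low-limit-range -n low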
Labels & Annotations#
Labels and Annotations
Labels and Annotations are both key-value pairs used to attach metadata to objects like pods, services, and deployments, but they serve distinct purposes.
Labels are used to identify and select resources based on specific criteria. They are essential for grouping, filtering, and operating on objects. For example, labels are used by Kubernetes components such as services to route traffic to the correct pods by matching labels defined in selectors.
Labels are indexed in the etcd database, which allows for efficient querying and searching using tools like kubectl. They are designed for identifying information and are constrained in format; the key name must be 63 characters or fewer, and the key can optionally include a prefix that is a DNS subdomain ending with a slash.
Annotations are used to store non-identifying, arbitrary metadata that is not intended for selection or filtering. They can hold structured or unstructured data, including complex information like JSON strings, and are not indexed in etcd, making them non-queryable.
Annotations are typically used by external tools, libraries, or operators to store additional context such as configuration details, timestamps, owner information, or tool-specific data. For instance, annotations can store the last update time, a git commit hash, or contact details for the responsible team.
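A small sketch showing both on one Pod (the name, label values, and annotation keys are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: metadata-demo
  labels:                    # identifying, selectable with -l key=value
    app: guisamweb
    tier: frontend
  annotations:               # arbitrary, non-identifying metadata
    example.com/git-commit: "abc1234"
    example.com/owner: "platform-team@example.com"
spec:
  containers:
  - name: nginx
    image: nginx:alpine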
Labels#
# Get resources labels
k get po -A --show-labels
k get no kube-worker-01 -o jsonpath='{.metadata.labels}' | jq
k get no -o jsonpath='{range .items[*]}"{.metadata.name}:" {.metadata.labels}{"\n"}{end}' | jq
# Filter by labels
k get po,deploy,svc -A -l app=guisamweb
k delete po -l app=guisamweb
# Add a label
k label nodes kube-worker-01 system=guisam
k get no kube-worker-01 -o jsonpath='{.metadata.labels.system}'
guisam
# Remove a label
k label nodes kube-worker-01 system-
nodeSelector#
nodeSelector
nodeSelector is the simplest recommended form of node selection constraint in Kubernetes, used to constrain pods to run only on nodes with specific labels.
Create a deployment with nodeSelector.
k run theone --image=busybox --dry-run=client \
-o yaml --command -- sleep infinity > theone.yaml
diff -u theone.yaml{.before,}
--- theone.yaml.before 2025-10-20 13:13:15.523430058 +0200
+++ theone.yaml 2025-10-20 13:13:58.149385136 +0200
@@ -14,4 +14,6 @@
     resources: {}
   dnsPolicy: ClusterFirst
   restartPolicy: Always
+  nodeSelector:
+    run: theone
 status: {}
Apply and check pod events.
k apply -f theone.yaml
k describe po -l run=theone | grep -A 999 Events:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 41s default-scheduler 0/2 nodes are available: 2 node(s) didn't match Pod's node affinity/selector. no new claims to deallocate, preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
Create label and check pod creation.
k label nodes kube-worker-01 run=theone
k describe po -l run=theone | grep -A 999 Events:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 6m49s default-scheduler 0/2 nodes are available: 2 node(s) didn't match Pod's node affinity/selector. no new claims to deallocate, preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
Normal Scheduled 5m default-scheduler Successfully assigned default/theone to kube-worker-01
Normal Pulling 4m59s kubelet Pulling image "busybox"
Normal Pulled 4m56s kubelet Successfully pulled image "busybox" in 2.686s (2.686s including waiting). Image size: 2223686 bytes.
Normal Created 4m56s kubelet Created container: theone
Normal Started 4m56s kubelet Started container theone
k get po -l run=theone
NAME READY STATUS RESTARTS AGE
theone 1/1 Running 0 7m5s
Clean up.
k delete -f theone.yaml
k get no kube-worker-01 -o jsonpath='{.metadata.labels}' | jq
k get no -l run=theone --show-labels
k label nodes kube-worker-01 run-
Taints & Tolerations#
Taints and Tolerations
Taints and Tolerations are a mechanism used to control pod scheduling by ensuring that pods are not scheduled onto inappropriate nodes.
Taints#
Taints
Taints are applied to nodes to repel a set of pods, making the node less desirable for certain workloads. This allows administrators to enforce constraints such as isolating workloads, dedicating nodes for specific purposes, or reserving nodes with special hardware for specific pods.
A taint has a key, an optional value, and an effect that determines how the taint affects scheduling (key=value:effect).
The primary effects are:
NoSchedule, which prevents non-tolerating pods from being scheduled on the node;
PreferNoSchedule, which instructs the scheduler to avoid the node if possible but allows it if necessary;
NoExecute, which evicts pods that do not tolerate the taint from the node and prevents new non-tolerating pods from joining.
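For illustration (the key/value dedicated=gpu is made up), each effect is applied and removed with the same syntax:
# Hard rule: only pods tolerating the taint may be scheduled here
k taint no kube-worker-01 dedicated=gpu:NoSchedule
# Soft rule: the scheduler avoids the node when another node fits
k taint no kube-worker-01 dedicated=gpu:PreferNoSchedule
# Evicts running pods that do not tolerate the taint
k taint no kube-worker-01 dedicated=gpu:NoExecute
# Remove every taint whose key is "dedicated"
k taint no kube-worker-01 dedicated-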
Create a deployment.
k create deploy taint-dep --dry-run=client \
--image=nginx:alpine --port=80 -o yaml --replicas=6 > taint-dep.yaml
Create taint and check.
k taint no kube-worker-01 guisam=great:PreferNoSchedule
k describe no | grep 'Name:\|Taints:'
k apply -f taint-dep.yaml
# Get all pods with label app=taint-dep and their nodeName
k get po -l app=taint-dep -o wide | awk 'NR>1{print $1,$7}'
k get po -l app=taint-dep -o jsonpath='{range .items[*]}{.metadata.name} {.spec.nodeName}{"\n"}{end}'
k taint no kube-worker-01 guisam-
k delete deploy taint-dep
Tolerations#
Tolerations
Tolerations are applied to pods and allow them to be scheduled on nodes with matching taints. They act as exceptions that permit a pod to run on a tainted node. A toleration matches a taint if the key and effect are the same, and the operator and value (if specified) are compatible. For example, a toleration with operator "Exists" matches any value for the key, while "Equal" requires the value to be identical. Tolerations are defined in the pod specification and are evaluated by the scheduler alongside other factors like node affinity and resource availability.
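A sketch of what that looks like in a pod spec (guisam=great reuses the taint from above; the dedicated key is made up):
tolerations:
- key: "guisam"              # tolerates guisam=great:PreferNoSchedule
  operator: "Equal"          # the value must match exactly
  value: "great"
  effect: "PreferNoSchedule"
- key: "dedicated"           # tolerates any value of this key
  operator: "Exists"
  effect: "NoSchedule"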
nodeSelector#
kb create deploy guisamweb \
--dry-run=client --image=nginx:alpine \
--replicas=2 --port=80 \
-o yaml > guisamweb.yaml
diff -u guisamweb.yaml{.before,}
--- guisamweb.yaml.before 2025-10-11 15:52:32.421221200 +0200
+++ guisamweb.yaml 2025-10-13 15:47:20.877447319 +0200
@@ -21,4 +21,6 @@
         ports:
         - containerPort: 80
         resources: {}
+      nodeSelector:
+        system: guisam
 status: {}
kb apply -f guisamweb.yaml
kb get po
NAME READY STATUS RESTARTS AGE
pod/guisamweb-59bf8958cd-j2kx9 0/1 Pending 0 31s
pod/guisamweb-59bf8958cd-zkgvw 0/1 Pending 0 31s
pod/nfs-subdir-external-provisioner-574bbb7bbf-hcp4m 1/1 Running 0 45m
kb describe po guisamweb-59bf8958cd-j2kx9 | grep -A 9999 "^Events:"
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 72s default-scheduler 0/2 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. no new claims to deallocate, preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
kb label nodes kube-worker-01 system=guisam
kb get po -l app=guisamweb
NAME READY STATUS RESTARTS AGE
guisamweb-59bf8958cd-j2kx9 1/1 Running 0 5m48s
guisamweb-59bf8958cd-zkgvw 1/1 Running 0 5m48s
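As a side note (not done in this run): to let these pods also land on the tainted control-plane node, a matching toleration could be added to the deployment's pod template.
tolerations:
- key: "node-role.kubernetes.io/control-plane"
  operator: "Exists"
  effect: "NoSchedule"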
Affinity#
Affinity
Affinity is a scheduling mechanism that allows users to control where pods are placed within a cluster by defining rules based on node and pod labels.
There are two primary types of affinity: node affinity and pod affinity/anti-affinity.
Node affinity rules come in two forms: requiredDuringSchedulingIgnoredDuringExecution, a hard requirement that must be satisfied for the pod to be scheduled, and preferredDuringSchedulingIgnoredDuringExecution, a soft preference the scheduler tries to honor.
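A sketch of both rule types in a pod spec (system=guisam reuses the node label from earlier; disktype=ssd is made up):
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:    # hard rule
        nodeSelectorTerms:
        - matchExpressions:
          - key: system
            operator: In
            values: ["guisam"]
      preferredDuringSchedulingIgnoredDuringExecution:   # soft rule
      - weight: 1
        preference:
          matchExpressions:
          - key: disktype
            operator: In
            values: ["ssd"]
  containers:
  - name: nginx
    image: nginx:alpine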