🏷️ Understand the primitives used to create robust, self-healing, application deployments

ReplicaSet

A ReplicaSet is a low-level controller that ensures a specified number of pod replicas are running at all times. If a pod fails or is deleted, the ReplicaSet automatically creates a replacement to maintain the desired state.

  • It provides basic scaling and self-healing mechanisms.

  • It uses selectors, including set-based selectors, to identify and manage pods (see the matchExpressions snippet at the end of this section).

  • It does not support rolling updates or rollbacks; changes to the pod template only affect pods created after the change.

  • It is usually managed indirectly through Deployments rather than created directly by users.

rs.yaml

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: guisam-one
spec:
  replicas: 2
  selector:
    matchLabels:
      system: ReplicaOne
  template:
    metadata:
      labels:
        system: ReplicaOne
    spec:
      containers:
      - name: guisam-rs
        image: nginx:alpine

Create the ReplicaSet and inspect it.

ᐅ k apply -f rs.yaml
ᐅ k get rs,po
ᐅ k describe rs guisam-one

Change the system label on one pod. The ReplicaSet no longer matches it, so it stops managing the pod and creates a replacement to restore the desired count, leaving three pods in total.

ᐅ k edit pod/guisam-one-m7jb7
ᐅ k get po -o jsonpath='{.items[*].metadata.labels.system}'
ReplicaOne IsolatedOne ReplicaOne

Deleting the ReplicaSet removes only the pods it still owns, so the relabeled pod has to be deleted separately.

ᐅ k delete -f rs.yaml
ᐅ k delete pod/guisam-one-m7jb7
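
The manifest above uses an equality-based matchLabels selector; ReplicaSets also accept set-based selectors via matchExpressions. A fragment of the spec; the second label value is an arbitrary example:

  selector:
    matchExpressions:
    - key: system
      operator: In                          # also: NotIn, Exists, DoesNotExist
      values: ["ReplicaOne", "ReplicaTwo"]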

DaemonSet

A DaemonSet ensures that a copy of a pod runs on every node in the cluster (or on a subset of nodes). As nodes are added or removed, the DaemonSet adjusts automatically. Node targeting uses nodeSelector, tolerations, or node affinity to control placement, as in the manifest below.

ds.yaml

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: guisam-ds
spec:
  selector:
    matchLabels:
      system: ds-one
  template:
    metadata:
      labels:
        system: ds-one
    spec:
      containers:
      - name: guisam-ds
        image: nginx:alpine
      nodeSelector:
        kubernetes.io/os: linux   # schedule only onto Linux nodes
      tolerations:
        - operator: Exists        # tolerate every taint, so control-plane nodes are included
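
Apply the manifest and verify that one pod lands on each node; this assumes a multi-node cluster, and pod and node names will differ:

ᐅ k apply -f ds.yaml
ᐅ k get ds guisam-ds
ᐅ k get po -l system=ds-one -o wide
ᐅ k delete -f ds.yaml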

StatefulSet

A StatefulSet manages stateful applications by giving each pod a unique, stable identity and its own persistent storage. Pods are created in order and keep a fixed hostname (e.g., app-0, app-1). It is used for databases such as MySQL, MongoDB, or Kafka, where data consistency and identity matter; a minimal manifest sketch follows the list below.

  • Stable network identity: Each pod gets a predictable name and DNS record.

  • Persistent storage: Uses volumeClaimTemplates so each pod has its own PersistentVolume.

  • Ordered operations: Pods are scaled, updated, and terminated in sequence.
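
A minimal sketch of such a manifest; the names guisam-sts and guisam-svc, the nginx image, and the 1Gi storage request are assumptions chosen to match the style of the earlier examples. The headless Service is required: it is what provides the stable per-pod DNS records.

sts.yaml

apiVersion: v1
kind: Service
metadata:
  name: guisam-svc
spec:
  clusterIP: None            # headless: gives each pod a DNS record like guisam-sts-0.guisam-svc
  selector:
    system: sts-one
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: guisam-sts
spec:
  serviceName: guisam-svc    # links the pods to the headless Service above
  replicas: 2
  selector:
    matchLabels:
      system: sts-one
  template:
    metadata:
      labels:
        system: sts-one
    spec:
      containers:
      - name: guisam-sts
        image: nginx:alpine
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:      # one PVC per pod: data-guisam-sts-0, data-guisam-sts-1
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi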

Job and CronJob

Job and CronJob are resources used to manage batch and scheduled tasks within a cluster. A Job runs a task once or a specified number of times to completion, ideal for one-off operations like data processing or backups. A CronJob automates the execution of tasks at regular intervals using a cron-like syntax, making it suitable for recurring operations such as periodic backups, log cleanup, or data synchronization.

  • Job: Designed for running short-lived, one-time, or on-demand batch tasks. It ensures a specified number of pods finish successfully before the Job is considered complete, and it provides automatic retries on failure (via backoffLimit, sketched after this list) and parallel execution. Jobs are managed via YAML manifests and are used for tasks like data migration, processing, or ad-hoc scripts.

  • CronJob: A specialized Job that schedules tasks to run at specific times or intervals based on a cron expression (e.g., */5 * * * * for every 5 minutes). It creates a new Job instance according to the schedule, allowing for recurring execution of tasks like nightly backups or hourly data syncs. CronJobs offer control over concurrency (e.g., allowing, forbidding, or replacing overlapping jobs) and can manage the retention of old Job history.
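
The retry behavior is controlled by spec.backoffLimit. A minimal sketch, using an always-failing command just to exercise the retries; the name guisam-retry and the limit of 3 are arbitrary examples:

apiVersion: batch/v1
kind: Job
metadata:
  name: guisam-retry
spec:
  backoffLimit: 3               # mark the Job failed after 3 retried pod failures
  template:
    spec:
      containers:
      - name: guisam-retry
        image: alpine
        command: ["/bin/false"] # exits non-zero every time, forcing retries
      restartPolicy: Never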

Job

Create a job manifest.

ᐅ k create job guisam-job \
--dry-run=client -o yaml --image=alpine > guisam-job.yaml

Update the spec: require 5 completions with a parallelism of 2, set a 15-second activeDeadlineSeconds, and have each pod sleep for 5 seconds.

ᐅ diff -u guisam-job.yaml.before guisam-job.yaml
--- guisam-job.yaml.before      2025-10-08 15:37:40.746895109 +0200
+++ guisam-job.yaml     2025-10-08 15:43:04.248469755 +0200
@@ -3,6 +3,9 @@
 metadata:
   name: guisam-job
 spec:
+  completions: 5
+  parallelism: 2
+  activeDeadlineSeconds: 15
   template:
     metadata: {}
     spec:
@@ -10,5 +13,6 @@
       - image: alpine
         name: guisam-job
         resources: {}
+        command: ["/bin/sleep", "5"]
       restartPolicy: Never
 status: {}

Create the Job and check the expected failure: with parallelism 2, the 5 completions run as three waves of 5-second pods, which takes longer than the 15-second activeDeadlineSeconds.

ᐅ k create -f guisam-job.yaml
ᐅ k get job
NAME         STATUS          COMPLETIONS   DURATION   AGE
guisam-job   FailureTarget   2/5           21s        21s
ᐅ k describe job guisam-job | awk 'END{print}'
  Warning  DeadlineExceeded  22s   job-controller  Job was active longer than specified deadline
ᐅ k delete -f guisam-job.yaml

CronJob

Create a CronJob manifest that runs every minute.

ᐅ k create cronjob guisam-cronjob \
--dry-run=client --image=alpine \
--schedule="*/1 * * * *" -o yaml \
> guisam-cronjob.yaml

Update the manifest: each scheduled Job now runs two pods in parallel to two completions, each sleeping 5 seconds, with a 25-second deadline.

ᐅ diff -u guisam-cronjob.yaml{.before,}
--- guisam-cronjob.yaml.before  2025-10-08 16:05:35.577513230 +0200
+++ guisam-cronjob.yaml 2025-10-08 16:16:04.486472549 +0200
@@ -7,13 +7,17 @@
     metadata:
       name: guisam-cronjob
     spec:
+      parallelism: 2
+      completions: 2
       template:
         metadata: {}
         spec:
+          activeDeadlineSeconds: 25
           containers:
           - image: alpine
             name: guisam-cronjob
             resources: {}
+            command: ["/bin/sleep", "5"]
           restartPolicy: OnFailure
  schedule: '*/1 * * * *'

Apply the manifest.

ᐅ k apply -f guisam-cronjob.yaml

Check completions and parallelism.

ᐅ k get cronjob,job,po
NAME                           SCHEDULE      TIMEZONE   SUSPEND   ACTIVE   LAST SCHEDULE   AGE
cronjob.batch/guisam-cronjob   */1 * * * *   <none>     False     1        4s              2m52s

NAME                                STATUS     COMPLETIONS   DURATION   AGE
job.batch/guisam-cronjob-29332217   Complete   2/2           9s         2m4s
job.batch/guisam-cronjob-29332218   Complete   2/2           10s        64s
job.batch/guisam-cronjob-29332219   Running    0/2           4s         4s

NAME                                READY   STATUS      RESTARTS   AGE
pod/guisam-cronjob-29332217-gnjnq   0/1     Completed   0          2m4s
pod/guisam-cronjob-29332217-z755t   0/1     Completed   0          2m4s
pod/guisam-cronjob-29332218-h5x9k   0/1     Completed   0          64s
pod/guisam-cronjob-29332218-jrljx   0/1     Completed   0          64s
pod/guisam-cronjob-29332219-7gm9d   1/1     Running     0          4s
pod/guisam-cronjob-29332219-mcs7k   1/1     Running     0          4s
ᐅ k delete -f guisam-cronjob.yaml
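
The concurrency and history controls mentioned earlier sit at the top level of the CronJob spec. A minimal sketch; the specific values are arbitrary examples:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: guisam-cronjob
spec:
  schedule: '*/1 * * * *'
  concurrencyPolicy: Forbid        # skip a run if the previous Job is still active (also: Allow, Replace)
  successfulJobsHistoryLimit: 3    # keep only the 3 most recent successful Jobs
  failedJobsHistoryLimit: 1        # keep only the most recent failed Job
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: guisam-cronjob
            image: alpine
            command: ["/bin/sleep", "5"]
          restartPolicy: OnFailure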

HorizontalPodAutoscaler (HPA)

The Horizontal Pod Autoscaler (HPA) is a core component that automatically adjusts the number of pod replicas in a Deployment, ReplicaSet, or StatefulSet based on observed metrics such as CPU utilization, memory usage, or custom metrics. It operates as a control loop within the Kubernetes control plane, periodically checking metrics against user-defined targets and scaling the workload up or down to maintain performance and resource efficiency. Note that utilization targets are computed against the pods' resource requests, so the target pods must declare requests, and the cluster needs a metrics source such as metrics-server.

ᐅ k autoscale deploy my-nginx \
--dry-run=client --max=4 --cpu-percent=50 \
-o yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  creationTimestamp: null
  name: my-nginx
spec:
  maxReplicas: 4
  metrics:
  - resource:
      name: cpu
      target:
        averageUtilization: 50
        type: Utilization
    type: Resource
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-nginx
status:
  currentMetrics: null
  desiredReplicas: 0
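
A quick end-to-end usage sketch, assuming metrics-server is installed and a my-nginx Deployment already exists; the 100m CPU request is an arbitrary example:

ᐅ k set resources deploy my-nginx --requests=cpu=100m
ᐅ k autoscale deploy my-nginx --min=1 --max=4 --cpu-percent=50
ᐅ k get hpa my-nginx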