Kubernetes Cluster on macOS

A step-by-step, beginner-friendly walkthrough for running a real multi-node Kubernetes cluster on your Mac using kind (Kubernetes in Docker) and deploying a static HTML page with nginx.


Quick Note

"Kubernetes on Docker Compose" isn't a standard combo. Docker Compose and Kubernetes are two different orchestrators. This tutorial uses kind, which runs a real Kubernetes cluster using Docker containers as nodes, the closest thing to "K8s on Docker" on a Mac.

Part 1 · Step-by-Step Tutorial

Follow these steps in order. Each one builds on the previous.

Step 1 · Install Prerequisites

Install Homebrew, Docker Desktop, kind, and kubectl.

# Install Homebrew (if not installed)
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

# Install Docker Desktop
brew install --cask docker

# Install kind and kubectl
brew install kind kubectl

# Verify
docker --version
kind --version
kubectl version --client

Step 2 · Create the Cluster Config

Create a working directory and a kind-config.yaml file.

mkdir ~/k8s-demo && cd ~/k8s-demo

kind-config.yaml:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 30080
        hostPort: 8080
        protocol: TCP
  - role: worker
  - role: worker

Step 3 · Create the Cluster

kind create cluster --name demo --config kind-config.yaml

# Verify
kubectl cluster-info --context kind-demo
kubectl get nodes

Step 4 · Create Static HTML

Create index.html:

<!DOCTYPE html>
<html>
<head><title>Hello from K8s</title></head>
<body>
  <h1>Hello from Kubernetes on Mac!</h1>
  <p>Served by nginx in a kind cluster.</p>
</body>
</html>

Step 5 · Load HTML into a ConfigMap

kubectl create configmap html-content --from-file=index.html

# Verify
kubectl get configmap html-content -o yaml

Step 6 · Create Deployment & Service

nginx-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-static
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-static
  template:
    metadata:
      labels:
        app: nginx-static
    spec:
      containers:
        - name: nginx
          image: nginx:alpine
          ports:
            - containerPort: 80
          volumeMounts:
            - name: html
              mountPath: /usr/share/nginx/html
      volumes:
        - name: html
          configMap:
            name: html-content
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-static
spec:
  type: NodePort
  selector:
    app: nginx-static
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080

Step 7 · Apply & View

kubectl apply -f nginx-deployment.yaml
kubectl get pods -w
kubectl get svc nginx-static

# Then open in browser:
# http://localhost:8080
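
# Or check from the terminal (uses the host port mapped in Step 2):
curl -s http://localhost:8080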

Step 8 · Cleanup

kubectl delete -f nginx-deployment.yaml
kubectl delete configmap html-content
kind delete cluster --name demo

Part 2 · Deep Explanations

What each command, file, and setting actually does, in plain language.

Homebrew

Homebrew is macOS's package manager: think of it as the "App Store for command-line tools." Instead of downloading zip files and dragging apps around, you type brew install something and it handles everything.

Docs: brew.sh

Docker Desktop

Docker lets you run apps in "containers": isolated mini-environments. On Mac, Docker Desktop runs a tiny Linux VM under the hood because Docker is Linux-native. kind uses Docker to create each Kubernetes node as a container.

Docs: Docker Docs

kind (K8s in Docker)

kind runs Kubernetes nodes as Docker containers instead of real VMs or machines. Perfect for local development and testing. Each "node" you see in kubectl get nodes is actually a Docker container.

Docs: kind Docs

kubectl

kubectl (pronounced "koob-control" or "koob-cuttle") is the command-line remote control for Kubernetes. Every time you want to see, create, or delete something in your cluster, you use kubectl.

Docs: kubectl Docs

Pods

A Pod is the smallest unit in Kubernetes: it wraps one (or more) containers. You don't usually create pods directly; you create a Deployment and it creates pods for you. If a pod dies, the Deployment makes a new one.

Docs: Pods

Deployment

A Deployment is a "blueprint + manager" for your pods. It says "I want 2 copies of this nginx container running at all times" and K8s makes sure that's always true, even if a pod crashes or a node dies.

Docs: Deployments
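
To see the "manager" part in action with the nginx-static Deployment from Part 1 (a small sketch; run it while those resources are still deployed):

# Scale up, watch the rollout, then scale back down
kubectl scale deployment nginx-static --replicas=4
kubectl rollout status deployment/nginx-static
kubectl scale deployment nginx-static --replicas=2

# Revision history and instant rollback
kubectl rollout history deployment/nginx-static
kubectl rollout undo deployment/nginx-static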

Service

Pods come and go, and each gets a new IP. A Service gives you one stable address that always forwards to the live pods. Think of it as a receptionist that knows which pods are currently awake.

Docs: Services
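
You can watch the receptionist keep its list current using the tutorial's nginx-static Service (a small sketch):

# One stable ClusterIP, backed by a live list of pod IPs (Endpoints)
kubectl get svc nginx-static
kubectl get endpoints nginx-static

# Delete the pods and look again: the pod IPs change, the Service address doesn't
kubectl delete pod -l app=nginx-static --wait=false
kubectl get endpoints nginx-static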

ConfigMap

A ConfigMap stores configuration data (files, environment variables, key-value pairs) separately from your container image. We use one here to inject our index.html into nginx without rebuilding the image.

Docs: ConfigMaps
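
To change the page later, regenerate the ConfigMap from the edited file; mounted ConfigMaps are refreshed by the kubelet after a short delay, and a rollout restart makes the change visible immediately (a sketch using the tutorial's names):

# Re-render the ConfigMap from index.html and apply it in place
kubectl create configmap html-content --from-file=index.html \
  --dry-run=client -o yaml | kubectl apply -f -

# Optional: restart the pods so the new page shows up right away
kubectl rollout restart deployment/nginx-static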

NodePort & Port Mapping

NodePort opens a port on every cluster node (here 30080). In kind, nodes are Docker containers, so we also map container port 30080 to Mac host port 8080 in the kind config. That's the chain that lets you visit localhost:8080.

Docs: NodePort
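
If you want to see each hop of that chain yourself (a sketch; kind names the node container demo-control-plane for a cluster called demo):

# Mac host port 8080 is mapped to port 30080 on the kind node container
docker port demo-control-plane

# The NodePort forwards to the Service, which forwards to a pod on port 80
kubectl get svc nginx-static
curl -s http://localhost:8080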

YAML Manifests

Kubernetes is "declarative": you describe what you want in a YAML file and K8s figures out how to make it happen. The --- separator lets you put multiple resources (like Deployment + Service) in one file.

Docs: K8s Objects

Part 3 · Helm & Tools for Defining Kubernetes

Raw YAML works for a few files, but real apps have dozens of manifests across multiple environments. These tools solve that.

Helm: The Package Manager for Kubernetes

Helm is to Kubernetes what apt is to Ubuntu or brew is to macOS. Instead of writing 10 raw YAML files, you write a chart (a parameterized template bundle) and install it with one command. Change the values file, get a different deployment. It's how much of the K8s ecosystem ships software.

Docs: helm.sh

Helm in 30 seconds

# Install Helm
brew install helm

# Add a chart repository
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# Install PostgreSQL in one command (no YAML writing!)
helm install my-db bitnami/postgresql \
  --set auth.postgresPassword=secret123 \
  --set primary.persistence.size=10Gi

# List what you've installed
helm list

# Upgrade with new values
helm upgrade my-db bitnami/postgresql --set primary.persistence.size=20Gi

# Rollback instantly if something breaks
helm rollback my-db 1

# Uninstall everything cleanly
helm uninstall my-db

Other Tools to Define K8s

Helm isn't the only option. Each of these solves a slightly different problem.

Kustomize

Overlay-based patching. Built into kubectl. No templating: just base YAML + patches per environment.

Docs: kustomize.io
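
A minimal sketch of the overlay idea (the file layout and patch values are illustrative):

# base/kustomization.yaml
resources:
  - nginx-deployment.yaml

# overlays/prod/kustomization.yaml
resources:
  - ../../base
patches:
  - target:
      kind: Deployment
      name: nginx-static
    patch: |-
      - op: replace
        path: /spec/replicas
        value: 5

# Apply an overlay with plain kubectl:
#   kubectl apply -k overlays/prod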

Argo CD

GitOps controller. Watches a Git repo and auto-syncs any YAML/Helm/Kustomize changes to your cluster.

Docs: argo-cd.readthedocs.io

Flux CD

Another GitOps engine, CNCF graduated. Native Helm & Kustomize support, multi-tenant friendly.

Docs: fluxcd.io

Terraform

Infrastructure-as-code for everything: cloud VMs, databases, and K8s resources via the Kubernetes provider.

Docs: terraform.io

Pulumi

Write K8s manifests in real programming languages: TypeScript, Python, Go, C#. No YAML at all.

Docs: pulumi.com

Jsonnet / CUE

Data-templating languages favored by Google/Grafana Labs. More type-safe than Helm's Go templates.

Docs: cuelang.org

Skaffold

Google's dev loop tool: builds images, deploys to K8s, and hot-reloads on file changes. Great with kind.

Docs: skaffold.dev

Tilt

Microservice dev environment: Tiltfiles define multi-service setups with one command. Live UI included.

Docs: tilt.dev

Crossplane

Manage cloud infra (RDS, S3, GCS) as K8s resources. "Everything is a CRD" philosophy.

Docs: crossplane.io

Part 4 · Real Scenarios

Two worked examples: a simple HA static site, and a production-style 3-tier app.

SCENARIO 1 · BEGINNER

Simple HTML Site with Failover

A static HTML page that keeps serving even if a pod (or whole node) dies.

The failover ingredients

  • replicas: 3 - three pods, so losing one doesn't interrupt service
  • podAntiAffinity - spread pods across different nodes
  • livenessProbe - restart stuck containers automatically
  • readinessProbe - don't send traffic to pods that aren't ready
  • PodDisruptionBudget - guarantee at least 2 pods stay up during maintenance

nginx-ha.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ha
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-ha
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: nginx-ha
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app: nginx-ha
                topologyKey: kubernetes.io/hostname
      containers:
        - name: nginx
          image: nginx:alpine
          ports:
            - containerPort: 80
          resources:
            requests:
              cpu: 50m
              memory: 64Mi
            limits:
              cpu: 200m
              memory: 128Mi
          livenessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 2
            periodSeconds: 5
          volumeMounts:
            - name: html
              mountPath: /usr/share/nginx/html
      volumes:
        - name: html
          configMap:
            name: html-content
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-ha
spec:
  type: NodePort
  selector:
    app: nginx-ha
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: nginx-ha-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: nginx-ha

Try the failover yourself

kubectl apply -f nginx-ha.yaml

# Verify 3 pods running
kubectl get pods -l app=nginx-ha

# Simulate a pod crash - the Deployment replaces it automatically
kubectl delete pod $(kubectl get pods -l app=nginx-ha --field-selector=status.phase=Running -o name | head -n 1)

# Watch it recover
kubectl get pods -l app=nginx-ha -w

# Test with continuous curl while killing pods
while true; do curl -s http://localhost:8080 | grep -o "Hello"; sleep 0.5; done
SCENARIO 2 · ADVANCED

3-Tier App: Frontend + API + Database

A realistic production-style setup: React/Vue frontend, Node/Python API, PostgreSQL database, all wired together.

Architecture at a glance

  Internet
     |
     v
  [ Ingress ]  <- TLS termination, routing
     |
     +--- /          -> [ Frontend Deployment ]  (nginx + built SPA, 3 replicas)
     |
     +--- /api/*     -> [ API Deployment ]       (Node/Python, 3 replicas)
                               |
                               v
                        [ Postgres Service ]
                               |
                               v
                        [ StatefulSet ]           (1 primary, persistent volume)
                               |
                               v
                        [ PersistentVolumeClaim ] (10 GiB disk)

  Plus: [ Secret (db creds) ]  [ ConfigMap (api config) ]  [ HPA (autoscale) ]

The pieces you'd deploy

Frontend: Deployment · Service · HPA · ConfigMap (env.js)
API: Deployment · Service · HPA · ConfigMap · Secret
Database: StatefulSet · Headless Service · PVC · Secret

Excerpted manifests (the full file is ~300 lines; these are the key parts):

# --- DATABASE (StatefulSet for stable identity + persistent disk) ---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 1
  selector:
    matchLabels: { app: postgres }
  template:
    metadata:
      labels: { app: postgres }
    spec:
      containers:
        - name: postgres
          image: postgres:16-alpine
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_DB
              value: appdb
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef: { name: db-creds, key: username }
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef: { name: db-creds, key: password }
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata: { name: data }
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests: { storage: 10Gi }
---
# --- API Deployment ---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 3
  selector:
    matchLabels: { app: api }
  template:
    metadata:
      labels: { app: api }
    spec:
      containers:
        - name: api
          image: myorg/api:v1.2.0
          ports:
            - containerPort: 3000
          env:
            - name: DB_USER
              valueFrom:
                secretKeyRef: { name: db-creds, key: username }
            - name: DB_PASS
              valueFrom:
                secretKeyRef: { name: db-creds, key: password }
            # $(VAR) references only resolve to variables defined earlier in this list
            - name: DATABASE_URL
              value: "postgres://$(DB_USER):$(DB_PASS)@postgres:5432/appdb"
          readinessProbe:
            httpGet: { path: /health, port: 3000 }
          livenessProbe:
            httpGet: { path: /health, port: 3000 }
            initialDelaySeconds: 15
---
# --- FRONTEND Deployment ---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels: { app: frontend }
  template:
    metadata:
      labels: { app: frontend }
    spec:
      containers:
        - name: web
          image: myorg/frontend:v1.2.0
          ports:
            - containerPort: 80
---
# --- INGRESS (routes / to frontend, /api/* to api; the API serves under the /api prefix) ---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  rules:
    - host: myapp.local
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api
                port: { number: 3000 }
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend
                port: { number: 80 }
---
# --- HPA autoscaling for API ---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api
  minReplicas: 3
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
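
One piece the excerpt leaves implicit: the StatefulSet's serviceName and the API's postgres:5432 hostname both assume a headless Service named postgres. A minimal sketch of that Service:

apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  clusterIP: None        # headless: gives the StatefulSet pod a stable DNS identity
  selector:
    app: postgres
  ports:
    - port: 5432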

The Helm version (much shorter)

# Instead of 300 lines of YAML, use a Helm chart:
helm repo add bitnami https://charts.bitnami.com/bitnami

# Install Postgres
helm install db bitnami/postgresql \
  --set auth.database=appdb \
  --set primary.persistence.size=10Gi

# Install your own chart
helm install myapp ./charts/myapp \
  --values values-prod.yaml \
  --set api.image.tag=v1.2.0 \
  --set frontend.image.tag=v1.2.0

Part 6 · Observability, Security & Production Readiness

Deploying an app is step one. Keeping it healthy, secure, and observable is where real operations begin.


"You can't fix what you can't see"

Observability is the trio of metrics (numbers over time), logs (what happened), and traces (how a request flowed). Security is RBAC, NetworkPolicies, and Secrets. Production-readiness is resource limits, autoscaling, backups, and health checks. This section covers all of it.


Observability Stack

Prometheus

The de facto K8s metrics database. Pull-based: it scrapes /metrics endpoints from your pods on a fixed interval (15-30 s is typical) and stores the results as time-series data.

Docs: prometheus.io

Grafana

Visualization layer on top of Prometheus/Loki/etc. Dashboards, alerts, annotations. The "face" of most observability stacks.

Docs: grafana.com

Loki

"Prometheus for logs." Indexes only labels, not content, so it's dramatically cheaper than Elasticsearch.

Docs: Grafana Loki

OpenTelemetry

Vendor-neutral standard for metrics, logs, and traces. Instrument once, send anywhere (Jaeger, Tempo, Datadog, etc.).

Docs: opentelemetry.io

Jaeger / Tempo

Distributed tracing backends. See how a single request flows across 20 microservices and where it slowed down.

Docs: jaegertracing.io

Alertmanager

Routes Prometheus alerts to Slack, PagerDuty, email, etc. Groups, deduplicates, and silences alerts intelligently.

Docs: Alertmanager

Install the full stack in your kind cluster

The kube-prometheus-stack Helm chart gives you Prometheus, Grafana, Alertmanager, node-exporter, and a ton of pre-built dashboards in one command.

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

# Install everything in a monitoring namespace
helm install kps prometheus-community/kube-prometheus-stack \
  --namespace monitoring \
  --create-namespace

# Port-forward Grafana (default: admin / prom-operator)
kubectl port-forward -n monitoring svc/kps-grafana 3000:80

# Port-forward Prometheus
kubectl port-forward -n monitoring svc/kps-kube-prometheus-stack-prometheus 9090:9090

# Add Loki for logs (the loki-stack chart lives in the grafana repo)
helm repo add grafana https://grafana.github.io/helm-charts
helm install loki grafana/loki-stack --namespace monitoring

Security Basics


RBAC (Role-Based Access Control)

Controls who can do what in your cluster. Users, groups, and ServiceAccounts get bound to Roles (namespaced) or ClusterRoles (cluster-wide). Principle of least privilege.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
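
A Role does nothing until it is bound to a subject; a minimal RoleBinding sketch (the ServiceAccount name is illustrative):

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
  - kind: ServiceAccount
    name: ci-bot            # illustrative ServiceAccount
    namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io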

NetworkPolicy

Pod-level firewall rules. By default, every pod can talk to every other pod; NetworkPolicies let you say "only the API pods can reach the DB pod."

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-api
spec:
  podSelector:
    matchLabels: { app: postgres }
  ingress:
    - from:
        - podSelector:
            matchLabels: { app: api }
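
Allow rules like this are usually paired with a namespace-wide default deny, so anything not explicitly allowed is blocked (a sketch):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}        # selects every pod in the namespace
  policyTypes:
    - Ingress            # no ingress rules listed, so all inbound traffic is denied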

Secrets & External Secrets

K8s Secrets store passwords/tokens (base64-encoded, not encrypted by default!). In production, pair with Sealed Secrets, External Secrets Operator, or HashiCorp Vault.

kubectl create secret generic db-creds \
  --from-literal=username=admin \
  --from-literal=password='s3cret!'

# Better: External Secrets Operator pulls from Vault/AWS SM
kubectl apply -f external-secret.yaml

Falco & OPA Gatekeeper

Runtime security (Falco detects suspicious syscalls) and policy enforcement (Gatekeeper blocks non-compliant resources at admission time). CNCF graduated projects.

# Install Falco for runtime threat detection
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm install falco falcosecurity/falco \
  --namespace falco --create-namespace

# Install OPA Gatekeeper for admission policies
helm repo add gatekeeper https://open-policy-agent.github.io/gatekeeper/charts
helm install gatekeeper gatekeeper/gatekeeper \
  --namespace gatekeeper-system --create-namespace
Docs: Falco

Production Readiness Checklist

Reliability

  • Resource requests & limits on every container (no noisy neighbors)
  • Liveness & readiness probes on every pod
  • PodDisruptionBudget to survive node drains
  • HPA (Horizontal Pod Autoscaler) for traffic spikes
  • Multi-AZ node pools for disaster recovery
  • Backups (Velero) for cluster state + PVCs

Security & Ops

  • Non-root containers + read-only root filesystem (see the sketch below)
  • Image scanning (Trivy, Snyk) in CI pipeline
  • NetworkPolicies to limit pod-to-pod traffic
  • RBAC with least-privilege ServiceAccounts
  • Secrets from Vault/External Secrets, not Git
  • Centralized logging + metrics + alerts
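
A container-level securityContext sketch for the non-root / read-only item above (values are illustrative):

securityContext:
  runAsNonRoot: true
  runAsUser: 10001
  readOnlyRootFilesystem: true
  allowPrivilegeEscalation: false
  capabilities:
    drop: ["ALL"]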

Part 7 · Rancher: Kubernetes Management Platform

Once you have more than one cluster (dev, staging, prod, edge locations...), managing them with just kubectl gets painful. Rancher solves that.

Rancher by SUSE

Rancher is an open-source multi-cluster Kubernetes management platform. Think of it as "mission control" for all your K8s clusters, whether they're in AWS, GCP, Azure, on-prem VMware, bare metal, or edge devices. One web UI, one auth system, one policy layer, all your clusters.

Docs: rancher.com

What Rancher Actually Does

Multi-Cluster Management

Import existing clusters (EKS, GKE, AKS, kind, k3s, anything) or provision new ones. Single pane of glass for all of them.

Centralized Auth & RBAC

Plug in Active Directory, LDAP, SAML, GitHub, or Okta once. Users get consistent RBAC across every cluster.

App Catalog

Curated Helm chart library with a one-click install UI. Internal teams can publish private charts for their apps.

Fleet (GitOps at Scale)

Rancher's built-in GitOps engine. Push a change to Git, Fleet rolls it out to 100+ clusters with group targeting.

Built-in Monitoring

One-click install of Prometheus + Grafana + Alertmanager, pre-configured and integrated with Rancher's UI.

CLI + Web UI + API

kubectl still works perfectly. Rancher UI is for visibility and day-2 ops. Full REST API for automation.

The Rancher Family of Projects

SUSE (who acquired Rancher Labs in 2020) maintains several related projects that often get confused. Here's the map:

Rancher (Manager)

The multi-cluster management platform itself. Runs on any K8s cluster and manages all the others.

Use when: You have 2+ clusters or multi-tenant requirements.

k3s

Lightweight certified K8s distribution. A single 60 MB binary. Runs on Raspberry Pi, IoT, edge, and embedded systems.

Use when: Resources are tight or you're at the edge.

RKE2 (Rancher Kubernetes Engine 2)

Security-focused, FIPS-compliant K8s distribution for government/regulated environments. Successor to the original RKE.

Use when: You need hardened K8s for on-prem or gov workloads.

Fleet

GitOps engine built into Rancher, but also usable standalone. Designed to manage 1,000,000+ clusters from one Git repo.

Use when: You need GitOps across many clusters.

Longhorn

Cloud-native distributed block storage for K8s. Snapshots, backups, replication, all via CRDs and a slick UI.

Use when: You need stateful workloads without a cloud storage provider.

NeuVector

Container security platform: runtime protection, vulnerability scanning, compliance, zero-trust network segmentation.

Use when: Security/compliance is top priority.

Install Rancher in your kind cluster

You can run Rancher locally to try it out. Here's the fastest path using Helm:

# Add the Rancher Helm repo
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
helm repo update

# cert-manager is required (Rancher uses it for TLS)
helm repo add jetstack https://charts.jetstack.io
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --set installCRDs=true

# Create the namespace for Rancher
kubectl create namespace cattle-system

# Install Rancher (self-signed cert for local use)
helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=rancher.localhost \
  --set replicas=1 \
  --set bootstrapPassword=admin

# Wait for it to come up
kubectl -n cattle-system rollout status deploy/rancher

# Port-forward and open in browser
kubectl port-forward -n cattle-system svc/rancher 8443:443
# Then visit: https://rancher.localhost:8443

Alternative: single-container Docker install (fastest for a quick look):

docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  --privileged \
  rancher/rancher:latest

# Then browse to https://localhost and grab the bootstrap password:
docker logs <container-id> 2>&1 | grep "Bootstrap Password:"

Real-World Use Cases for Rancher

Retail & Edge Computing

Companies like Walmart and Lowe's run K8s clusters in every store (point-of-sale, inventory, signage). Rancher + k3s manages thousands of these edge clusters from HQ with Fleet pushing updates overnight.

Regulated Enterprises

Banks and insurance firms use Rancher + RKE2 on-prem because they can't (or won't) put data in public clouds. Rancher gives them a cloud-like UX on bare metal.

Healthcare & Life Sciences

Hospital systems run HIPAA-compliant workloads across on-prem and cloud. Rancher's unified RBAC and audit logging simplify compliance reporting across heterogeneous clusters.

Platform Engineering Teams

Internal platform teams at mid-to-large companies use Rancher as their "internal developer portal." Dev teams get project-scoped namespaces with self-service access via the Rancher UI.

Industrial IoT

Factories, wind farms, oil rigs. k3s on the ruggedized hardware, Rancher back at HQ. Deutsche Telekom, Siemens, and various manufacturers use this pattern.

Multi-Cloud Strategy

Companies hedging against cloud vendor lock-in run workloads across AWS, GCP, and Azure simultaneously. Rancher abstracts the differences so developers don't need to care which cloud a pod lands on.

Rancher vs. Alternatives

Tool            Multi-Cluster     Open Source        Best For
Rancher         Yes (native)      Yes (Apache 2.0)   Hybrid/multi-cloud, edge, on-prem
OpenShift       Yes (via ACM)     Open core          Red Hat enterprise shops
Google Anthos   Yes (native)      No (commercial)    GCP-centric multi-cloud
VMware Tanzu    Yes (via TMC)     No (commercial)    VMware vSphere environments
Portainer       Yes               Open core          Smaller teams, Docker + K8s
Lens IDE        Yes (desktop)     Yes (MIT)          Individual developers

Part 9 · CI/CD Pipelines & The Complete Picture

How code goes from a developer's laptop to running pods in production, automatically.


From git push to production pods

CI/CD (Continuous Integration / Continuous Delivery) is the bridge between "code on a laptop" and "running in the cluster." In Kubernetes-world, there are two main patterns: push-based CI/CD (pipeline runs kubectl/Helm to deploy) and GitOps (pipeline commits to a Git repo; a controller in the cluster syncs from Git). GitOps has become the dominant pattern for production K8s.


The Big Picture: How Everything Fits Together

Every tool in this tutorial (Parts 1-8) has a place in the end-to-end flow. Here's how they connect:

[Diagram: the end-to-end flow, 1. CODE -> 2. BUILD & TEST -> 3. PACKAGE -> 4. DEPLOY (GitOps) -> 5. RUN, connecting the developer and source repo (GitHub/GitLab), the CI pipeline (tests in a kind cluster, Trivy/Snyk scans, image builds via Docker/Buildpacks/Kaniko), the container registry (Docker Hub, GHCR, ECR, GCR, Harbor), Helm charts and the GitOps repo with updated image tags, the GitOps agent (Argo CD / Flux) and Rancher, the K8s cluster (Deployments, Services, Ingress, ConfigMaps, Secrets, HPA, PDB), the observability stack (Prometheus, Grafana, Loki) and the cross-cutting security layer (RBAC, NetworkPolicy, Vault, Falco), ending at the end user via LoadBalancer/Ingress.]

The Flow in 8 Steps:

1. Developer pushes code to the source repo on GitHub/GitLab.
2. CI pipeline triggers: runs tests (often inside a kind cluster), scans for vulnerabilities.
3. CI builds a Docker image and pushes it to the container registry with a version tag.
4. CI commits the new image tag to a GitOps repo (Helm values or Kustomize overlay).
5. GitOps agent (Argo CD or Flux) inside the cluster detects the Git change.
6. Agent renders the manifests (via Helm/Kustomize) and applies them to K8s; the cluster pulls the new image.
7. Kubernetes rolls out pods gradually, respecting PDBs; HPA scales with traffic.
8. Prometheus scrapes metrics; Grafana dashboards light up; Alertmanager pages on-call if things go wrong.
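
To make step 5 concrete, a minimal Argo CD Application sketch (the repo URL, path, and namespaces are illustrative):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/myorg/gitops-manifests   # the GitOps repo from the flow above
    targetRevision: main
    path: apps/myapp
    helm:
      valueFiles:
        - values-prod.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: myapp
  syncPolicy:
    automated:
      prune: true        # remove resources deleted from Git
      selfHeal: true     # undo manual drift in the cluster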

CI/CD Tools for Kubernetes

GitHub Actions

YAML workflows in .github/workflows/. Free for public repos, generous free tier for private. Huge marketplace of pre-built actions.

GitLab CI/CD

Built into GitLab. One .gitlab-ci.yml file. Native K8s integration, plus an Auto DevOps feature that infers pipelines.

Tekton

K8s-native CI/CD. Every step is a pod. Tasks and Pipelines are CRDs. A Continuous Delivery Foundation project, and the engine behind OpenShift Pipelines and Jenkins X.

Docs: tekton.dev

Jenkins X

Opinionated cloud-native successor to Jenkins, built on Tekton + Argo CD. Full GitOps workflow out of the box.

Docs: jenkins-x.io

Argo Workflows

K8s-native workflow engine. Sibling to Argo CD. Great for CI pipelines, ML training, data processing: anything DAG-shaped.

CircleCI

SaaS CI known for speed and developer experience. First-class K8s deployment support via orbs (reusable config packages).

Drone CI

Simple, container-native CI. Every step runs in a Docker container. Self-hostable and lightweight. Owned by Harness now.

Docs: drone.io

Buildkite

Hybrid: SaaS control plane + self-hosted agents. Popular at Shopify, Airbnb, Slack for running builds on their own infrastructure.

Harness

Commercial continuous delivery platform with strong progressive delivery, rollback automation, and cloud cost management.

Real Pipeline Examples

GitHub Actions: Full Build → Push → GitOps Update

.github/workflows/deploy.yml

name: Build and Deploy

on:
  push:
    branches: [main]

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
      - run: npm ci
      - run: npm test

  build-and-push:
    needs: test
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    outputs:
      version: ${{ steps.meta.outputs.version }}   # consumed by the update-gitops job
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-buildx-action@v3

      - uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Extract version
        id: meta
        run: echo "version=$(git rev-parse --short HEAD)" >> $GITHUB_OUTPUT

      - uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: |
            ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ steps.meta.outputs.version }}
            ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:latest

      - name: Security scan with Trivy
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ steps.meta.outputs.version }}
          exit-code: '1'
          severity: 'CRITICAL,HIGH'

  update-gitops:
    needs: build-and-push
    runs-on: ubuntu-latest
    steps:
      - name: Checkout GitOps repo
        uses: actions/checkout@v4
        with:
          repository: myorg/gitops-manifests
          token: ${{ secrets.GITOPS_TOKEN }}

      - name: Update image tag in Helm values
        run: |
          sed -i "s|tag: .*|tag: ${{ needs.build-and-push.outputs.version }}|" \
            apps/myapp/values-prod.yaml

      - name: Commit and push
        run: |
          git config user.name "ci-bot"
          git config user.email "[email protected]"
          git add .
          git commit -m "chore: bump myapp to ${{ needs.build-and-push.outputs.version }}"
          git push
          # Argo CD will now sync this change to the cluster automatically

Tekton: K8s-Native Pipeline

Tekton runs inside your K8s cluster; each Task is a pod.

apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: build-and-deploy
spec:
  params:
    - name: repo-url
    - name: image-name
  workspaces:
    - name: shared-data
  tasks:
    - name: fetch-source
      taskRef:
        name: git-clone
      workspaces:
        - name: output
          workspace: shared-data
      params:
        - name: url
          value: $(params.repo-url)

    - name: run-tests
      runAfter: [fetch-source]
      taskRef:
        name: npm-test
      workspaces:
        - name: source
          workspace: shared-data

    - name: build-image
      runAfter: [run-tests]
      taskRef:
        name: kaniko           # builds images without Docker daemon
      params:
        - name: IMAGE
          value: $(params.image-name)
      workspaces:
        - name: source
          workspace: shared-data

    - name: deploy
      runAfter: [build-image]
      taskRef:
        name: kubernetes-actions
      params:
        - name: script
          value: |
            kubectl set image deployment/myapp \
              myapp=$(params.image-name)
---
# Trigger it with:
# tkn pipeline start build-and-deploy \
#   -p repo-url=https://github.com/me/app \
#   -p image-name=ghcr.io/me/app:v1 \
#   -w name=shared-data,claimName=ws-pvc

Jenkins X: Preview Environments per PR

Jenkins X auto-generates a preview environment for every pull request. jenkins-x.yml:

buildPack: none
pipelineConfig:
  pipelines:
    pullRequest:
      pipeline:
        stages:
          - name: ci
            steps:
              - name: build
                image: golang:1.21
                command: go build ./...
              - name: test
                command: go test ./...
              - name: build-image
                command: /kaniko/executor --dockerfile=Dockerfile
    release:
      pipeline:
        stages:
          - name: release
            steps:
              - name: promote
                # Jenkins X auto-promotes to staging
                # then waits for manual approval to prod
                command: jx promote --version $VERSION --env staging

Every PR gets its own namespace with a live URL to click and test; some teams call it "PR-driven development."

GitLab CI: Multi-stage with K8s Deploy

stages:
  - test
  - build
  - scan
  - deploy

variables:
  IMAGE: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA

test:
  stage: test
  image: node:20
  script:
    - npm ci
    - npm test

build:
  stage: build
  image: docker:24
  services: [docker:24-dind]
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t $IMAGE .
    - docker push $IMAGE

scan:
  stage: scan
  image: aquasec/trivy
  script:
    - trivy image --exit-code 1 --severity HIGH,CRITICAL $IMAGE

deploy-prod:
  stage: deploy
  image: bitnami/kubectl
  only: [main]
  environment:
    name: production
    url: https://myapp.example.com
  script:
    - kubectl config use-context myorg/gitops:prod
    - kubectl set image deployment/myapp myapp=$IMAGE
    - kubectl rollout status deployment/myapp

CI/CD Patterns Worth Knowing

Blue/Green Deployment

Run two identical environments. "Green" serves traffic; deploy new version to "Blue"; flip the Service selector. Instant rollback if something breaks.

Tools: Argo Rollouts, Flagger, manual kubectl patch
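
A minimal sketch of the selector flip with kubectl patch (the Service name and track label are illustrative):

# Point the Service at the newly deployed "blue" pods
kubectl patch service myapp -p '{"spec":{"selector":{"app":"myapp","track":"blue"}}}'

# Instant rollback: flip the selector back to "green"
kubectl patch service myapp -p '{"spec":{"selector":{"app":"myapp","track":"green"}}}'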

Canary Deployment

Send 5% of traffic to the new version. Monitor metrics. If healthy, gradually shift to 25%, 50%, 100%. If errors spike, auto-rollback.

Tools: Flagger, Argo Rollouts, Istio + Prometheus

Feature Flags

Deploy new code dark; toggle it on for specific users via a flag service. Decouples "deploy" from "release."

Tools: LaunchDarkly, Flagsmith, Unleash, OpenFeature

Preview Environments

Every pull request gets its own namespace with the new code running. Reviewers click a URL to test before merging.

Tools: Jenkins X, vcluster, Okteto, Gitpod

Image Signing & Provenance

Sign images cryptographically in CI; verify signatures at admission time in K8s. Supply-chain security.

Tools: Sigstore/cosign, Notary, SLSA framework
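
A minimal sketch with cosign (image name and key files are illustrative):

# In CI: create a key pair once, then sign each image you push
cosign generate-key-pair
cosign sign --key cosign.key ghcr.io/myorg/myapp:v1.2.3

# At deploy/admission time: verify the signature before the image runs
cosign verify --key cosign.pub ghcr.io/myorg/myapp:v1.2.3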

Progressive Delivery + SLO Gates

Automatically halt a rollout if error rate or latency breaches defined SLOs. The safety net for canary deploys.

Tools: Flagger + Prometheus, Argo Rollouts + Kayenta

Part 8 · Official Documentation