Quick Note
"Kubernetes on Docker Compose" isn't a standard combo. Docker Compose and Kubernetes are two different orchestrators. This tutorial uses kind, which runs a real Kubernetes cluster using Docker containers as nodes โ the closest thing to "K8s on Docker" on a Mac.
Part 1 · Step-by-Step Tutorial
Follow these steps in order. Each one builds on the previous.
Install Prerequisites
Install Homebrew, Docker Desktop, kind, and kubectl.
# Install Homebrew (if not installed)
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

# Install Docker Desktop
brew install --cask docker

# Install kind and kubectl
brew install kind kubectl

# Verify
docker --version
kind --version
kubectl version --client
Create the Cluster Config
Create a working directory and a kind-config.yaml file.
mkdir ~/k8s-demo && cd ~/k8s-demo
kind-config.yaml:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
extraPortMappings:
- containerPort: 30080
hostPort: 8080
protocol: TCP
- role: worker
- role: worker
Create the Cluster
kind create cluster --name demo --config kind-config.yaml

# Verify
kubectl cluster-info --context kind-demo
kubectl get nodes
Create Static HTML
Create index.html:
<!DOCTYPE html>
<html>
<head><title>Hello from K8s</title></head>
<body>
  <h1>Hello from Kubernetes on Mac!</h1>
  <p>Served by nginx in a kind cluster.</p>
</body>
</html>
Load HTML into a ConfigMap
kubectl create configmap html-content --from-file=index.html

# Verify
kubectl get configmap html-content -o yaml
Create Deployment & Service
nginx-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-static
spec:
replicas: 2
selector:
matchLabels:
app: nginx-static
template:
metadata:
labels:
app: nginx-static
spec:
containers:
- name: nginx
image: nginx:alpine
ports:
- containerPort: 80
volumeMounts:
- name: html
mountPath: /usr/share/nginx/html
volumes:
- name: html
configMap:
name: html-content
---
apiVersion: v1
kind: Service
metadata:
name: nginx-static
spec:
type: NodePort
selector:
app: nginx-static
ports:
- port: 80
targetPort: 80
nodePort: 30080
Apply & View
kubectl apply -f nginx-deployment.yaml
kubectl get pods -w
kubectl get svc nginx-static

# Then open in browser:
# http://localhost:8080
Cleanup
kubectl delete -f nginx-deployment.yaml
kubectl delete configmap html-content
kind delete cluster --name demo
Part 2 · Deep Explanations
What each command, file, and setting actually does, in plain language.
Homebrew
Homebrew is macOS's package manager: think of it as the "App Store for command-line tools."
Instead of downloading zip files and dragging apps around, you type brew install something and it handles everything.
Docker Desktop
Docker lets you run apps in "containers": isolated mini-environments. On Mac, Docker Desktop runs a tiny Linux VM under the hood because Docker is Linux-native. kind uses Docker to create each Kubernetes node as a container.
kind (K8s in Docker)
kind runs Kubernetes nodes as Docker containers instead of real VMs or machines. Perfect for local development
and testing. Each "node" you see in kubectl get nodes is actually a Docker container.
kubectl
kubectl (pronounced "koob-control" or "koob-cuttle") is the command-line remote control for Kubernetes. Every time you want to see, create, or delete something in your cluster, you use kubectl.
Pods
A Pod is the smallest unit in Kubernetes: it wraps one (or more) containers. You don't usually create pods directly; you create a Deployment and it creates pods for you. If a pod dies, the Deployment makes a new one.
Deployment
A Deployment is a "blueprint + manager" for your pods. It says "I want 2 copies of this nginx container running at all times" and K8s makes sure that's always true, even if a pod crashes or a node dies.
Service
Pods come and go, and each gets a new IP. A Service gives you one stable address that always forwards to the live pods. Think of it as a receptionist that knows which pods are currently awake.
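A quick way to see the stable-address idea once the Part 1 manifests are applied: the Service keeps its name and virtual IP, while the Endpoints object behind it lists whichever pod IPs are alive right now.

# The Service's stable name and virtual IP
kubectl get svc nginx-static

# The live pod IPs it currently forwards to (updated automatically as pods come and go)
kubectl get endpoints nginx-static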
ConfigMap
A ConfigMap stores configuration data (files, environment variables, key-value pairs) separately from your container image.
We use one here to inject our index.html into nginx without rebuilding the image.
NodePort & Port Mapping
NodePort opens a port on every cluster node (here 30080). In kind, nodes are Docker containers, so we also map container port 30080 to Mac host port 8080 in the kind config. That's the chain that lets you visit localhost:8080.
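You can check each hop of that chain from a terminal; a small sketch, assuming the demo cluster and nginx-static Service from Part 1 are still running:

# Host 8080 -> node-container 30080 (the extraPortMappings entry)
docker ps --filter name=demo-control-plane --format '{{.Ports}}'

# NodePort 30080 -> Service port 80 -> container port 80
kubectl get svc nginx-static

# End to end: the request reaches nginx inside a pod
curl http://localhost:8080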
YAML Manifests
Kubernetes is "declarative": you describe what you want in a YAML file and K8s figures out how to make it happen.
The --- separator lets you put multiple resources (like Deployment + Service) in one file.
Part 3 · Helm & Tools for Defining Kubernetes
Raw YAML works for a few files, but real apps have dozens of manifests across multiple environments. These tools solve that.
Helm: The Package Manager for Kubernetes
Helm is to Kubernetes what apt is to Ubuntu or brew is to macOS. Instead of writing 10 raw YAML files, you write a chart (a parameterized template bundle) and install it with one command. Change the values file, get a different deployment. It's how much of the K8s ecosystem ships software.
Helm in 30 seconds
# Install Helm
brew install helm

# Add a chart repository
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# Install PostgreSQL in one command (no YAML writing!)
helm install my-db bitnami/postgresql \
  --set auth.postgresPassword=secret123 \
  --set primary.persistence.size=10Gi

# List what you've installed
helm list

# Upgrade with new values
helm upgrade my-db bitnami/postgresql --set primary.persistence.size=20Gi

# Rollback instantly if something breaks
helm rollback my-db 1

# Uninstall everything cleanly
helm uninstall my-db
Other Tools to Define K8s
Helm isn't the only option. Each of these solves a slightly different problem.
Kustomize
Overlay-based patching. Built into kubectl. No templating: just base YAML plus per-environment patches.
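A minimal sketch of how that looks on disk, assuming the Part 1 manifest is the base and production only changes the replica count (paths and names are illustrative):

# base/kustomization.yaml
resources:
  - nginx-deployment.yaml

# overlays/prod/kustomization.yaml
resources:
  - ../../base
patches:
  - target:
      kind: Deployment
      name: nginx-static
    patch: |-
      - op: replace
        path: /spec/replicas
        value: 5

# Render or apply the overlay:
#   kubectl kustomize overlays/prod
#   kubectl apply -k overlays/prod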
Argo CD
GitOps controller. Watches a Git repo and auto-syncs any YAML/Helm/Kustomize changes to your cluster.
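Its central object is an Application resource that points at a Git path and a destination cluster; a minimal sketch (the repo URL and path are placeholders):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/myorg/gitops-manifests   # placeholder repo
    targetRevision: main
    path: apps/myapp
  destination:
    server: https://kubernetes.default.svc   # the cluster Argo CD itself runs in
    namespace: myapp
  syncPolicy:
    automated:
      prune: true      # delete resources that disappear from Git
      selfHeal: true   # revert manual drift in the cluster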
Flux CD
Another GitOps engine, CNCF graduated. Native Helm & Kustomize support, multi-tenant friendly.
Terraform
Infrastructure-as-code for everything: cloud VMs, databases, and K8s resources via the Kubernetes provider.
Pulumi
Write K8s manifests in real programming languages (TypeScript, Python, Go, C#). No YAML at all.
Jsonnet / CUE
Data-templating languages favored by Google/Grafana Labs. More type-safe than Helm's Go templates.
Skaffold
Google's dev-loop tool: builds images, deploys to K8s, and hot-reloads on file changes. Great with kind.
Tilt
Microservice dev environment: Tiltfiles define multi-service setups with one command. Live UI included.
Crossplane
Manage cloud infra (RDS, S3, GCS) as K8s resources. "Everything is a CRD" philosophy.
Part 4 · Real Scenarios
Two worked examples: a simple HA static site, and a production-style 3-tier app.
Simple HTML Site with Failover
A static HTML page that keeps serving even if a pod (or whole node) dies.
The failover ingredients
- replicas: 3 runs three pods, so losing one doesn't interrupt service
- podAntiAffinity spreads pods across different nodes
- livenessProbe restarts stuck containers automatically
- readinessProbe keeps traffic away from pods that aren't ready
- PodDisruptionBudget guarantees at least 2 pods stay up during maintenance
nginx-ha.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-ha
spec:
replicas: 3
selector:
matchLabels:
app: nginx-ha
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
maxSurge: 1
template:
metadata:
labels:
app: nginx-ha
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchLabels:
app: nginx-ha
topologyKey: kubernetes.io/hostname
containers:
- name: nginx
image: nginx:alpine
ports:
- containerPort: 80
resources:
requests:
cpu: 50m
memory: 64Mi
limits:
cpu: 200m
memory: 128Mi
livenessProbe:
httpGet:
path: /
port: 80
initialDelaySeconds: 5
periodSeconds: 10
readinessProbe:
httpGet:
path: /
port: 80
initialDelaySeconds: 2
periodSeconds: 5
volumeMounts:
- name: html
mountPath: /usr/share/nginx/html
volumes:
- name: html
configMap:
name: html-content
---
apiVersion: v1
kind: Service
metadata:
name: nginx-ha
spec:
type: NodePort
selector:
app: nginx-ha
ports:
- port: 80
targetPort: 80
nodePort: 30080
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
name: nginx-ha-pdb
spec:
minAvailable: 2
selector:
matchLabels:
app: nginx-ha
Try the failover yourself
# (Re-create the html-content ConfigMap from Part 1 first if you deleted it during cleanup)
kubectl apply -f nginx-ha.yaml

# Verify 3 pods running
kubectl get pods -l app=nginx-ha

# Simulate a pod crash: delete one pod and let the Deployment replace it
kubectl delete pod $(kubectl get pods -l app=nginx-ha -o jsonpath='{.items[0].metadata.name}')

# Watch it recover
kubectl get pods -l app=nginx-ha -w

# Test with continuous curl while killing pods
while true; do curl -s http://localhost:8080 | grep -o "Hello"; sleep 0.5; done
3-Tier App: Frontend + API + Database
A realistic production-style setup: React/Vue frontend, Node/Python API, PostgreSQL database, all wired together.
Architecture at a glance
                 Internet
                     |
                     v
               [ Ingress ]   (TLS termination, routing)
                     |
     +--- /       -> [ Frontend Deployment ]  (nginx + built SPA, 3 replicas)
     |
     +--- /api/*  -> [ API Deployment ]       (Node/Python, 3 replicas)
                            |
                            v
                   [ Postgres Service ]
                            |
                            v
                   [ StatefulSet ]  (1 primary, persistent volume)
                            |
                            v
                   [ PersistentVolumeClaim ]  (10 GiB disk)

Plus: [ Secret (db creds) ]  [ ConfigMap (api config) ]  [ HPA (autoscale) ]
The pieces you'd deploy
Excerpted manifests (the full file is ~300 lines; these are the key parts):
# --- DATABASE (StatefulSet for stable identity + persistent disk) ---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: postgres
spec:
serviceName: postgres
replicas: 1
selector:
matchLabels: { app: postgres }
template:
metadata:
labels: { app: postgres }
spec:
containers:
- name: postgres
image: postgres:16-alpine
ports:
- containerPort: 5432
env:
- name: POSTGRES_DB
value: appdb
- name: POSTGRES_USER
valueFrom:
secretKeyRef: { name: db-creds, key: username }
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef: { name: db-creds, key: password }
volumeMounts:
- name: data
mountPath: /var/lib/postgresql/data
volumeClaimTemplates:
- metadata: { name: data }
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests: { storage: 10Gi }
---
# --- API Deployment ---
apiVersion: apps/v1
kind: Deployment
metadata:
name: api
spec:
replicas: 3
selector:
matchLabels: { app: api }
template:
metadata:
labels: { app: api }
spec:
containers:
- name: api
image: myorg/api:v1.2.0
ports:
- containerPort: 3000
env:
- name: DATABASE_URL
value: "postgres://$(DB_USER):$(DB_PASS)@postgres:5432/appdb"
- name: DB_USER
valueFrom:
secretKeyRef: { name: db-creds, key: username }
- name: DB_PASS
valueFrom:
secretKeyRef: { name: db-creds, key: password }
readinessProbe:
httpGet: { path: /health, port: 3000 }
livenessProbe:
httpGet: { path: /health, port: 3000 }
initialDelaySeconds: 15
---
# --- FRONTEND Deployment ---
apiVersion: apps/v1
kind: Deployment
metadata:
name: frontend
spec:
replicas: 3
selector:
matchLabels: { app: frontend }
template:
metadata:
labels: { app: frontend }
spec:
containers:
- name: web
image: myorg/frontend:v1.2.0
ports:
- containerPort: 80
---
# --- INGRESS (routes / to frontend, /api/* to api) ---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: app-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: myapp.local
http:
paths:
- path: /api
pathType: Prefix
backend:
service:
name: api
port: { number: 3000 }
- path: /
pathType: Prefix
backend:
service:
name: frontend
port: { number: 80 }
---
# --- HPA autoscaling for API ---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: api-hpa
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: api
minReplicas: 3
maxReplicas: 20
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 70
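Both the API and Postgres manifests above read credentials from a db-creds Secret that isn't part of the excerpt; one way to create it before applying the file (the values are examples):

kubectl create secret generic db-creds \
  --from-literal=username=appuser \
  --from-literal=password='change-me'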
The Helm version (much shorter)
# Instead of 300 lines of YAML, use a Helm chart:
helm repo add bitnami https://charts.bitnami.com/bitnami

# Install Postgres
helm install db bitnami/postgresql \
  --set auth.database=appdb \
  --set primary.persistence.size=10Gi

# Install your own chart
helm install myapp ./charts/myapp \
  --values values-prod.yaml \
  --set api.image.tag=v1.2.0 \
  --set frontend.image.tag=v1.2.0
Part 6 · Observability, Security & Production Readiness
Deploying an app is step one. Keeping it healthy, secure, and observable is where real operations begin.
"You can't fix what you can't see"
Observability is the trio of metrics (numbers over time), logs (what happened), and traces (how a request flowed). Security is RBAC, NetworkPolicies, and Secrets. Production-readiness is resource limits, autoscaling, backups, and health checks. This section covers all of it.
Observability Stack
Prometheus
The de-facto K8s metrics database. Pull-based: it scrapes /metrics endpoints from your pods every 15s and stores time-series data.
Grafana
Visualization layer on top of Prometheus/Loki/etc. Dashboards, alerts, annotations. The "face" of most observability stacks.
Loki
"Prometheus for logs." Indexes only labels, not content โ so it's dramatically cheaper than Elasticsearch.
OpenTelemetry
Vendor-neutral standard for metrics, logs, and traces. Instrument once, send anywhere (Jaeger, Tempo, Datadog, etc).
Jaeger / Tempo
Distributed tracing backends. See how a single request flows across 20 microservices and where it slowed down.
Alertmanager
Routes Prometheus alerts to Slack, PagerDuty, email, etc. Groups, deduplicates, and silences alerts intelligently.
Install the full stack in your kind cluster
The kube-prometheus-stack Helm chart gives you Prometheus, Grafana, Alertmanager, node-exporter, and a ton of pre-built dashboards in one command.
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

# Install everything in a monitoring namespace
helm install kps prometheus-community/kube-prometheus-stack \
  --namespace monitoring \
  --create-namespace

# Port-forward Grafana (default: admin / prom-operator)
kubectl port-forward -n monitoring svc/kps-grafana 3000:80

# Port-forward Prometheus
kubectl port-forward -n monitoring svc/kps-kube-prometheus-stack-prometheus 9090:9090

# Add Loki for logs
helm repo add grafana https://grafana.github.io/helm-charts
helm install loki grafana/loki-stack --namespace monitoring
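Once the stack is up, Prometheus discovers new scrape targets through ServiceMonitor objects rather than config-file edits. A sketch for scraping the Part 4 API Service, assuming it exposes /metrics on a port named http and that the chart's default release-label selector is in effect:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: api
  namespace: monitoring
  labels:
    release: kps               # must match the kube-prometheus-stack release selector
spec:
  namespaceSelector:
    matchNames: ["default"]    # where the api Service lives
  selector:
    matchLabels:
      app: api
  endpoints:
    - port: http               # assumes the Service names this port "http"
      interval: 30s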
Security Basics
RBAC (Role-Based Access Control)
Controls who can do what in your cluster. Users, groups, and ServiceAccounts get bound to Roles (namespaced) or ClusterRoles (cluster-wide). Principle of least privilege.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: pod-reader
namespace: default
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list", "watch"]
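A Role grants nothing by itself; it has to be bound to a user, group, or ServiceAccount. A companion RoleBinding that gives the pod-reader Role to a hypothetical ci-bot ServiceAccount:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
  - kind: ServiceAccount
    name: ci-bot          # hypothetical ServiceAccount
    namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io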
NetworkPolicy
Pod-level firewall rules. By default, every pod can talk to every other pod; NetworkPolicies let you say "only the API pods can reach the DB pod."
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: db-allow-api
spec:
podSelector:
matchLabels: { app: postgres }
ingress:
- from:
- podSelector:
matchLabels: { app: api }
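NetworkPolicies are allow-lists: the policy above locks down the postgres pods, but every unselected pod in the namespace stays wide open. A common companion is a namespace-wide default-deny for ingress (note that kind's default CNI doesn't enforce NetworkPolicies; install a CNI like Calico or Cilium to see the effect):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}      # empty selector = every pod in the namespace
  policyTypes:
    - Ingress          # no ingress rules listed, so all inbound traffic is blocked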
Secrets & External Secrets
K8s Secrets store passwords/tokens (base64-encoded, not encrypted by default!). In production, pair with Sealed Secrets, External Secrets Operator, or HashiCorp Vault.
kubectl create secret generic db-creds \
  --from-literal=username=admin \
  --from-literal=password='s3cret!'

# Better: External Secrets Operator pulls from Vault/AWS SM
kubectl apply -f external-secret.yaml
Falco & OPA Gatekeeper
Runtime security (Falco detects suspicious syscalls) and policy enforcement (Gatekeeper blocks non-compliant resources at admission time). CNCF graduated projects.
# Install Falco for runtime threat detection
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm install falco falcosecurity/falco \
  --namespace falco --create-namespace

# Install OPA Gatekeeper for admission policies
helm repo add gatekeeper https://open-policy-agent.github.io/gatekeeper/charts
helm install gatekeeper gatekeeper/gatekeeper \
  --namespace gatekeeper-system --create-namespace
Production Readiness Checklist
Reliability
- Resource requests & limits on every container (no noisy neighbors)
- Liveness & readiness probes on every pod
- PodDisruptionBudget to survive node drains
- HPA (Horizontal Pod Autoscaler) for traffic spikes
- Multi-AZ node pools for disaster recovery
- Backups (Velero) for cluster state + PVCs
Security & Ops
- Non-root containers + read-only root filesystem (see the snippet after this list)
- Image scanning (Trivy, Snyk) in CI pipeline
- NetworkPolicies to limit pod-to-pod traffic
- RBAC with least-privilege ServiceAccounts
- Secrets from Vault/External Secrets, not Git
- Centralized logging + metrics + alerts
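To make the resource-limits and non-root items concrete, here is what they look like on a single container spec; a sketch with placeholder values, not a tuned recommendation:

containers:
  - name: api
    image: myorg/api:v1.2.0
    securityContext:
      runAsNonRoot: true              # refuse to start if the image would run as root
      runAsUser: 10001
      readOnlyRootFilesystem: true    # the container can't write to its own filesystem
      allowPrivilegeEscalation: false
    resources:
      requests:                       # what the scheduler reserves for this container
        cpu: 100m
        memory: 128Mi
      limits:                         # hard ceiling before throttling / OOM-kill
        cpu: 500m
        memory: 256Mi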
Part 7 · Rancher: Kubernetes Management Platform
Once you have more than one cluster (dev, staging, prod, edge locations...), managing them with just kubectl gets painful. Rancher solves that.
Rancher by SUSE
Rancher is an open-source multi-cluster Kubernetes management platform. Think of it as "mission control" for all your K8s clusters, whether they're in AWS, GCP, Azure, on-prem VMware, bare metal, or edge devices. One web UI, one auth system, one policy layer, all your clusters.
What Rancher Actually Does
Multi-Cluster Management
Import existing clusters (EKS, GKE, AKS, kind, k3s, anything) or provision new ones. Single pane of glass for all of them.
Centralized Auth & RBAC
Plug in Active Directory, LDAP, SAML, GitHub, or Okta once; users get consistent RBAC across every cluster.
App Catalog
Curated Helm chart library with a one-click install UI. Internal teams can publish private charts for their apps.
Fleet (GitOps at Scale)
Rancher's built-in GitOps engine. Push a change to Git, Fleet rolls it out to 100+ clusters with group targeting.
Built-in Monitoring
One-click install of Prometheus + Grafana + Alertmanager, pre-configured and integrated with Rancher's UI.
CLI + Web UI + API
kubectl still works perfectly. Rancher UI is for visibility and day-2 ops. Full REST API for automation.
The Rancher Family of Projects
SUSE (who acquired Rancher Labs in 2020) maintains several related projects that often get confused. Here's the map:
Rancher (Manager)
The multi-cluster management platform itself. Runs on any K8s cluster and manages all the others.
k3s
Lightweight certified K8s distribution. A single 60MB binary. Runs on Raspberry Pi, IoT, edge, and embedded systems.
RKE2 (Rancher Kubernetes Engine 2)
Security-focused, FIPS-compliant K8s distribution for government/regulated environments. Successor to original RKE.
Fleet
GitOps engine built into Rancher, but also usable standalone. Designed to manage 1,000,000+ clusters from one Git repo.
Longhorn
Cloud-native distributed block storage for K8s. Snapshots, backups, and replication, all via CRDs and a slick UI.
NeuVector
Container security platform: runtime protection, vulnerability scanning, compliance, zero-trust network segmentation.
Install Rancher in your kind cluster
You can run Rancher locally to try it out. Here's the fastest path using Helm:
# Add the Rancher Helm repo
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
helm repo update

# cert-manager is required (Rancher uses it for TLS)
helm repo add jetstack https://charts.jetstack.io
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --set installCRDs=true

# Create the namespace for Rancher
kubectl create namespace cattle-system

# Install Rancher (self-signed cert for local use)
helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=rancher.localhost \
  --set replicas=1 \
  --set bootstrapPassword=admin

# Wait for it to come up
kubectl -n cattle-system rollout status deploy/rancher

# Port-forward and open in browser
kubectl port-forward -n cattle-system svc/rancher 8443:443
# Then visit: https://rancher.localhost:8443
Alternative: single-container Docker install (fastest for a quick look):
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  --privileged \
  rancher/rancher:latest

# Then browse to https://localhost and grab the bootstrap password:
docker logs <container-id> 2>&1 | grep "Bootstrap Password:"
Real-World Use Cases for Rancher
Retail & Edge Computing
Companies like Walmart and Lowe's run K8s clusters in every store (point-of-sale, inventory, signage). Rancher + k3s manages thousands of these edge clusters from HQ with Fleet pushing updates overnight.
Regulated Enterprises
Banks and insurance firms use Rancher + RKE2 on-prem because they can't (or won't) put data in public clouds. Rancher gives them a cloud-like UX on bare metal.
Healthcare & Life Sciences
Hospital systems run HIPAA-compliant workloads across on-prem and cloud. Rancher's unified RBAC and audit logging simplify compliance reporting across heterogeneous clusters.
Platform Engineering Teams
Internal platform teams at mid-to-large companies use Rancher as their "internal developer portal." Dev teams get project-scoped namespaces with self-service access via the Rancher UI.
Industrial IoT
Factories, wind farms, oil rigs. k3s on the ruggedized hardware, Rancher back at HQ. Deutsche Telekom, Siemens, and various manufacturers use this pattern.
Multi-Cloud Strategy
Companies hedging against cloud vendor lock-in run workloads across AWS, GCP, and Azure simultaneously. Rancher abstracts the differences so developers don't need to care which cloud a pod lands on.
Rancher vs. Alternatives
| Tool | Multi-Cluster | Open Source | Best For |
|---|---|---|---|
| Rancher | Yes (native) | Yes (Apache 2.0) | Hybrid/multi-cloud, edge, on-prem |
| OpenShift | Yes (via ACM) | Partial (open core) | Red Hat enterprise shops |
| Google Anthos | Yes (native) | No (commercial) | GCP-centric multi-cloud |
| VMware Tanzu | Yes (via TMC) | No (commercial) | VMware vSphere environments |
| Portainer | Yes | Partial (open core) | Smaller teams, Docker + K8s |
| Lens IDE | Yes (desktop) | Yes (MIT) | Individual developers |
Part 9 · CI/CD Pipelines & The Complete Picture
How code goes from a developer's laptop to running pods in production, automatically.
From git push to production pods
CI/CD (Continuous Integration / Continuous Delivery) is the bridge between "code on a laptop" and "running in the cluster." In Kubernetes-world, there are two main patterns: push-based CI/CD (pipeline runs kubectl/Helm to deploy) and GitOps (pipeline commits to a Git repo; a controller in the cluster syncs from Git). GitOps has become the dominant pattern for production K8s.
The Big Picture: How Everything Fits Together
Every tool in this tutorial (Parts 1–8) has a place in the end-to-end flow: CI builds and tests the code, a registry stores the image, manifests or Helm charts describe the desired state, a GitOps controller (Argo CD, Flux, or Fleet) syncs it to the cluster, and the observability stack tells you whether the rollout is healthy.
CI/CD Tools for Kubernetes
GitHub Actions
YAML workflows in .github/workflows/. Free for public repos, generous free tier for private. Huge marketplace of pre-built actions.
GitLab CI/CD
Built into GitLab. One .gitlab-ci.yml file. Native K8s integration, plus an Auto DevOps feature that can infer pipelines for you.
Tekton
K8s-native CI/CD. Every step is a pod. Tasks and Pipelines are CRDs. Governed by the Continuous Delivery Foundation. The engine behind OpenShift Pipelines and Jenkins X.
Jenkins X
Opinionated cloud-native successor to Jenkins, built on Tekton + Argo CD. Full GitOps workflow out of the box.
Argo Workflows
K8s-native workflow engine. Sibling to Argo CD. Great for CI pipelines, ML training, data processing, and anything else DAG-shaped.
CircleCI
SaaS CI known for speed and developer experience. First-class K8s deployment support via orbs (reusable config packages).
Drone CI
Simple, container-native CI. Every step runs in a Docker container. Self-hostable and lightweight. Owned by Harness now.
Buildkite
Hybrid: SaaS control plane + self-hosted agents. Popular at Shopify, Airbnb, Slack for running builds on their own infrastructure.
Harness
Commercial continuous delivery platform with strong progressive delivery, rollback automation, and cloud cost management.
Real Pipeline Examples
GitHub Actions: Full Build → Push → GitOps Update
.github/workflows/deploy.yml
name: Build and Deploy
on:
push:
branches: [main]
env:
REGISTRY: ghcr.io
IMAGE_NAME: ${{ github.repository }}
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: '20'
- run: npm ci
- run: npm test
build-and-push:
needs: test
runs-on: ubuntu-latest
outputs:
version: ${{ steps.meta.outputs.version }}
permissions:
contents: read
packages: write
steps:
- uses: actions/checkout@v4
- uses: docker/setup-buildx-action@v3
- uses: docker/login-action@v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Extract version
id: meta
run: echo "version=$(git rev-parse --short HEAD)" >> $GITHUB_OUTPUT
- uses: docker/build-push-action@v5
with:
context: .
push: true
tags: |
${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ steps.meta.outputs.version }}
${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:latest
- name: Security scan with Trivy (after the image exists in the registry)
uses: aquasecurity/trivy-action@master
with:
image-ref: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ steps.meta.outputs.version }}
exit-code: '1'
severity: 'CRITICAL,HIGH'
update-gitops:
needs: build-and-push
runs-on: ubuntu-latest
steps:
- name: Checkout GitOps repo
uses: actions/checkout@v4
with:
repository: myorg/gitops-manifests
token: ${{ secrets.GITOPS_TOKEN }}
- name: Update image tag in Helm values
run: |
sed -i "s|tag: .*|tag: ${{ needs.build-and-push.outputs.version }}|" \
apps/myapp/values-prod.yaml
- name: Commit and push
run: |
git config user.name "ci-bot"
git config user.email "[email protected]"
git add .
git commit -m "chore: bump myapp to ${{ needs.build-and-push.outputs.version }}"
git push
# Argo CD will now sync this change to the cluster automatically
Tekton: K8s-Native Pipeline
Tekton runs inside your K8s cluster; each Task executes as a pod.
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
name: build-and-deploy
spec:
params:
- name: repo-url
- name: image-name
workspaces:
- name: shared-data
tasks:
- name: fetch-source
taskRef:
name: git-clone
workspaces:
- name: output
workspace: shared-data
params:
- name: url
value: $(params.repo-url)
- name: run-tests
runAfter: [fetch-source]
taskRef:
name: npm-test
workspaces:
- name: source
workspace: shared-data
- name: build-image
runAfter: [run-tests]
taskRef:
name: kaniko # builds images without Docker daemon
params:
- name: IMAGE
value: $(params.image-name)
workspaces:
- name: source
workspace: shared-data
- name: deploy
runAfter: [build-image]
taskRef:
name: kubernetes-actions
params:
- name: script
value: |
kubectl set image deployment/myapp \
myapp=$(params.image-name)
---
# Trigger it with:
# tkn pipeline start build-and-deploy \
# -p repo-url=https://github.com/me/app \
# -p image-name=ghcr.io/me/app:v1 \
# -w name=shared-data,claimName=ws-pvc
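Instead of the tkn CLI, the same run can be expressed declaratively as a PipelineRun object; a sketch, assuming a PersistentVolumeClaim named ws-pvc already exists for the shared workspace:

apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  generateName: build-and-deploy-run-
spec:
  pipelineRef:
    name: build-and-deploy
  params:
    - name: repo-url
      value: https://github.com/me/app
    - name: image-name
      value: ghcr.io/me/app:v1
  workspaces:
    - name: shared-data
      persistentVolumeClaim:
        claimName: ws-pvc   # assumed pre-existing PVC

Submit it with kubectl create -f (not apply), since generateName produces a fresh run name each time.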
Jenkins X: Preview Environments per PR
Jenkins X auto-generates a preview environment for every pull request. jenkins-x.yml:
buildPack: none
pipelineConfig:
pipelines:
pullRequest:
pipeline:
stages:
- name: ci
steps:
- name: build
image: golang:1.21
command: go build ./...
- name: test
command: go test ./...
- name: build-image
command: /kaniko/executor --dockerfile=Dockerfile
release:
pipeline:
stages:
- name: release
steps:
- name: promote
# Jenkins X auto-promotes to staging
# then waits for manual approval to prod
command: jx promote --version $VERSION --env staging
Every PR gets its own namespace with a live URL to click and test; SREs call it "PR-driven development."
GitLab CI: Multi-stage with K8s Deploy
stages:
- test
- build
- scan
- deploy
variables:
IMAGE: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
test:
stage: test
image: node:20
script:
- npm ci
- npm test
build:
stage: build
image: docker:24
services: [docker:24-dind]
script:
- docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
- docker build -t $IMAGE .
- docker push $IMAGE
scan:
stage: scan
image: aquasec/trivy
script:
- trivy image --exit-code 1 --severity HIGH,CRITICAL $IMAGE
deploy-prod:
stage: deploy
image: bitnami/kubectl
only: [main]
environment:
name: production
url: https://myapp.example.com
script:
- kubectl config use-context myorg/gitops:prod
- kubectl set image deployment/myapp myapp=$IMAGE
- kubectl rollout status deployment/myapp
CI/CD Patterns Worth Knowing
Blue/Green Deployment
Run two identical environments. "Green" serves traffic; deploy new version to "Blue"; flip the Service selector. Instant rollback if something breaks.
Tools: Argo Rollouts, Flagger, manual kubectl patch
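The "flip the Service selector" step can literally be a single kubectl patch, assuming the two Deployments label their pods with a version label such as blue and green:

# Point the Service at the blue pods; patch back to green for an instant rollback
kubectl patch service myapp \
  -p '{"spec":{"selector":{"app":"myapp","version":"blue"}}}'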
Canary Deployment
Send 5% of traffic to the new version. Monitor metrics. If healthy, gradually shift to 25%, 50%, 100%. If errors spike, auto-rollback.
Tools: Flagger, Argo Rollouts, Istio + Prometheus
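With Argo Rollouts the traffic-shifting schedule lives in the manifest itself. A trimmed sketch of a Rollout's canary steps; everything else looks like a normal Deployment spec, and the names and image are placeholders:

apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: api
spec:
  replicas: 5
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: myorg/api:v1.2.0   # placeholder image
  strategy:
    canary:
      steps:
        - setWeight: 5              # send 5% of traffic to the new version
        - pause: { duration: 5m }
        - setWeight: 25
        - pause: { duration: 5m }
        - setWeight: 50
        - pause: {}                 # wait here until manually promoted to 100%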
Feature Flags
Deploy new code dark; toggle it on for specific users via a flag service. Decouples "deploy" from "release."
Tools: LaunchDarkly, Flagsmith, Unleash, OpenFeature
Preview Environments
Every pull request gets its own namespace with the new code running. Reviewers click a URL to test before merging.
Tools: Jenkins X, vcluster, Okteto, Gitpod
Image Signing & Provenance
Sign images cryptographically in CI; verify signatures at admission time in K8s. Supply-chain security.
Tools: Sigstore/cosign, Notary, SLSA framework
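The cosign half of that workflow is two commands in CI plus a verification step before deploy; a minimal keyed-signing sketch (the image name and key files are placeholders):

# One-time: generate a signing key pair (cosign.key / cosign.pub)
cosign generate-key-pair

# In CI, after the image is pushed
cosign sign --key cosign.key ghcr.io/me/app:v1

# At deploy time (or in an admission policy), verify the signature
cosign verify --key cosign.pub ghcr.io/me/app:v1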
Progressive Delivery + SLO Gates
Automatically halt a rollout if error rate or latency breaches defined SLOs. The safety net for canary deploys.
Tools: Flagger + Prometheus, Argo Rollouts + Kayenta
Part 8 · Official Documentation
Helm Docs
Official Helm documentation and guides.
Artifact Hub
Browse 10,000+ Helm charts ready to install.
Kustomize
Built-in kubectl overlay tool.
Argo CD
Declarative GitOps for Kubernetes.
Kubernetes Docs
The official K8s documentation home.
kind Quick Start
Install and use kind for local clusters.
Docker Desktop (Mac)
Install guide for macOS.
kubectl Cheat Sheet
Essential kubectl commands.
nginx Docker Image
Official nginx image on Docker Hub.
Homebrew
macOS package manager.
Rancher Docs
Multi-cluster K8s management platform.
k3s
Lightweight K8s for edge and IoT.
Prometheus
Metrics and monitoring for K8s.
Grafana
Dashboards, alerts, and visualization.
Longhorn
Cloud-native storage for K8s (SUSE).