Kubernetes in 2026. What changed since the last post here.
The last serious Kubernetes post on this blog was in January 2018 — a CI/CD piece with GitLab and Helm. A lot has changed since. This is not a comprehensive changelog, just the short list of things that changed how the job actually feels.
The control plane is not your problem
In 2018 you set up Kubernetes by running kubeadm on three VMs, hoping the etcd backup script worked, and then writing a runbook for the day a master went down. Today, on every cloud I touch, the control plane is a managed service. EKS, GKE Autopilot, AKS, DOKS, Linode LKE — you ask for a cluster, fifteen minutes later you have a cluster. The vendor handles the masters and etcd.
If you are still running the control plane yourself, you almost certainly have a reason — air-gapped environment, sovereignty, your own metal. If you don’t have a reason, stop doing it.
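For scale: on EKS, "asking for a cluster" can be a single file. A minimal sketch with eksctl (the name, region, and sizes are placeholders, adjust to taste):

```yaml
# cluster.yaml: a whole cluster, control plane included, as one file.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo           # placeholder
  region: eu-west-1    # placeholder
managedNodeGroups:
  - name: default
    instanceType: m7i.large
    desiredCapacity: 3
```

Run `eksctl create cluster -f cluster.yaml`, go get coffee, come back to a control plane you never have to patch.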
Helm is not the only answer
I used Helm in the 2018 post and Helm is still around (now on v3, no more Tiller, thank goodness). It is fine. But the alternatives are real now:
- Kustomize. Ships with `kubectl` since 1.14. Overlays instead of templates. If you can read YAML, you can read a Kustomize overlay.
- Helmfile / Argo CD app-of-apps / Flux Kustomizations. Whatever shape you want for managing the chart-of-charts problem.
- cdk8s. Define your manifests in TypeScript or Python. Useful if you have repeated patterns that templating handles badly.
For a simple service I default to Kustomize. For something with many parameters and consumers, Helm earns its keep.
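To make the Kustomize point concrete, here is a minimal overlay sketch. It assumes a shared `base/` directory containing a Deployment named `web`; all paths and names are illustrative:

```yaml
# overlays/prod/kustomization.yaml
# Reuses the shared base and patches only what prod changes.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patches:
  - target:
      kind: Deployment
      name: web
    patch: |-
      - op: replace
        path: /spec/replicas
        value: 5
```

`kubectl apply -k overlays/prod` renders and applies it. No template language, no values file, just YAML pointing at YAML.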
GitOps is just how this works now
In 2018 my CI pipeline ran `kubectl apply` from a runner. This worked until I had three clusters and four people, at which point it started to make me nervous.
The standard pattern now is a tool — Argo CD or Flux — that lives inside the cluster and watches a Git repository. You push a commit, the controller reconciles the cluster to match. The cluster is the only thing with credentials to itself; your CI doesn’t need them.
Side effects of doing this that I didn’t expect in 2018:
- The cluster state is auditable from `git log`.
- Rollback is `git revert`.
- The disaster recovery story is "bring up a cluster, point the GitOps controller at the repo, wait."
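What this looks like in practice, as a sketch with Argo CD (the repo URL and paths are placeholders):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: web
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/deploy.git  # placeholder repo
    targetRevision: main
    path: overlays/prod
  destination:
    server: https://kubernetes.default.svc          # this cluster
    namespace: web
  syncPolicy:
    automated:
      prune: true     # delete resources that were removed from Git
      selfHeal: true  # undo manual drift in the cluster
```

Once this exists, "deploying" is a merge to `main`.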
Node autoscaling without Cluster Autoscaler
On AWS, Karpenter replaced Cluster Autoscaler in most of my clusters. The model is different: instead of scaling node groups up and down, Karpenter looks at unschedulable pods and provisions exactly the right instance type. If your workload needs 4 vCPUs and 8 GB, you’ll get a c7i.xlarge (or a Spot equivalent), not a node from a pre-baked pool.
The result is fewer empty nodes, faster scale-up, and a much smaller config file. Other clouds have started shipping similar things; on GKE this is roughly what Autopilot does for you.
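A sketch of the config, assuming Karpenter v1 on AWS with an `EC2NodeClass` named `default` already in place (names and limits are placeholders):

```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: general
spec:
  template:
    spec:
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
      requirements:
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot", "on-demand"]  # prefer Spot, allow on-demand
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
  limits:
    cpu: "200"  # cap on total CPU this pool may provision
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized
```

Note what is absent: no instance type list, no node group sizes. Karpenter picks whatever fits the pending pods within these constraints.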
Ingress is now Gateway API
The Ingress resource is still here and still works, but if you are starting fresh I would use the Gateway API. It separates the roles cleanly: cluster operators manage Gateway resources (think “there is a load balancer here, with this TLS, on this port”), application teams manage HTTPRoute resources (think “my /api path goes to my Service”). This was always how Ingress wanted to be, but you had to express it with annotations.
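A sketch of the split, assuming Gateway API v1; the gateway class, certificate, and Service names are placeholders:

```yaml
# Owned by the platform team: one entry point, TLS terminated here.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: public
  namespace: infra
spec:
  gatewayClassName: example-lb  # placeholder: your controller's class
  listeners:
    - name: https
      port: 443
      protocol: HTTPS
      tls:
        certificateRefs:
          - name: wildcard-cert  # placeholder TLS Secret
      allowedRoutes:
        namespaces:
          from: All
---
# Owned by the app team: route /api on that gateway to my Service.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: api
  namespace: web
spec:
  parentRefs:
    - name: public
      namespace: infra
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /api
      backendRefs:
        - name: api-svc  # placeholder Service
          port: 8080
```

The app team never touches the load balancer; the platform team never touches routes.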
Observability: OTel everywhere
In 2018 you picked one stack — Prometheus, Datadog, New Relic — and instrumented your code for it. In 2026 you instrument for OpenTelemetry and configure where the data goes. The vendors all accept OTLP. You can keep your instrumentation when you move from one to the other.
The OpenTelemetry Collector is the routing layer. It runs as a DaemonSet, scrapes Prometheus targets, ingests OTLP from your apps, batches, transforms, exports. It is the closest thing to a default I would name today.
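A trimmed Collector config sketch, to show the shape; the exporter endpoint is a placeholder for whatever backend you use:

```yaml
receivers:
  otlp:
    protocols:
      grpc: {}
      http: {}
  prometheus:
    config:
      scrape_configs:
        - job_name: apps  # trimmed; real setups use Kubernetes SD here
          static_configs:
            - targets: ["localhost:9090"]
processors:
  batch: {}
exporters:
  otlphttp:
    endpoint: https://otlp.example.com  # placeholder backend
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
    metrics:
      receivers: [otlp, prometheus]
      processors: [batch]
      exporters: [otlphttp]
```

Swap the exporter and nothing upstream changes; that is the whole pitch.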
A few smaller things
- `kubectl` has built-in JSONPath, `--watch`, and many other things. `kubectl get pods -w -o wide` covers a lot of needs that used to require third-party tooling.
- The k9s TUI is genuinely good. If you live in a terminal, install it.
- Secrets in plain YAML are still a bad idea, but `ExternalSecrets` + a real secret manager (AWS Secrets Manager, HashiCorp Vault, GCP Secret Manager) is now an unremarkable setup (see the sketch after this list).
- The default container runtime is `containerd`, not `docker`. You almost never notice, except when you do.
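That secrets setup as a sketch, assuming the External Secrets operator and a `ClusterSecretStore` named `aws` already configured (all names and keys are placeholders):

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-credentials
  namespace: web
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws               # placeholder ClusterSecretStore
    kind: ClusterSecretStore
  target:
    name: db-credentials    # the Kubernetes Secret it creates
  data:
    - secretKey: password
      remoteRef:
        key: prod/db        # placeholder key in the secret manager
```

The real secret value never appears in Git; the operator fetches it and keeps the Secret in sync.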
That is the short list. If your mental model is the 2018 model, you can still navigate; you just look like you’re carrying a flip phone.