I’m going to say something that might get me uninvited from DevOps conferences: you probably don’t need Kubernetes.
Or GitOps. Or ArgoCD. Or most of the tooling that’s become the “standard” for modern software deployment.
This isn’t a hot take for clicks. It’s a conclusion I’ve reached after years of working with both enterprise-scale infrastructure and small-team deployments. The tooling that makes sense for a 200-person engineering org actively harms a solo developer or small startup.
Let me explain.
The Cargo Cult Problem
There’s a pattern I see constantly in the startup world: small teams adopting enterprise tools because that’s what “serious” companies use.
“We need Kubernetes because that’s how you do containers properly.” “We need ArgoCD because GitOps is best practice.” “We need Prometheus and Grafana because you can’t run production without proper observability.”
Here’s the thing: those tools exist to solve problems that emerge at scale. Kubernetes solves the problem of orchestrating thousands of containers across hundreds of machines. GitOps solves the problem of coordinating deployments across multiple teams with proper audit trails. Prometheus solves the problem of monitoring metrics across complex distributed systems.
If you’re a solo developer or a three-person startup, you don’t have those problems. You have different problems. And the enterprise tools don’t solve your problems—they create new ones.
The Real Costs of Complexity
Enterprise tools aren’t free. The cost isn’t primarily money (though there’s often that too); it’s cognitive load and maintenance burden.
Learning curve. Kubernetes has a famously steep learning curve. Pods, deployments, services, ingresses, configmaps, secrets, persistent volumes, Helm charts, operators… that’s weeks or months of learning before you’re productive.
Configuration overhead. A simple application that deploys with docker-compose up might need dozens of YAML files to deploy to Kubernetes. Each file is another surface for mistakes, another thing to maintain, another thing that can drift out of sync.
Debugging complexity. When something goes wrong in a Kubernetes cluster, debugging requires understanding the entire stack: container runtime, network overlay, service mesh, ingress controller, the application itself. When something goes wrong with docker-compose, you check the container logs.
Maintenance burden. Kubernetes clusters need care and feeding. Updates, security patches, node management, resource tuning. That’s time not spent building your product.
For an enterprise with a dedicated platform team, these costs are absorbed. For a solo developer, they’re a tax on everything you do.
What You Actually Need
Here’s my setup for deploying production applications:
```bash
#!/bin/bash
# deploy.sh - the entire deployment pipeline
set -e
TARGET=${1:?usage: ./deploy.sh <host> <version>}   # e.g. production
VERSION=${2:?usage: ./deploy.sh <host> <version>}  # e.g. v1.2.3
# Run tests
./test.sh
# Build (tag with the full registry name so the push below matches)
docker build -t registry.example.com/myapp:$VERSION .
docker tag registry.example.com/myapp:$VERSION registry.example.com/myapp:latest
# Push to registry
docker push registry.example.com/myapp:$VERSION
docker push registry.example.com/myapp:latest
# Deploy
ssh "$TARGET" "cd /app && docker-compose pull && docker-compose up -d"
# Verify
sleep 10
curl -sf https://myapp.example.com/health && echo "Success" || exit 1
# Log
echo "$(date) | $VERSION" >> deployments.log
```
That’s it. The entire CI/CD pipeline. Twenty lines of bash.
And here’s the docker-compose.yml on the server:
```yaml
version: '3'
services:
  app:
    image: registry.example.com/myapp:latest
    ports:
      - "3000:3000"
    environment:
      - DATABASE_URL=postgres://...
    restart: always
```
This setup gives me:
- Consistent environments. Docker handles the “works on my machine” problem.
- One-command deployment. `./deploy.sh` and you’re done.
- Easy rollbacks. Change the image tag in docker-compose.yml, `docker-compose up -d` (a sketch follows this list).
- Understandable debugging. When something breaks, I know exactly where to look.
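Concretely, a rollback can be scripted too. A minimal sketch, assuming the image line shown in the compose file above; rollback.sh itself is illustrative, not part of the setup:

```bash
#!/bin/bash
# rollback.sh - pin the app to a known-good tag and redeploy (illustrative sketch)
set -e
TAG=${1:?usage: ./rollback.sh <known-good-tag>}
# Rewrite the image tag in the server's docker-compose.yml, then restart
ssh production "cd /app \
  && sed -i 's|myapp:.*|myapp:$TAG|' docker-compose.yml \
  && docker-compose pull && docker-compose up -d"
```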
What it doesn’t give me: automatic scaling, rolling deployments across clusters, sophisticated service mesh capabilities, multi-region failover.
I don’t need any of those things. Neither do most startups.
“But What About…”
“What about scaling?”
docker-compose can scale services with docker-compose up --scale app=3. For most applications, a single beefy VPS handles more traffic than you’ll see for years. When you actually need horizontal scaling across multiple machines, Docker Swarm is dramatically simpler than Kubernetes and handles most use cases.
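One wrinkle worth knowing before you reach for `--scale`: a fixed host mapping like "3000:3000" collides across replicas, so you drop the published port and let a reverse proxy (Nginx, Caddy) reach the containers over the compose network. The command itself stays trivial:

```bash
# Run three replicas of the app service on this host
# (assumes "app" publishes no fixed host port; a reverse proxy on the
# compose network load-balances across the replicas)
docker-compose up -d --scale app=3
```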
And here’s the real kicker: if you’re a solo developer or small startup, hitting scale problems is a good problem to have. It means you have users. At that point, you’ll have revenue and can hire someone who actually enjoys managing Kubernetes.
“What about zero-downtime deployments?”
docker-compose supports health checks. Configure them properly and you get zero-downtime deployments. For more sophisticated needs, put a load balancer in front (Nginx, Caddy, or a managed LB) and do blue-green deploys manually. Still simpler than Kubernetes.
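Here’s what “configure them properly” looks like in the compose file. A minimal sketch: the probe command and timings are illustrative, and it assumes curl exists inside the image:

```yaml
services:
  app:
    image: registry.example.com/myapp:latest
    healthcheck:
      # assumes curl is available in the image; swap for wget or a tiny probe if not
      test: ["CMD", "curl", "-sf", "http://localhost:3000/health"]
      interval: 10s
      timeout: 3s
      retries: 3
    restart: always
```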
“What about monitoring?”
For a small application:
```bash
#!/bin/bash
# monitor.sh - run via cron every 5 minutes
if ! curl -sf https://myapp.example.com/health > /dev/null; then
  curl -d "App is down" ntfy.sh/my-alerts
fi
```
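Wiring it up is one crontab line (the /app/monitor.sh path is illustrative):

```bash
# crontab -e
*/5 * * * * /app/monitor.sh
```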
When your health check fails, you get a notification. For metrics, your cloud provider’s basic monitoring (CloudWatch, DigitalOcean Monitoring) handles CPU/memory/disk. Application-level metrics? Log them and grep when you need them.
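And “grep when you need them” is meant literally. Assuming the app appends one line per event to a log file (the format here is hypothetical), an ad-hoc metric is a one-liner:

```bash
# How many signups today? Assumes lines like "2025-01-15T09:12:01 event=signup user=42"
grep "$(date +%F)" /app/logs/app.log | grep -c "event=signup"
```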
Prometheus + Grafana is beautiful. It’s also massive overkill for an application with three endpoints and fifty users.
“What about infrastructure as code?”
Your docker-compose.yml is infrastructure as code. It’s version controlled. It defines your infrastructure. You can deploy from it reproducibly.
Terraform is great when you have complex cloud resources to manage—VPCs, IAM policies, multiple services with dependencies. For “a VPS running my application,” the marginal benefit over “I know how to set up a server” is negative.
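For scale, here’s roughly what “I know how to set up a server” amounts to. A sketch, assuming a fresh Ubuntu VPS and Docker’s official convenience script; hardening, users, and firewall rules elided:

```bash
#!/bin/bash
# provision.sh - one-time setup for a fresh VPS (illustrative sketch)
set -e
# Install Docker; the convenience script also installs the compose plugin,
# where the command is spelled "docker compose" rather than "docker-compose"
curl -fsSL https://get.docker.com | sh
# Put the app in place and start it
mkdir -p /app
cp docker-compose.yml /app/
cd /app && docker compose pull && docker compose up -d
```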
Right-Sized Tooling
The principle I operate by: match your tooling complexity to your actual scale.
| If you are… | Consider… | Instead of… |
|---|---|---|
| Solo dev, <10k users | docker-compose + bash scripts | Kubernetes, GitOps |
| Small team, <100k users | Docker Swarm or managed containers | Self-hosted Kubernetes |
| Large team, serious scale | Yes, actually Kubernetes | - |
The threshold for “actually need Kubernetes” is higher than most people think. Plenty of substantial businesses run on simpler infrastructure.
Basecamp famously runs on a small number of well-tuned servers. Craigslist handled enormous traffic with minimal infrastructure complexity. The idea that you need enterprise tooling from day one is a myth perpetuated by people selling enterprise tooling.
The AI-Assisted Development Angle
Here’s something that’s become increasingly relevant: AI works better with simple tooling.
When I direct my AI agents to deploy an application, I don’t want them navigating the complexity of Kubernetes manifests, Helm charts, and GitOps workflows. I want them running a bash script that does something obvious.
AI can understand:
```bash
./deploy.sh production v1.2.3
```
AI struggles with:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/...
    targetRevision: HEAD
    path: k8s/overlays/production
  destination:
    server: https://kubernetes.default.svc
    namespace: myapp
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```
The cognitive overhead that makes Kubernetes hard for humans also makes it harder for AI to work with reliably. Simple tools that do obvious things are more robust for both human and AI operators.
When You Actually Need the Complexity
I’m not saying Kubernetes is bad. It’s excellent at what it does. You should consider it when:
- You have multiple teams deploying independently and need strong isolation
- You’re running workloads that genuinely require dynamic scaling
- You have compliance requirements that demand specific deployment patterns
- You have a dedicated platform team to manage it
But notice what’s not on that list: “You want to be taken seriously as a tech company.”
Tool choice should be driven by actual problems, not aspirational identity. Using Kubernetes when docker-compose would work doesn’t make you a better engineer—it makes you someone who chose unnecessary complexity.
The Liberating Reality
Here’s what I’ve found liberating about embracing simple infrastructure:
Faster iteration. When deployment is one script that takes 30 seconds, you deploy more often. When deployment requires updating manifests, waiting for sync, and debugging why the rollout stalled, you deploy less often.
Easier debugging. When the stack is simple, problems have obvious causes. Container won’t start? Check the logs. Service unreachable? Check the port mapping. Database connection failing? Check the environment variable.
Lower cognitive load. I can hold my entire infrastructure in my head. That frees mental cycles for the actual product work.
More time building. The time I don’t spend managing Kubernetes is time I spend building features for users.
This isn’t about being anti-technology or refusing to learn. It’s about choosing the right tool for the job—and recognising that “right” depends on your context, not on what FAANG companies use.
Closing Thought
If you’re a solo developer or small startup running Kubernetes, I’m not telling you to tear it all down. Sunk costs are sunk.
But if you’re starting something new, consider whether docker-compose and bash scripts might be enough. They usually are.
The best infrastructure is the infrastructure you understand completely, that you can debug at 2am, and that stays out of your way while you build the thing that actually matters.
That’s rarely Kubernetes.
This is Part 4 of a series on AI-assisted software development. Previously: The 4 Levels of AI-Assisted Development. Next: Field Notes: The WhatsApp Listener Project.