Quick take
Serverless wins at low, bursty traffic. Containers win at sustained load. The crossover happens sooner than most people think. I’ve run workloads on both sides of that line and the difference in cost can be 3-5x if you pick wrong.
Everyone’s doing serverless now. Every conference talk. Every blog post. Every startup pitch deck mentions Lambda like it’s a personality trait.
I get it. At Decloud we help companies sort out their cloud infrastructure, and serverless comes up in almost every conversation. Half the time it’s the right call. The other half, someone read a blog post and now their entire API runs on Lambda with 400ms cold starts and a monthly bill that makes no sense.
So here’s the honest breakdown. No hype, no anti-hype. Just the math.
The comparison nobody wants to make
People treat serverless vs containers like a religious debate. It’s not. It’s arithmetic.
| Factor | Serverless (Lambda) | Containers (ECS/Fargate) |
|---|---|---|
| Traffic < 100K req/day | Cheap. Often free tier. | Overkill. Paying for idle. |
| Traffic 100K-1M req/day | Still reasonable. Watch concurrency. | Starting to make sense. |
| Traffic > 1M req/day, steady | Expensive. Very expensive. | Clear winner on cost. |
| Bursty (0 to 10K in seconds) | Handles it natively. | Needs autoscaling config. Lag. |
| Cold start tolerance | 200-800ms typical (JVM: seconds) | Zero. Already running. |
| Max execution time | 15 minutes hard cap | No limit |
| Connection pooling | Painful. Each instance = new connection. | Normal. Pool lives with the process. |
| Deployment complexity | Low per function. High at 50+ functions. | Medium, consistent. |
| Debugging in production | Distributed tracing or suffer | SSH in if desperate. Logs are normal. |
That last row matters more than people admit.
Where serverless genuinely wins
I’ll give credit where it’s due. For certain workloads, serverless is unbeatable.
Event processing. S3 upload triggers a function, function processes the file, done. No server sitting around waiting. This is the original Lambda use case and it’s still the best one. We use this pattern at Decloud for processing customer infrastructure snapshots.
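That trigger pattern fits in a few lines. A minimal sketch in Python — the event shape is the standard S3 notification record, and `process_snapshot` is a hypothetical stand-in for whatever the function actually does with the object:

```python
# Minimal S3-trigger handler sketch. process_snapshot is a placeholder
# for the real work (e.g. fetching the object with boto3 and parsing it).
import urllib.parse

def process_snapshot(bucket, key):
    # Hypothetical: download and process the uploaded object here.
    return f"processed s3://{bucket}/{key}"

def handler(event, context):
    results = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # S3 URL-encodes keys in the event payload; decode before use.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        results.append(process_snapshot(bucket, key))
    return {"processed": len(results), "results": results}
```

No polling loop, no idle process — the function only exists for the duration of the upload event.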
Webhooks and integrations. Glue code between services. Receives a payload, transforms it, passes it along. Runs maybe 200ms. Happens a few thousand times a day. Perfect fit. Running a container for this is like hiring a full-time employee to check the mailbox.
Cron jobs that run under 15 minutes. Cleanup tasks, report generation, health checks. A Lambda on a CloudWatch schedule is simpler than managing a cron server or scheduling containers.
Genuinely unpredictable traffic. If you can’t forecast whether you’ll get 10 requests or 10,000 in the next hour, serverless handles that gracefully. Containers need lead time to scale.
Where serverless falls apart
Here’s the part that gets me uninvited from serverless meetups.
Sustained API traffic. If your API handles steady traffic – say 500+ requests per second, consistently – you’re paying a premium for Lambda that buys you nothing. The per-invocation cost adds up fast. I’ve seen teams cut their compute bill by 60-70% by moving a stable API from Lambda to Fargate. Not a theoretical number. Actual invoices.
Anything that needs database connections. This one drives me crazy. Lambda spins up instances independently. Each one opens its own database connection. You go from 10 concurrent executions to 500 during a traffic spike and suddenly your Postgres is drowning in connections. Yes, RDS Proxy exists now. It helps. It’s also another managed service you’re paying for to solve a problem containers don’t have.
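The usual mitigation, short of paying for RDS Proxy, is to open the connection at module scope so it survives across warm invocations of the same instance. A sketch of the pattern — `make_connection` here is a stand-in for a real driver call like `psycopg2.connect` (an assumption; substitute your driver):

```python
# Per-instance connection reuse: module scope runs once per cold start,
# so warm invocations of the same instance skip the connect cost.
import time

_conn = None  # cached connection, one per Lambda instance

def make_connection():
    # Placeholder for a real driver call, e.g. psycopg2.connect(...)
    # pointed at RDS Proxy rather than the database directly.
    return {"opened_at": time.time()}

def get_conn():
    """Return the cached connection, creating it only on cold start."""
    global _conn
    if _conn is None:
        _conn = make_connection()
    return _conn

def handler(event, context):
    conn = get_conn()  # warm invocations reuse the existing connection
    return {"reused": conn is get_conn()}
```

This caps you at one connection per warm instance — but a spike to 500 concurrent instances is still 500 connections, which is exactly where a container with a real pool (or RDS Proxy in front) wins.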
Latency-sensitive paths. Cold starts. I know, provisioned concurrency exists. But provisioned concurrency is just… running a container with extra steps. You’re paying to keep Lambda instances warm. At that point, what are you even doing?
Complex request processing. If your function needs to do three API calls, a database write, and a cache update, that 200ms function becomes 800ms. You’re paying for all that wall-clock time. A container doing the same work with persistent connections and warm caches does it in 150ms.
The real cost comparison
Let me get specific. Rough numbers for a simple API endpoint, US East, mid-2020 pricing:
1 million requests/day, 200ms average duration, 256MB memory:
- Lambda: ~$250/month (invocations + duration)
- Fargate (2 tasks, 0.5 vCPU, 1GB): ~$60/month
That’s 4x. At 5 million requests/day with the same profile, the gap widens.
10,000 requests/day, bursty, same specs:
- Lambda: ~$3/month
- Fargate (1 task minimum): ~$30/month
Flipped completely. Serverless is 10x cheaper at low volume.
The crossover point for a typical web API sits somewhere around 200K-500K requests per day, depending on duration and memory. Below that, serverless. Above that, containers. This isn’t gospel – measure your own workload – but it’s a reasonable starting point.
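The back-of-envelope math is easy to reproduce for your own numbers. The sketch below uses approximate US East on-demand rates as constants — all of them assumptions, so plug in current prices — and adds a front-door term on each side (API Gateway per-request, a flat load-balancer estimate), since a bare Lambda isn't an API. Real invoices add CloudWatch, data transfer, and any provisioned concurrency on top, which is why measured bills can sit above this floor.

```python
# All pricing constants are assumptions (approximate US East on-demand
# rates); substitute current prices before trusting the output.
LAMBDA_PER_REQUEST = 0.20 / 1_000_000   # $ per invocation
LAMBDA_PER_GB_SECOND = 0.0000166667     # $ per GB-second of duration
APIGW_PER_REQUEST = 3.50 / 1_000_000    # $ per API Gateway request
FARGATE_VCPU_HOUR = 0.04048             # $ per vCPU-hour
FARGATE_GB_HOUR = 0.004445              # $ per GB-hour
ALB_MONTHLY = 20.0                      # flat load-balancer estimate

def lambda_monthly(req_per_day, duration_s, memory_gb, days=30):
    """Compute-plus-front-door cost for a Lambda-backed endpoint."""
    reqs = req_per_day * days
    gb_seconds = reqs * duration_s * memory_gb
    return (reqs * (LAMBDA_PER_REQUEST + APIGW_PER_REQUEST)
            + gb_seconds * LAMBDA_PER_GB_SECOND)

def fargate_monthly(tasks, vcpu_per_task, gb_per_task, days=30):
    """Always-on task cost plus the load balancer."""
    hours = tasks * 24 * days
    per_hour = (vcpu_per_task * FARGATE_VCPU_HOUR
                + gb_per_task * FARGATE_GB_HOUR)
    return hours * per_hour + ALB_MONTHLY

# High volume: 1M req/day, 200ms, 256MB vs 2 tasks (0.5 vCPU, 1GB each)
print(f"Lambda:  ${lambda_monthly(1_000_000, 0.2, 0.25):.2f}/mo")
print(f"Fargate: ${fargate_monthly(2, 0.5, 1.0):.2f}/mo")
# Low volume: 10K req/day flips the ordering
print(f"Lambda:  ${lambda_monthly(10_000, 0.2, 0.25):.2f}/mo")
print(f"Fargate: ${fargate_monthly(1, 0.5, 1.0):.2f}/mo")
```

Run it with your own request counts, durations, and memory sizes; the crossover described above falls out of wherever the two curves meet for your workload.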
The real problem is lock-in
Something nobody talks about enough: at 50+ Lambda functions with API Gateway, Step Functions, SQS triggers, DynamoDB streams, and EventBridge rules, you haven’t built an application. You’ve built an AWS application. Every piece of business logic is coupled to a specific AWS service.
Containers running your own code with standard libraries? Move them to GCP, Azure, your own hardware, whatever. The portability isn’t theoretical. I’ve done it.

At Decloud we see this regularly. Companies come to us wanting to optimize or migrate, and the ones running serverless-heavy architectures have a much harder time. Not impossible. Just harder and more expensive to change.
My actual recommendation
Stop asking “should we use serverless?” and start asking “what does the traffic look like for this specific endpoint?”
- Bursty, low-volume, event-driven? Lambda. Don’t overthink it.
- Steady traffic above a few hundred requests per second? Containers.
- Mixed? Use both. Nobody said you have to pick one.
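Those three bullets compress into a small rule of thumb. The thresholds below are this article's heuristics, not hard limits — an assumption that they transfer to your workload, so measure before committing:

```python
# Rule-of-thumb router built from the bullets above; thresholds are
# heuristics from this article, not hard limits.
def suggest_runtime(req_per_day, steady_rps, bursty):
    if steady_rps >= 300:                  # "a few hundred req/s, steady"
        return "containers"
    if bursty or req_per_day < 200_000:    # below the crossover zone
        return "serverless"
    if req_per_day > 500_000:              # above the crossover zone
        return "containers"
    return "measure both"                  # the 200K-500K gray zone
```

Per endpoint, not per application — a mixed system routes its webhook traffic one way and its steady API traffic the other.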
The teams that do this well treat serverless as a tool, not an architecture. They use it where the math works and containers where it doesn’t. No ideology. Just invoices.
That’s the whole secret. Look at the bill. Do the math. Deploy accordingly.