You probably don’t need a serverless database.
I know. DynamoDB is cool. PlanetScale just launched and the developer experience looks great. Fauna promises global consistency over HTTP. Aurora Serverless scales to zero. It’s a good time to be a database vendor.
But I talk to teams every week, and most of them have the same setup: a web app with moderate traffic, a handful of services, and access patterns that are still changing. For that setup, which describes the vast majority of projects, Postgres with PgBouncer in front of it is the right choice. It has been for years.
The serverless database pitch
The promise is simple. No connection management headaches. Scales up and down automatically. Pay for what you use. No patching, no failover to think about.
In practice, every one of these has caveats.
“Scales automatically” means Aurora Serverless v1 scales in discrete capacity-unit steps, with noticeable resume latency when the cluster wakes from auto-pause. I watched a demo environment take 25 seconds to respond to the first query after an idle period. That’s not a database. That’s a nap.
“No connection management” means DynamoDB, which works great – until you realize you need to design your data model entirely around access patterns you haven’t fully figured out yet. Good luck refactoring that later.
“Pay for what you use” means unpredictable costs. I’ve seen DynamoDB bills spike 5x in a month because of a new feature that hit a secondary index harder than expected. With Postgres on a fixed instance, your bill is your bill.
When they actually make sense
DynamoDB is genuinely excellent if your access patterns are known and stable. Key-value lookups at scale, session storage, event logs with predictable queries. It’s fast, it’s cheap per-request, and the operational story is unbeatable. But you need to know your query patterns before you start. If your product is still evolving, DynamoDB’s rigidity becomes a liability.
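To make the "design around access patterns" point concrete, here's a minimal sketch of single-table key design for the session-storage case, with hypothetical key conventions and field names (nothing here is DynamoDB-specific API, just the shape of the item you'd write):

```python
from datetime import datetime, timezone

# Hypothetical single-table design for session storage: every query you
# will ever run must be answerable from the partition key (PK) and sort
# key (SK) alone, so the access pattern gets encoded into the keys up front.

def session_item(user_id: str, session_id: str, created_at: datetime) -> dict:
    """Build an item keyed for one committed access pattern:
    'list all sessions for a user, in creation order'."""
    return {
        "PK": f"USER#{user_id}",
        "SK": f"SESSION#{created_at.isoformat()}#{session_id}",
        "session_id": session_id,
        "created_at": created_at.isoformat(),
    }

item = session_item("42", "abc123", datetime(2021, 6, 1, tzinfo=timezone.utc))
# A query on PK = "USER#42" with SK beginning "SESSION#" returns the
# user's sessions sorted by creation time. Any pattern you didn't plan
# for (say, "look up a session by id alone") needs a new index or a
# re-keyed table. That's the rigidity.
```

If your product is still deciding what a "session" even is, committing to keys like these is the liability.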
Fauna is interesting if you need multi-region writes with strong consistency and you’re willing to learn a new query language. That’s a narrow use case. Most teams don’t need global writes. They need a database that works.
PlanetScale has a genuinely good developer workflow. Schema branching and non-locking migrations are real improvements over raw MySQL. But it’s in beta, it’s built on Vitess (so not all MySQL features work), and you still have connection limits.
The Postgres argument
Postgres handles relational queries, JSON documents, full-text search, and geospatial data. It has battle-tested replication. The ecosystem is enormous. The tooling is mature. Every cloud provider offers a managed version.
For serverless compute specifically, the connection problem is real. Lambda functions spinning up hundreds of connections will kill a Postgres instance. But PgBouncer or RDS Proxy solve this. It’s one more component, but it’s a well-understood one.
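For reference, here's roughly what that component looks like: a minimal PgBouncer config in transaction-pooling mode, with placeholder hostnames and a hypothetical appdb database name. The exact values are illustrative, not a recommendation:

```ini
[databases]
; route clients through the pooler to the real Postgres host
appdb = host=db.internal port=5432 dbname=appdb

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
; transaction pooling: a server connection is held only for the
; duration of a transaction, so hundreds of short-lived Lambda
; clients can share a few dozen real connections
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 20
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
```

One caveat worth knowing: transaction pooling breaks session-level features like session-scoped prepared statements, SET, and advisory locks, since consecutive transactions from one client may land on different server connections.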
The thing about Postgres is that it’s boring. And boring is underrated. You can find answers to Postgres questions on Stack Overflow from 2009 that are still correct. Try that with Fauna’s FQL.
My advice
If you’re building a new project and you’re not sure about your access patterns: Postgres. If you’re running serverless functions and worried about connections: Postgres with a connection pooler. If you have a specific, well-understood, high-scale workload with stable access patterns: okay, look at DynamoDB.
Everything else is probably premature optimization disguised as architecture.