Quick take
If you have fewer than four teams and your domain boundaries are still shifting, you almost certainly don’t need microservices. You need a clean monolith and the discipline to keep it modular.
The split that cost us three months
At Dropbyke we had a Go monolith that handled bike availability, user accounts, payments, and ride tracking. It was about 40k lines, well-tested, deployed in under two minutes. It worked.
Then we decided to extract payments into its own service. The reasoning sounded right: payments are sensitive, they have different scaling characteristics, and we wanted to isolate failures. Classic microservices pitch.
What actually happened: we spent three months building the extraction. We needed a new deploy pipeline, a message contract between services, retry logic for the network boundary, and a way to keep ride state consistent when the payment service was slow or down. We went from a function call that took microseconds to a network call that introduced latency, partial failures, and a new category of bugs we had never dealt with before.
The team was four engineers. We could have spent those three months shipping features. Instead we shipped infrastructure.
The real cost of splitting early
Microservices solve an organizational problem, not a technical one. When you have multiple teams that need to ship independently without stepping on each other, services aligned to team boundaries are powerful. That’s a real benefit.
But most teams I see adopting microservices don’t have that problem. They have five to ten engineers, a shared codebase, and a product that’s still changing shape weekly. At that stage, splitting into services means:
More deploy pipelines to maintain. Each service needs its own CI, its own monitoring, its own alerting. That’s real maintenance overhead for a small team.
Distributed data pain. A query that was a simple SQL join becomes a cross-service call. A transaction that was ACID becomes an eventually consistent workflow. At Dropbyke, a single “end ride and charge user” operation went from one database transaction to a choreography of events across two services with compensating actions for failure cases. The code tripled in complexity.
Testing gets hard. A monolith test suite runs in one process. Service tests require either mocking everything (which hides real bugs) or running integration environments that need their own care. We went from `go test ./...` taking 30 seconds to an integration suite that took 12 minutes and broke as often for infrastructure reasons as for code reasons.
Debugging gets hard. A stack trace in a monolith tells you exactly what happened. A distributed trace across services requires correlation IDs, centralized logging, and tracing infrastructure. In 2016, that tooling is still immature.
When a monolith is the right call
A monolith isn’t a dirty word. A well-structured monolith with clear package boundaries, explicit interfaces between modules, and owned data per module gives you most of the architectural benefits of services without the operational tax.
At a fintech startup I worked at, we kept the backend as a single deployable for much longer than conventional wisdom suggested. Financial news ingestion, NLP processing, user-facing API, all in one app. The key was strict internal boundaries. The NLP module exposed a clean interface. The API layer never reached into ingestion internals. Data ownership was explicit even though it shared a database.
This let us move fast. A new feature that touched multiple concerns was a single PR, a single deploy, a single rollback if something went wrong. We weren’t coordinating releases across services or debugging network failures between components that used to be function calls.
The modular monolith
The pattern I keep coming back to is the modular monolith. One deployable unit, strict internal boundaries, explicit interfaces between modules. You get the code organization benefits without paying the distributed systems tax.
The key disciplines:
- Each module owns its data. No reaching into another module’s tables.
- Modules communicate through defined interfaces, not by importing each other’s internals.
- Dependencies between modules are visible and intentional.
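The disciplines above can be sketched in a few lines of Go. In a real codebase these would be separate packages (say, a billing package and a rides package); a single file keeps the sketch runnable, and the names are illustrative:

```go
package main

import "fmt"

// Charger is the interface the rides module depends on. It never imports
// the billing module's internals or touches its tables.
type Charger interface {
	Charge(userID string, cents int) error
}

// billing owns its data: nothing outside this module sees the ledger.
type billing struct{ ledger map[string]int }

func (b *billing) Charge(userID string, cents int) error {
	b.ledger[userID] += cents
	return nil
}

// rides depends only on the Charger interface, injected at construction,
// so the dependency is visible and intentional.
type rides struct{ charger Charger }

func (r *rides) EndRide(userID string, cents int) error {
	return r.charger.Charge(userID, cents)
}

func main() {
	b := &billing{ledger: map[string]int{}}
	r := &rides{charger: b}
	if err := r.EndRide("user-7", 350); err != nil {
		panic(err)
	}
	fmt.Println("billed cents:", b.ledger["user-7"])
}
```

Note what stays cheap: the call across the boundary is still a function call, and if billing ever does need to become a service, the interface is already the seam.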
This isn’t easy. It requires code review discipline and a team that cares about boundaries. But it’s dramatically simpler than operating a fleet of services, and it keeps the option to extract a service later when you have a concrete reason.
When you actually need services
There are legitimate reasons to split. I’ve seen three that hold up in practice:
Radically different scaling needs. If one component handles 100x the traffic of everything else, separating it can save real money and improve reliability. At Dropbyke, the bike location tracking service eventually did need to be separate because it processed GPS updates at a rate that would have required over-provisioning the entire monolith.
Independent team ownership. When you have genuinely separate teams that are blocked by coordinated releases, aligning service boundaries to team boundaries unblocks delivery. But this is a team problem, not a technology problem. If you have one team, you don’t have this problem.
Hard compliance boundaries. PCI, SOX, specific regulatory requirements that mandate isolation. These are real constraints, not aspirational architecture goals.
Notice what isn’t on the list: “because Netflix does it” or “because we might need to scale someday.” Premature optimization applied to architecture is just as wasteful as premature optimization applied to code.
If you’re going to split, do it incrementally
If you have a monolith and genuine reasons to extract a service, don’t rewrite. Extract one piece. Run it alongside the monolith. Learn everything about operating two things instead of one. Then decide if the trade-off was worth it before extracting the next piece.
The operational foundations you need are the same ones that make a monolith healthy: good logging, monitoring, deployment automation, and incident response. Build those first. They pay off regardless of your architecture.
The bottom line
Microservices are a trade-off, not an upgrade. They buy organizational independence at the cost of operational complexity. For most teams I talk to, the honest answer is: you aren’t big enough to need them yet, and your monolith isn’t the thing slowing you down.
Ship features. Keep your boundaries clean. Split when the pain of coordination is real and measured, not hypothetical. That’s the job.