Quick take
Stop. You probably don’t need microservices. If you do, the strangler pattern is the only sane approach, your shared database is the real problem, and any team that can’t operate a healthy monolith will absolutely drown in a distributed system.
I’ve been building the backend for Decloud in Go since joining EF earlier this year. Before that I ran engineering at a fintech startup and Dropbyke. Across all three, the microservices question came up. In two of those cases, the correct answer was “not yet.” Here’s what I’ve learned about knowing the difference.
Do you actually have a monolith problem?
Most teams that want microservices have a process problem dressed up as an architecture problem. Slow deploys, blocked releases, painful coordination – these feel like monolith issues, but usually they’re symptoms of missing automation, weak tests, or unclear ownership.
Ask yourself:
- Are multiple teams genuinely blocked by shared release cycles? Not annoyed. Blocked.
- Do different parts of your system need fundamentally different scaling or uptime guarantees?
- Do clear product boundaries already exist in your codebase, or are you hoping microservices will create them?
If you answered “no” to most of these, you don’t need microservices. You need a better monolith. Invest in CI/CD, write tests, add feature flags, and stop doing synchronized deploys.
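Feature flags are the cheapest of those investments. A minimal sketch of an in-process flag store in Go, assuming nothing beyond the standard library (the type and flag names here are illustrative, not from any real flag service):

```go
package main

import (
	"fmt"
	"sync"
)

// FlagStore is a minimal in-process feature-flag store.
// In production this would be backed by config or a flag service;
// the names are hypothetical.
type FlagStore struct {
	mu    sync.RWMutex
	flags map[string]bool
}

func NewFlagStore() *FlagStore {
	return &FlagStore{flags: make(map[string]bool)}
}

func (s *FlagStore) Set(name string, on bool) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.flags[name] = on
}

// Enabled defaults to false for unknown flags, so new code paths
// stay dark until explicitly turned on.
func (s *FlagStore) Enabled(name string) bool {
	s.mu.RLock()
	defer s.mu.RUnlock()
	return s.flags[name]
}

func main() {
	flags := NewFlagStore()
	flags.Set("new-portfolio-view", true)

	if flags.Enabled("new-portfolio-view") {
		fmt.Println("serving new portfolio view")
	} else {
		fmt.Println("serving legacy view")
	}
}
```

The default-to-off behavior is the point: it lets you merge and deploy unfinished work without synchronized releases.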
The uncomfortable prerequisite
Here’s the part nobody wants to hear. Before you can successfully run microservices, your monolith needs to already be healthy. Specifically:
- Automated tests that run fast and actually catch regressions
- Deploys measured in minutes, not hours
- Centralized logging and alerting that someone actually looks at
- Clear module boundaries in the codebase
- Feature flags for safe releases
If you can’t do these things with one service, you won’t magically do them with twelve. Microservices multiply your operational surface area. Every gap becomes a canyon.
At the fintech startup we had a monolith handling market data, user portfolios, and news aggregation. The temptation to split was strong. But our deploy pipeline was slow and our test coverage had gaps. Splitting would have made both problems worse. We fixed the fundamentals first. Most of the pressure to split evaporated once deploys were fast and modules were cleanly separated.
When it’s actually time
Sometimes it’s genuinely time. At Decloud, we’re building a cloud infrastructure product where the billing service has completely different scaling characteristics than the provisioning engine. Billing needs to be rock-solid and auditable. Provisioning needs to be fast and horizontally scalable. Different teams own them. That’s a real reason to split.
The strangler pattern is the only approach I’d recommend:
- Phase 1: All traffic hits the monolith
- Phase 2: /billing routes to the billing service, everything else stays
- Phase 3: Most traffic hits services, small core remains
You extract one domain at a time. The rest stays stable. You can reverse any step. Progress is visible. This isn’t glamorous work. It takes months. But it’s the kind of work that actually ships.
Pick your first extraction carefully
Your first service should be boring. Not your most critical business logic – that’s your worst option. Pick something that:
- Has a small API surface
- Changes frequently (so you see the benefits fast)
- Has a clear owner
- Has obvious scaling needs separate from the rest
At Dropbyke, if we had gone the microservices route, the GPS tracking ingestion pipeline would have been the obvious first candidate. High throughput, clear boundary, independent scaling needs, and a single team owned it. The user management system? Terrible candidate. It touched everything.
The database is the real problem
Everybody focuses on splitting code. The hard part is splitting data.
Shared databases are what turn “microservices” into a distributed monolith. If two services read from the same table, you haven’t decoupled anything. You’ve just added network hops to your coupling.
Before you extract a service:
- Map every table to its consumers. All of them.
- Assign a single owner per dataset.
- Expose shared data through an API, not a shared connection string.
- Run dual-writes temporarily during migration. Emphasis on temporarily.
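A dual-write can be as simple as a wrapper that treats the old store as the source of truth and mirrors to the new one. This is a sketch with toy in-memory stores; the `Order` fields and store names are assumptions for illustration:

```go
package main

import (
	"errors"
	"fmt"
	"log"
)

// Order is the record being migrated; fields are illustrative.
type Order struct {
	ID         string
	TotalCents int64
}

// Store abstracts the old monolith database and the new service database.
type Store interface {
	SaveOrder(o Order) error
}

// DualWriter keeps the old store as the source of truth and mirrors
// writes to the new store. A failed mirror write is logged for later
// reconciliation instead of failing the request -- acceptable only
// because this is a temporary migration step.
type DualWriter struct {
	OldStore Store // source of truth during migration
	NewStore Store
}

func (d *DualWriter) SaveOrder(o Order) error {
	if err := d.OldStore.SaveOrder(o); err != nil {
		return err // primary write failed: surface the error
	}
	if err := d.NewStore.SaveOrder(o); err != nil {
		log.Printf("mirror write failed for order %s: %v (reconcile later)", o.ID, err)
	}
	return nil
}

// memStore is a toy in-memory Store for the example.
type memStore struct {
	orders map[string]Order
	fail   bool
}

func (m *memStore) SaveOrder(o Order) error {
	if m.fail {
		return errors.New("store unavailable")
	}
	m.orders[o.ID] = o
	return nil
}

func main() {
	primary := &memStore{orders: map[string]Order{}}
	mirror := &memStore{orders: map[string]Order{}}
	dw := &DualWriter{OldStore: primary, NewStore: mirror}
	_ = dw.SaveOrder(Order{ID: "o-1", TotalCents: 4200})
	fmt.Println(len(primary.orders), len(mirror.orders)) // 1 1
}
```

The asymmetry is deliberate: the new store can fail without user impact while you build confidence in it, and a reconciliation job catches the gaps.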
A transitional read view can buy you time:
```sql
CREATE VIEW payments_order_read AS
SELECT id, total_cents, currency, user_id, status
FROM orders
WHERE status IN ('pending_payment', 'paid');
```
But set a date to kill it. Transitional things that don’t have end dates become permanent things.
Integration: keep it boring
For service-to-service communication, my strong preference is gRPC for synchronous calls and a durable message queue for async. In Go, the gRPC tooling is excellent and the generated clients mean one less thing to get wrong.
A few rules:
- Every sync call needs a timeout, a retry budget, and a circuit breaker. No exceptions.
- Async handlers must be idempotent. You will get duplicate messages. Plan for it.
- Request IDs for tracing. Everywhere. Non-negotiable.
Skip the complex choreography patterns until your observability is genuinely mature. Start with simple orchestration. You can get clever later. You probably won’t need to.
How you know it worked
You’ll know the migration is working when deployment frequency goes up per service, not when you have more services. More services with the same deploy cadence just means you added complexity for free.
Track lead time from commit to production. Track how often deploys fail and need rollback. Track incidents caused by cross-service dependencies. If those numbers aren’t improving, the migration isn’t helping regardless of how clean the architecture diagram looks.
Microservices can genuinely unlock velocity when the conditions are right. But “our monolith feels messy” isn’t one of those conditions. Fix your house before you build an addition.