One thing NATO cyber defense work hammered into me: the perimeter is a lie. Networks get breached. Insiders go rogue. A “trusted” subnet is just a subnet you haven’t caught an attacker on yet.
So when I started building the fintech startup’s infrastructure, I refused to use the castle-and-moat model. Flat internal networks with a firewall at the edge? That’s how you get lateral movement from a single compromised laptop to your production database in under four minutes. I’ve seen it happen. Not theoretically. In exercises, in real incident reports, in postmortems that never should have been necessary.
Zero trust was the alternative. Not a product we bought. A posture we adopted.
What zero trust actually means
Every request gets authenticated and authorized. Doesn’t matter if it comes from an office IP, a developer’s home Wi-Fi, or a service running in our own cluster. No one gets a free pass because they logged in once. No service gets implicit access because it sits on the same VLAN.
You design as if a breach already happened. Then you make it hard for that breach to go anywhere.
At the fintech startup, this meant shifting the security boundary from the network to identity. Users prove who they are. Services prove who they are. Devices prove they’re healthy. Policies decide what’s allowed right now — not based on some access grant from three months ago that nobody reviewed.
The pieces that matter
Never trust, always verify. Each call gets validated with fresh signals: user identity, service identity, device posture, risk context. Not “you authenticated an hour ago, good enough.” Fresh. Every time.
Assume breach. We removed implicit trust paths, encrypted all internal traffic, and put hard guardrails around lateral movement. If one service gets popped, it can’t crawl to the next.
Least privilege, enforced ruthlessly. This isn’t a checkbox on an audit form. It’s constant pruning. People and services get exactly what they need. Nothing more. And those grants expire.
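In Python, the three principles together look roughly like this. Everything here is an illustrative sketch, not our actual code: the `Grant` record, the signal names, and `verify_request` are stand-ins for the real identity provider and policy engine.

```python
import time
from dataclasses import dataclass

@dataclass
class Grant:
    principal: str
    resource: str
    expires_at: float  # epoch seconds; every grant carries a TTL

def verify_request(principal, resource, grants, signals, now=None):
    """Re-evaluate every signal on every call: no cached 'yes'."""
    now = now if now is not None else time.time()
    if not signals.get("identity_verified"):
        return False  # user or service failed fresh authentication
    if not signals.get("device_healthy"):
        return False  # device posture is a first-class signal
    # Least privilege: an explicit, unexpired grant must exist.
    return any(
        g.principal == principal
        and g.resource == resource
        and g.expires_at > now
        for g in grants
    )

grants = [Grant("svc-frontend", "api:read", expires_at=time.time() + 3600)]
ok = verify_request(
    "svc-frontend", "api:read", grants,
    {"identity_verified": True, "device_healthy": True},
)
```

The point of the shape: the answer is computed from scratch each time, and an expired grant fails closed even when every other signal is green.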
Micro-segmentation made this real for us. Instead of a handful of broad network zones, we defined explicit service-to-service dependencies and enforced them. Our frontend could reach the API it depended on. It couldn’t reach the database. Period. Moving between segments required the same authentication you’d demand at the edge.
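Stripped to its essence, the segmentation policy is a default-deny allowlist of service-to-service edges. A toy version (service names are illustrative):

```python
# Explicit service-to-service dependencies, enforced with default deny.
ALLOWED_EDGES = {
    ("frontend", "api"),   # frontend may reach the API it depends on
    ("api", "database"),   # only the API tier reaches the database
}

def may_connect(src: str, dst: str) -> bool:
    # A connection is legal only if the edge was explicitly declared.
    return (src, dst) in ALLOWED_EDGES

frontend_to_api = may_connect("frontend", "api")        # allowed
frontend_to_db = may_connect("frontend", "database")    # denied: no declared edge
```

The real enforcement lived in network policy and the mesh, but the mental model is exactly this set of declared edges.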
Device trust was the other critical piece. A valid user on a compromised laptop is still a threat. We treated device health as a first-class signal — managed, patched, compliant devices got deeper access. Unknown or outdated machines got restricted. Sessions expired. Risk-based challenges kicked in when behavior looked unusual.
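The tiering logic is simple to sketch. The posture fields and thresholds below are hypothetical; the real signals came from device management tooling:

```python
def access_tier(device: dict) -> str:
    """Map device posture to an access tier. Thresholds are illustrative."""
    if device.get("managed") and device.get("patched") and device.get("compliant"):
        return "full"        # healthy, managed device: deeper access
    if device.get("managed"):
        return "restricted"  # known but out of date: limited access
    return "blocked"         # unknown machine: no implicit trust

tier = access_tier({"managed": True, "patched": False})
```

A valid credential on a `blocked` device gets you nothing, which is the whole point.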
How we made it concrete
We put an access proxy in front of every internal application. Users hit the proxy over HTTPS, it checked identity and device state, evaluated policy, then decided whether to let traffic through. No VPN needed. Same control plane for every app. Developers could work from anywhere without us losing visibility or control.
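The proxy's decision chain, boiled down: authenticate, check device posture, evaluate policy, then allow or deny. This is a self-contained toy, with a stand-in credential store and policy table rather than anything we actually ran:

```python
USERS = {"token-alice": "alice"}                # stand-in identity provider
POLICY = {("alice", "billing-app"): {"full"}}   # who may reach what, at which tier

def device_tier(device: dict) -> str:
    if device.get("managed") and device.get("patched"):
        return "full"
    return "blocked"

def proxy_decision(request: dict):
    """Access-proxy check chain: identity, then device, then policy."""
    user = USERS.get(request.get("credential"))
    if user is None:
        return ("deny", "authentication failed")
    tier = device_tier(request.get("device", {}))
    if tier == "blocked":
        return ("deny", "unknown or unhealthy device")
    if tier not in POLICY.get((user, request["app"]), set()):
        return ("deny", "no policy grant")
    return ("allow", user)

decision = proxy_decision({
    "credential": "token-alice",
    "device": {"managed": True, "patched": True},
    "app": "billing-app",
})
# decision == ("allow", "alice")
```

Every app sits behind the same chain, which is what makes "no VPN, same control plane" workable.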
Inside the network, we used a service mesh. Services talked with authenticated identities, connections were encrypted end-to-end, and policies were enforced consistently. No team had to reinvent auth. The mesh handled it.
API gateways centralized authn/authz for external traffic, but — and this was non-negotiable — backend services still validated the identity they received. Trusting the gateway blindly would’ve just recreated a perimeter inside the network. Defeats the entire point.
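Here's what "the backend still validates" means in miniature. This sketch uses a shared-secret HMAC token so it stays self-contained; our real setup used proper rotated keys, and the token format here is invented for illustration:

```python
import base64
import hashlib
import hmac
import json

SECRET = b"demo-shared-secret"  # illustrative only; use rotated or asymmetric keys

def sign(claims: dict) -> str:
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    mac = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + mac

def backend_validate(token: str):
    """The backend verifies the signature itself. It never assumes the
    gateway already did; trusting the hop would rebuild a perimeter."""
    body, _, mac = token.partition(".")
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(mac, expected):
        return None
    return json.loads(base64.urlsafe_b64decode(body))

tok = sign({"sub": "svc-frontend", "scope": "orders:read"})
claims = backend_validate(tok)
tampered = backend_validate(tok[:-1] + ("0" if tok[-1] != "0" else "1"))
```

The gateway can strip, rate-limit, and route, but the cryptographic check happens again at the service. Cheap insurance against a compromised hop.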
Getting there without burning everything down
We didn’t flip a switch. We started by mapping what we actually had. Asset inventory, data classification, real access patterns — not what the diagrams said, but what actually happened on the wire.
Then we prioritized. Critical data first. High-risk access paths. Places where we could tighten things without rewriting the entire stack. Visibility came before enforcement. We ran in audit mode for weeks, watching what would get blocked, fixing false positives before they became outages.
Soft enforcement first. Hard enforcement only after we trusted the policies.
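The rollout pattern fits in a few lines: the same policy check runs in both modes, but audit mode only logs what it would have blocked. The policy callable below is a hypothetical stand-in:

```python
import logging

def evaluate(request, policy_allows, mode="audit"):
    """Run the real policy in both modes; only 'enforce' actually blocks."""
    if policy_allows(request):
        return True
    if mode == "audit":
        # Visibility before enforcement: log the would-be block,
        # let traffic flow, fix false positives before outages.
        logging.warning("would block: %s", request)
        return True
    return False  # hard enforcement, only once the policies are trusted

deny_all = lambda req: False  # toy policy to show the two modes
audit_ok = evaluate({"svc": "frontend"}, deny_all, mode="audit")
enforced = evaluate({"svc": "frontend"}, deny_all, mode="enforce")
```

Weeks of reading those "would block" logs is what let us flip to enforcement without breaking production.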
The cultural part was harder than the technical part, honestly. Developers had to treat service-to-service auth as normal work, not a security team annoyance. Ops had to run a more intentional network. Everyone had to understand why the extra checks existed. We explained the reasoning, automated everything we could, and made the secure path the easy path.
Where it gets ugly
Legacy systems. We had components that couldn’t speak modern auth protocols. So we put proxies in front of them and drew tighter network boundaries as compensating controls. Not elegant. Effective.
Complexity scales fast. Without policy automation, consistent tooling, and solid observability, you drown. We invested early in all three. Not optional.
Performance concerns came up constantly. Engineers worried about latency from all the extra auth checks. Valid concern. We solved it with aggressive caching, efficient token validation, and connection reuse. The overhead ended up negligible.
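The caching piece looks roughly like this: validate a token once, reuse the result for a short TTL so the per-request overhead collapses to a dict lookup. `validate_slow` stands in for real signature verification; the class and TTL are illustrative:

```python
import time

class TokenCache:
    """Cache successful token validations for a short TTL."""

    def __init__(self, validate_slow, ttl=30.0):
        self._validate = validate_slow
        self._ttl = ttl
        self._cache = {}  # token -> (claims, cached_at)

    def validate(self, token, now=None):
        now = now if now is not None else time.time()
        hit = self._cache.get(token)
        if hit and now - hit[1] < self._ttl:
            return hit[0]              # fast path: recent cached result
        claims = self._validate(token) # slow path: full validation
        if claims is not None:
            self._cache[token] = (claims, now)
        return claims

calls = []
def validate_slow(tok):
    calls.append(tok)  # track how often the expensive path runs
    return {"sub": "alice"} if tok == "good" else None

cache = TokenCache(validate_slow, ttl=30.0)
cache.validate("good", now=0.0)
cache.validate("good", now=10.0)  # within TTL: served from cache
```

The TTL is the knob: short enough that revocation still bites quickly, long enough that hot paths almost never pay for full validation.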
The real risk is developer experience. If the secure path is painful, people route around it. Every time. So we built self-service tooling, published clear patterns, and made the defaults safe. Security that fights the developer loses.
Where we ended up
Zero trust isn’t a finish line. It’s an operating model. What we got at the fintech startup was a smaller blast radius, clear visibility into who accessed what and when, and an architecture that matched how modern distributed systems actually work. Not a perimeter pretending the world hasn’t changed.
Start with identity. Tighten access. Segment traffic. Keep iterating. That’s the whole playbook.