Quick take
Zero trust means “stop trusting the network.” Every request gets authenticated and authorized regardless of where it comes from. Implementation order: identity first (MFA + service identity), then segmentation (default deny between services), then continuous verification (device posture, context signals). Most companies fail by buying a product instead of changing their architecture.
I’ve seen zero trust from two very different angles. My NATO background involved defense systems where “trust nothing” wasn’t a buzzword – it was the baseline assumption. Everything was compartmentalized. Access was explicit, scoped, and audited. You didn’t get access to a system because you were on the right network. You got access because you had the right clearance, the right need-to-know, and the right authentication.
Fast forward to working at a major telecom in 2021. They had a corporate VPN and the assumption that anything inside the VPN was trusted. Thousands of employees, contractors, and partners – all on the same flat network. When I asked about lateral movement controls, the answer was basically “we have firewalls at the perimeter.” The perimeter. In 2021. With half the workforce remote.
That gap between defense-grade security thinking and typical enterprise reality is where zero trust lives.
What zero trust actually means
Strip away the vendor marketing and zero trust is four ideas:
Verify explicitly. Every request is authenticated and authorized. Not just at the front door – at every service boundary. A valid network connection isn’t a valid credential.
Least privilege. Access is scoped to what you need, when you need it. Privileged access is temporary, auditable, and rare. No permanent admin tokens sitting in CI pipelines.
Assume breach. Design your architecture so that a compromised service or endpoint has limited blast radius. If an attacker gets into one service, they shouldn’t be able to reach everything.
No implicit trust. Internal traffic isn’t automatically trusted. The network is a transport layer, not a security boundary.
NIST SP 800-207 formalized this in 2020. It’s worth reading. The core idea: access decisions are policy-driven and tied to identity, device posture, and context – not IP addresses.
Identity is the control plane
This is where implementation starts. Everything else depends on knowing who or what is making a request.
For humans: MFA everywhere. Non-negotiable. Ideally FIDO2/WebAuthn for phishing resistance. At minimum, app-based TOTP. SMS is better than nothing but barely. Centralize on one SSO provider so policy changes propagate everywhere.
At one telecom, we found 14 different authentication systems. Some services used LDAP directly. Some had their own user databases. Two services still accepted basic auth over internal HTTP. Consolidating to a single identity provider took months but it was the foundation for everything else.
For services: Every service gets an identity. Short-lived credentials. Mutual TLS between services where possible. OAuth 2.0 / OIDC for API-to-API access. No long-lived static tokens.
The pattern I’ve been recommending:
Client -> Identity Provider -> Short-lived JWT -> Service A
Service A -> mTLS with SPIFFE identity -> Service B
Service B -> IAM role (scoped) -> Database
Each hop has its own authentication. Each credential is scoped and temporary. If Service A is compromised, the attacker gets Service A’s permissions – not Service B’s database credentials.
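A minimal sketch of the first hop – minting and verifying a short-lived, audience-scoped token. This uses HS256 with a shared secret for brevity; a real identity provider would use asymmetric keys and published JWKS, and the function names and 5-minute TTL here are illustrative assumptions:

```python
import base64
import hashlib
import hmac
import json
import time

def _b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def _unb64url(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def issue_token(secret: bytes, subject: str, audience: str, ttl: int = 300) -> str:
    """Mint a short-lived HS256 JWT scoped to a single audience (5-minute default)."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    now = int(time.time())
    claims = _b64url(json.dumps(
        {"sub": subject, "aud": audience, "iat": now, "exp": now + ttl}
    ).encode())
    sig = _b64url(hmac.new(secret, f"{header}.{claims}".encode(), hashlib.sha256).digest())
    return f"{header}.{claims}.{sig}"

def verify_token(secret: bytes, token: str, audience: str) -> dict:
    """Reject bad signatures, tokens scoped to other services, and expired tokens."""
    header, claims, sig = token.split(".")
    expected = _b64url(hmac.new(secret, f"{header}.{claims}".encode(), hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise PermissionError("bad signature")
    payload = json.loads(_unb64url(claims))
    if payload["aud"] != audience:
        raise PermissionError("token not scoped to this service")
    if payload["exp"] < time.time():
        raise PermissionError("token expired")
    return payload
```

The audience check is what keeps a token stolen from Service A from being replayed against Service B – each credential is useful at exactly one hop.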
Segmentation: default deny
The network architecture piece that makes zero trust tangible. Every service-to-service connection is explicitly allowed. Everything else is denied.
In Kubernetes, this starts with NetworkPolicies:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: orders-service
spec:
  podSelector:
    matchLabels:
      app: orders
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: api-gateway
      ports:
        - port: 8080
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: postgres
      ports:
        - port: 5432
The orders service can only receive traffic from the API gateway and can only talk to its own database. Nothing else. If an attacker compromises the orders service, they can’t pivot to the user service, the payment service, or anything else on the network.
This is basic stuff but I’ve seen it absent at companies with hundreds of engineers. The excuse is always “we’ll add network policies later.” Later never comes.
If you’re running a service mesh (Istio, Linkerd), you get mTLS and authorization policies on top of network segmentation. The mesh handles the identity and encryption at the transport layer. Your services don’t need to implement TLS themselves.
Device posture
With remote work as the default now, you can’t assume devices are on a managed network. Device trust needs to be part of the access decision.
Signals that matter:
- Is this a managed/enrolled device?
- Is the OS patched?
- Is disk encryption enabled?
- Is EDR running?
An unmanaged device connecting from an unusual location at 3am should get constrained access at best. Not blocked entirely – that breaks usability – but restricted to lower-sensitivity resources until additional verification happens.
At that telecom, we implemented tiered access. Managed devices with current patches got full access. Unmanaged devices got read-only access to a subset of tools. Unknown devices got nothing except the enrollment portal.
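The tiering logic above can be sketched as a small policy function. The tier names and the exact signal thresholds are illustrative – real posture data would come from your MDM/EDR, not a dataclass:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DevicePosture:
    enrolled: bool        # managed/enrolled in MDM
    os_patched: bool      # OS at current patch level
    disk_encrypted: bool  # full-disk encryption on
    edr_running: bool     # endpoint detection and response agent active

def access_tier(posture: Optional[DevicePosture]) -> str:
    """Map device posture signals to an access tier (thresholds are illustrative)."""
    if posture is None:
        return "enrollment-only"   # unknown device: nothing but the enrollment portal
    if posture.enrolled and posture.os_patched and posture.disk_encrypted and posture.edr_running:
        return "full"              # healthy managed device
    return "read-only"             # unmanaged or out of compliance: low-sensitivity subset
```

Note that a managed-but-unpatched device lands in the same restricted tier as an unmanaged one – compliance decays, so the check has to run on every access decision, not once at enrollment.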
The implementation path
This is how I recommend rolling it out. Four phases, roughly in order:
Phase 1: Inventory and visibility. You can’t secure what you don’t know about. Map your identities (human and service), your assets, and your data flows. Turn on logging for authentication and authorization decisions. This alone will surface problems you didn’t know you had.
Phase 2: Identity. Enforce MFA for all human access. Standardize on SSO. Establish service identities with short-lived credentials. Kill long-lived static tokens.
Phase 3: Segmentation. Implement default deny between services. Start with your most sensitive systems (payments, PII, auth) and expand outward. Network policies in Kubernetes, security groups in AWS, or service mesh authorization policies.
Phase 4: Continuous verification. Add device posture to access decisions. Implement risk-based re-authentication. Monitor for anomalous behavior. This phase never really ends – it’s ongoing tuning.
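The risk-based re-authentication in Phase 4 usually boils down to a scoring function over context signals. A minimal sketch – the signal names, weights, and thresholds here are invented for illustration and would be tuned against your own telemetry:

```python
def risk_score(signals: dict) -> int:
    """Sum weighted risk signals into a score (weights are illustrative)."""
    weights = {
        "new_device": 30,
        "unusual_location": 25,
        "off_hours": 15,
        "sensitive_resource": 20,
        "impossible_travel": 60,
    }
    return sum(w for name, w in weights.items() if signals.get(name))

def access_decision(signals: dict) -> str:
    """Allow low-risk requests, force step-up MFA on medium risk, deny high risk."""
    score = risk_score(signals)
    if score >= 60:
        return "deny"
    if score >= 30:
        return "step-up-mfa"   # risk-based re-authentication
    return "allow"
```

The "never really ends" part is the weights: every new signal source (EDR verdicts, travel patterns, session age) changes the scoring, which is why this phase is ongoing tuning rather than a one-time rollout.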
What companies get wrong
Buying a product. Zero trust is an architecture, not a SKU. I’ve seen companies spend millions on a “zero trust platform” and still have flat networks, long-lived credentials, and no service-to-service authentication. The product doesn’t help if the architecture doesn’t change.
Ignoring legacy. The oldest, crustiest systems are often the most sensitive. Leaving them outside the zero trust model because they “can’t support it” means your biggest risk is unprotected.
Breaking developer workflows. If the security controls are so painful that developers route around them – hardcoding credentials, disabling MFA on service accounts, punching holes in network policies – you’ve made things worse, not better. Security that people circumvent is security theater.
VPN as a crutch. A VPN puts you on the network. That’s it. It’s not authentication. It’s not authorization. It’s not segmentation. I still see organizations where VPN access grants implicit trust to hundreds of internal services. That’s a perimeter model with extra steps.
The honest assessment
Zero trust is the right direction. The perimeter model is dead and pretending otherwise is dangerous. But implementation is hard, slow, and requires sustained investment. It’s a multi-year journey for most organizations, not a quarterly initiative.
Start with identity. Get that right. Then segment. Then add continuous verification. Don’t try to do everything at once. And don’t let a vendor tell you their product is the answer. The answer is architecture change, and that’s harder than writing a purchase order.