Zero Trust Can't Save a Flawed Architecture

Written by davidiyanu | Published 2026/03/18
Tech Story Tags: architecture | zero-trust | cicd | security | devops | software-architecture | mfa | zero-trust-architecture

TL;DR: Zero Trust is a control philosophy, not a foundation. Deploying identity-aware proxies and MFA on top of flat networks, unreviewed firewall policies, and stale identity stores doesn't implement Zero Trust — it decorates the old perimeter model with new tooling. The attack surface contracts only as fast as the underlying architecture improves. Inventory your assets, audit your standing rules, map your east-west flows, and clean your identity plane before claiming the model. Otherwise you've bought the label, not the protection.

Zero Trust gets sold the way enterprise software usually gets sold — as the thing that finally closes the gap between where your security posture is and where it needs to be. Vendors demo it on clean infrastructure. The slides show identity-aware proxies, continuous verification loops, device posture checks, and micro-segmented east-west traffic that stops lateral movement cold. The CISO nods. The board approves the budget line. And then the team inherits the actual network: a fifteen-year accumulation of VLAN sprawl, hard-coded service accounts, subnet whitelists nobody remembers writing, and three different VPN concentrators serving populations of users whose access has never been formally reviewed.


That's the architecture Zero Trust gets bolted onto. Not the clean diagram. The real one.


The model itself isn't wrong. "Never trust, always verify" is a sound philosophical inversion of the perimeter model, which assumed that anything inside the castle wall was probably friendly. The problem is that Zero Trust is a control philosophy, not a substrate. It tells you how to evaluate access decisions — continuously, contextually, with minimal implicit privilege. It doesn't conjure the network segmentation, the identity plane, or the asset inventory that those decisions depend on. If those foundations are absent or degraded, Zero Trust controls have nothing coherent to enforce against. They become expensive, well-marketed decorations on a structurally unsound building.

What "flat" actually means at the packet level

When practitioners say a network is "flat," they mean that broadcast domains are too large, that east-west traffic flows without inspection, and that a workstation in one department can initiate connections to database servers in another without traversing any enforcement point. The blast radius of a single compromised endpoint, under these conditions, is effectively the entire network reachable from that endpoint's subnet — which, in many enterprise flat networks, is most of it.


VLANs were supposed to fix this. And they do provide some segmentation — Layer 2 isolation, separate broadcast domains. But VLAN segmentation is routinely misconfigured. Inter-VLAN routing lives on a core switch or router with ACLs that were written when the applications were first deployed and haven't been substantively reviewed since. Those ACLs tend to be permissive because the engineers who wrote them were solving an operational problem (getting application X to talk to database Y) under time pressure, not performing a least-privilege design exercise. Over time, the ruleset grows. Rules get added. Rules almost never get removed, because removal requires understanding what depends on them, and that documentation was never written either.


The result is segmentation in name only. The VLANs exist. The labels on the switch ports say "PCI," "Corp," and "DMZ." But the routing policy between them is wide enough that a threat actor with a foothold in the corporate segment can enumerate and reach production systems without ever triggering a detection. Zero Trust controls deployed on top of this — an identity-aware proxy here, a microsegmentation agent there — don't automatically correct the underlying routing. They create additional enforcement points, but the old paths remain.

The standing rule problem is worse than it looks

FireMon has documented this systematically, and any engineer who has spent time in a mature firewall environment has seen it firsthand: policy rulesets accumulate standing rules that no longer reflect current intent. A subnet whitelist created to allow legacy on-premises payroll access persists for years after the payroll application was migrated to SaaS. The engineers who knew why it existed have left. Nobody has a complete picture of what breaks if it's removed. So it stays.


Static, IP-based trust assumptions are the particular pathology here. IP addresses are infrastructure attributes, not identity assertions. A /24 subnet whitelisted in your firewall policy trusts the address, not the entity. If an attacker compromises a host in that range — through credential stuffing on an exposed SSH service, through a phishing payload, through an unpatched vulnerability in a web-facing application — they inherit whatever the address was trusted to do. The firewall sees a packet from a known-good IP and lets it through. The Zero Trust proxy, if it sits downstream, may never see the traffic at all.


This is what makes standing rules so insidious. They're not bugs in the implementation of Zero Trust — they're exemptions from it. Every standing IP-based rule is a carved-out tunnel through the verification model. An organization with two hundred firewall policies and thirty legacy subnet whitelists hasn't implemented Zero Trust with some gaps. It has implemented Zero Trust for the traffic that happens to flow through the new enforcement points, and preserved the old perimeter model for everything else.

Why implementations stall — and where they actually stop

The typical Zero Trust rollout starts at the remote access layer. That's the low-hanging fruit: replace the VPN with an identity-aware proxy, enforce MFA, and check device posture before granting a session. It's visible, it has a clear before-and-after state, and it solves a real problem. Remote access was always a conceptually awkward extension of the perimeter model — tunneling users into the trusted zone — and replacing it with per-session, context-aware access grants is a genuine improvement.


But then the project hits internal east-west traffic, and things get complicated fast. Microsegmenting a production environment requires understanding what talks to what. That means application dependency mapping, which means either instrumentation work (deploying agents, capturing flow telemetry) or manual documentation work (interviewing application owners who are often unsure, consulting runbooks that are often wrong). The mapping phase alone frequently takes months. During that time, other priorities surface, budgets get scrutinized, and the team doing the work gets pulled onto incident response. The microsegmentation project gets deprioritized.


Meanwhile, the tokens and the proxies that were deployed in phase one create a false sense of completeness. The dashboard shows Zero Trust controls in place. The quarterly security review slides say "Zero Trust: In Progress." But the internal network remains largely flat, the legacy policies remain in place, and the actual attack surface hasn't meaningfully contracted.


This is the gap between Zero Trust as a framework and Zero Trust as an operational state. The framework can be adopted immediately — it's a set of principles. The operational state requires sustained architectural work that most organizations underestimate by a factor of two or three when they're planning the project.

The identity plane is necessary but not sufficient

One of the cleaner ideas in the Zero Trust model is the elevation of identity to the primary enforcement context. Instead of asking "is this source IP allowed to reach this destination IP," the enforcement point asks "is this authenticated principal, operating this device with this posture, from this location, authorized to access this resource at this time." That's a richer policy surface. It can express things that IP-based rules fundamentally cannot.


But the identity plane only works if the identity data is trustworthy and complete. If your directory has stale accounts — former employees, service accounts with overly broad roles, shared credentials that three teams know the password to — then identity-aware enforcement is making access decisions based on contaminated data. The ex-employee whose account was never deprovisioned can authenticate and get access decisions made on their behalf. The service account with domain admin rights that was created for a one-time migration three years ago is still there, still authenticable, still in scope for whatever enforcement policy applies to privileged identities.
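A minimal sketch of the hygiene check this implies, assuming a directory export flattened into invented field names (a real audit would pull from your IdP's sign-in logs):

```python
from datetime import datetime, timedelta

# Illustrative directory export. Names, dates, and fields are made up.
accounts = [
    {"name": "jsmith", "last_login": "2023-01-10", "enabled": True, "type": "human"},
    {"name": "svc-migration", "last_login": "2022-06-01", "enabled": True,
     "type": "service", "roles": ["domain-admin"]},
    {"name": "aperez", "last_login": "2026-03-01", "enabled": True, "type": "human"},
]

def stale_accounts(accounts, max_idle_days=180, today=datetime(2026, 3, 18)):
    """Flag enabled accounts with no sign-in inside the idle window."""
    cutoff = today - timedelta(days=max_idle_days)
    return [a["name"] for a in accounts
            if a["enabled"]
            and datetime.strptime(a["last_login"], "%Y-%m-%d") < cutoff]

print(stale_accounts(accounts))  # ['jsmith', 'svc-migration']
```

Note that the stale service account with domain-admin roles surfaces alongside the departed human: both are enabled, authenticable principals that the enforcement layer will happily make decisions about.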


And then there are the non-human identities: API keys, certificates, OAuth tokens issued to applications. These proliferate faster than human accounts in most modern environments; they're harder to enumerate, and they're frequently configured with broader permissions than their task requires. A Zero Trust framework that rigorously enforces human identity verification while leaving a sprawl of long-lived API keys rotating on six-year schedules has addressed half the authentication problem and none of the service-to-service trust problem.
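A sketch of what enumerating that sprawl looks like, assuming a key inventory with invented identifiers; in practice the input would be a cloud provider's credential report or a secrets manager export, not a hard-coded list.

```python
from datetime import date

# Hypothetical key inventory. IDs, dates, and scope strings are invented.
api_keys = [
    {"id": "key-ci-deploy", "created": date(2020, 4, 2), "scopes": ["*"]},
    {"id": "key-metrics-ro", "created": date(2026, 1, 5), "scopes": ["metrics:read"]},
]

def overdue_keys(keys, max_age_days=90, today=date(2026, 3, 18)):
    """Flag keys past the rotation window or holding wildcard scopes."""
    findings = []
    for k in keys:
        age = (today - k["created"]).days
        if age > max_age_days:
            findings.append((k["id"], f"unrotated for {age} days"))
        if "*" in k["scopes"]:
            findings.append((k["id"], "wildcard scope"))
    return findings

for key_id, reason in overdue_keys(api_keys):
    print(key_id, "-", reason)
```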

What a careful builder would actually do

Start with inventory. Not a vendor-provided asset discovery scan, which will find the things that respond to probes and miss the things that don't. A real inventory effort, cross-referencing DNS records, DHCP leases, firewall logs, cloud provider resource tags, and manual walkthroughs of the physical and virtual environments. This is unglamorous work. It takes time. But a Zero Trust policy engine making access decisions about an asset that isn't in inventory is not making decisions about that asset at all — it simply doesn't exist from the enforcement layer's perspective, which means it exists outside Zero Trust entirely.
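The cross-referencing logic reduces to set arithmetic. This sketch uses invented hostnames and only three sources; a real effort would add firewall logs, CMDB entries, and the manual walkthroughs described above.

```python
# Independent inventory sources. Anything that appears in some sources
# but not others is a candidate for investigation. Hostnames are invented.
dns_records = {"web01", "db01", "payroll-legacy"}
dhcp_leases = {"web01", "db01", "printer-3f"}
cloud_tags  = {"web01", "db01", "batch-worker"}

union = dns_records | dhcp_leases | cloud_tags
seen_everywhere = dns_records & dhcp_leases & cloud_tags

# Assets at least one source knows about and another doesn't:
gaps = sorted(union - seen_everywhere)
print(gaps)  # ['batch-worker', 'payroll-legacy', 'printer-3f']
```

Each name in `gaps` is exactly the kind of asset that exists outside Zero Trust entirely: one system knows about it, the enforcement layer may not.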


Audit the firewall ruleset. This is also unglamorous. Pull the full policy. Sort by last-hit timestamp. Anything that hasn't been matched in eighteen months is a candidate for removal or review. Anything matching on a /16 or broader source range that isn't explicitly justified by a current business requirement should be tightened or eliminated. The goal isn't to achieve perfect least-privilege in a single pass — that's operationally disruptive and probably impossible. The goal is to progressively narrow the standing exceptions so that the Zero Trust enforcement layer covers an increasingly large fraction of actual traffic.
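As a sketch of that triage logic, assuming a ruleset export flattened into invented field names (real vendor exports differ, and last-hit counters need to be enabled before they're trustworthy):

```python
import ipaddress
from datetime import date

# Illustrative ruleset export. Rule IDs, sources, and dates are invented.
rules = [
    {"id": 101, "src": "10.0.0.0/16",   "last_hit": date(2024, 5, 1)},
    {"id": 102, "src": "10.4.2.0/24",   "last_hit": date(2026, 3, 1)},
    {"id": 103, "src": "172.16.0.0/12", "last_hit": date(2026, 2, 10)},
]

def flag_rules(rules, today=date(2026, 3, 18), max_idle_days=548, max_prefix=16):
    """Flag rules unmatched for ~18 months or matching a /16-or-broader source."""
    findings = []
    for r in rules:
        if (today - r["last_hit"]).days > max_idle_days:
            findings.append((r["id"], "stale: no hits in 18 months"))
        if ipaddress.ip_network(r["src"]).prefixlen <= max_prefix:
            findings.append((r["id"], f"broad source: {r['src']}"))
    return findings

for rule_id, reason in flag_rules(rules):
    print(rule_id, "-", reason)
```

Rule 101 trips both checks; rule 102, recently hit and narrowly scoped, stays off the review list. That's the progressive narrowing: each pass shrinks the set of standing exceptions the enforcement layer has to route around.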


Deploy flow telemetry before microsegmenting. You cannot design a microsegmentation policy from the architecture diagram alone, because the architecture diagram is always incomplete. Actual flow data — NetFlow, VPC flow logs, agent-collected telemetry — shows you what's actually communicating. It will reveal dependencies that nobody documented, and probably nobody knew about. Some of these will be legitimate. Some will be evidence of misconfiguration or shadow IT. All of them are decisions that need to be made before you start denying traffic.
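Aggregating flow records into a dependency map is conceptually simple once the telemetry exists. A minimal sketch with invented hosts and ports; real input would be parsed NetFlow exports or VPC flow logs rather than hard-coded tuples.

```python
from collections import defaultdict

# Simplified flow records: (source, destination, destination port).
flows = [
    ("app01", "db01", 5432),
    ("app01", "db01", 5432),
    ("app02", "db01", 5432),
    ("app01", "legacy-ftp", 21),   # an undocumented dependency surfaces
]

def dependency_map(flows):
    """Aggregate observed flows into per-destination sets of talkers."""
    deps = defaultdict(set)
    for src, dst, port in flows:
        deps[(dst, port)].add(src)
    return deps

for (dst, port), talkers in sorted(dependency_map(flows).items()):
    print(f"{dst}:{port} <- {sorted(talkers)}")
```

The `legacy-ftp` entry is the interesting one: it appears in no architecture diagram, and it's precisely the flow that a segmentation policy designed from diagrams alone would break.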


And — this is the part that's easy to skip — build the continuous audit loop. Zero Trust degrades over time if it isn't actively maintained. New services get deployed. Network changes get made under incident pressure and are never reviewed post-incident. Access policies accumulate exceptions. The architecture that was tightly controlled eighteen months ago has developed gaps. Continuous monitoring of policy drift, coupled with a regular review cadence for standing rules and privileged access, is what separates an organization that implemented Zero Trust from one that maintains it.
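Drift detection at its simplest is a set difference between the approved baseline and the current policy. The snapshot contents here are invented, and a real pipeline would normalize rules before comparing, but the shape of the check is this:

```python
# Approved baseline vs. current policy snapshot. Rule strings are invented.
baseline = {"allow app01->db01:5432", "allow lb->app01:443"}
current  = {"allow app01->db01:5432", "allow lb->app01:443",
            "allow any->db01:22"}    # added under incident pressure, never reviewed

added   = current - baseline         # drift: rules nobody approved
removed = baseline - current         # drift: rules that silently disappeared
print("drift, added:", sorted(added))      # ['allow any->db01:22']
print("drift, removed:", sorted(removed))  # []
```

Run on a cadence, this turns "the architecture has developed gaps" from a retrospective discovery into a standing alert.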

The honest assessment

Zero Trust is not a shortcut. That probably needs saying plainly, because it's frequently marketed as one — as the model that lets you transcend the messy legacy of perimeter defense and achieve adaptive, identity-centric security through a handful of strategic tool deployments.


The underlying architecture determines what Zero Trust can actually protect. Flat networks, unreviewed policies, unmanaged non-human identities, and incomplete asset inventories don't become invisible once you've deployed an identity-aware proxy. They become the uncovered ground behind the new enforcement layer. The attacker doesn't need to defeat the Zero Trust controls — they just need to find a path that avoids them entirely. In most environments, those paths are not hard to find.


The uncomfortable truth is that good security still requires the foundational work: network segmentation, access review, policy hygiene, and identity lifecycle management. Zero Trust as a philosophy helps orient those efforts and provides a principled basis for access control decisions that IP-based models couldn't support. But it doesn't replace the work. It just gives the work a better destination.

On Monday morning, the useful question isn't "are we Zero Trust." It's "what percentage of our east-west traffic flows through an enforcement point, and what standing exceptions exist outside that coverage." Start there. The answer is usually instructive.
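That Monday-morning question can be made concrete as a coverage metric. This sketch uses invented flow records and a hypothetical `via_proxy` flag marking whether a flow traversed an enforcement point; in practice that flag would come from joining flow logs against proxy or agent telemetry.

```python
# Fraction of observed east-west flows that passed through an
# enforcement point. All records here are invented for the sketch.
flows = [
    {"src": "app01",   "dst": "db01",    "via_proxy": True},
    {"src": "app02",   "dst": "db01",    "via_proxy": True},
    {"src": "corp-ws", "dst": "db01",    "via_proxy": False},  # legacy path
    {"src": "app01",   "dst": "cache01", "via_proxy": False},
]

covered = sum(1 for f in flows if f["via_proxy"])
coverage = 100 * covered / len(flows)
print(f"east-west coverage: {coverage:.0f}%")  # east-west coverage: 50%
```

The uncovered half is the answer to the second part of the question: those are the standing exceptions, enumerated.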


Written by davidiyanu | Technical Content Writer | DevOps, CI/CD, Cloud & Security, Whitepaper
Published by HackerNoon on 2026/03/18