Access control in enterprise and government networks is not a binary gate. It is not simply "allow" or "deny." The real requirement is far more nuanced: allow this user, on this device, from this network, to access this specific application — and restrict everything else. Building a policy engine that evaluates those conditions in real time, across thousands of concurrent sessions, without adding perceptible latency is one of the harder problems in network security engineering.
At IVO, we designed our policy-based access control architecture to meet exactly this requirement. This post covers the evaluation pipeline, how we handle rule conflicts, and what it takes to enforce policy at scale without becoming a bottleneck.
The Anatomy of an Access Decision
Every access decision in IVO's platform evaluates four dimensions simultaneously:
User identity — who is requesting access, verified through authentication (certificate-based, RADIUS, LDAP, or SAML). Identity is not just a username; it includes group memberships, role assignments, and any attributes pulled from the identity provider at authentication time.
Device posture — the security state of the connecting device. Is the OS patched? Is disk encryption enabled? Is the device managed or unmanaged? Is endpoint protection running and reporting healthy? Device posture data is collected at connection time and can be re-evaluated periodically throughout the session.
Network location — where the connection originates. Internal network, known partner network, public internet, or a flagged geographic region. Network location is determined by source IP classification against administrator-defined network objects.
Application context — what the user is trying to reach. This is not just an IP address and port; it includes protocol identification, destination FQDN (for HTTP/HTTPS traffic), and application-layer classification where deep packet inspection is available.
These four dimensions are evaluated together for every session. The policy engine does not walk the rule set sequentially and stop at the first match — it evaluates all applicable rules and resolves the result using a deterministic conflict resolution strategy.
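A minimal sketch of what such a four-dimensional decision input might look like. The type and field names here are illustrative, not IVO's actual schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UserIdentity:
    username: str
    groups: frozenset      # group memberships pulled from the identity provider
    auth_method: str       # e.g. "certificate", "radius", "ldap", "saml"

@dataclass(frozen=True)
class DevicePosture:
    os_patched: bool
    disk_encrypted: bool
    managed: bool
    endpoint_protection_healthy: bool

@dataclass(frozen=True)
class AccessRequest:
    user: UserIdentity
    device: DevicePosture
    network_location: str  # e.g. "internal", "partner", "public"
    application: str       # protocol / FQDN / app-layer classification

# A contractor on an unmanaged personal device, connecting from the internet:
req = AccessRequest(
    user=UserIdentity("alice", frozenset({"contractors"}), "saml"),
    device=DevicePosture(True, True, False, True),
    network_location="public",
    application="collab-portal",
)
```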
The Policy Evaluation Pipeline
When a new session is established or an existing session requests access to a new resource, the policy engine executes a three-stage pipeline.
Stage 1: Attribute Collection. The engine gathers the current values for all policy-relevant attributes. User identity attributes are cached from the authentication event. Device posture attributes are pulled from the most recent posture assessment. Network location is derived from the session's source address. Application context is determined by the traffic classifier.
The critical engineering decision here was caching strategy. User and device attributes change infrequently — typically only when a device is re-assessed or a user's group membership changes. Network and application attributes change with every new flow. We cache the slow-changing attributes and evaluate the fast-changing ones in real time, which reduces the per-decision lookup cost significantly.
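The split between cached, slow-changing attributes and per-flow, fast-changing ones can be sketched as follows. This is a simplified illustration — the cache keys and `classify_network` are assumptions, not IVO's implementation:

```python
class AttributeCollector:
    """Caches slow-changing attributes; resolves fast-changing ones per flow."""

    def __init__(self):
        self._identity_cache = {}  # session_id -> identity attrs (set at auth)
        self._posture_cache = {}   # device_id -> posture attrs (set at assessment)

    def on_authenticated(self, session_id, identity_attrs):
        # Cached once per authentication event, not looked up per decision.
        self._identity_cache[session_id] = identity_attrs

    def on_posture_assessed(self, device_id, posture_attrs):
        # Refreshed only when the device is (re-)assessed.
        self._posture_cache[device_id] = posture_attrs

    def collect(self, session_id, device_id, src_ip, classify_app):
        # Slow-changing dimensions served from cache; fast-changing ones
        # (network, application) computed fresh for every new flow.
        return {
            "identity": self._identity_cache[session_id],
            "posture": self._posture_cache[device_id],
            "network": classify_network(src_ip),
            "application": classify_app(),
        }

def classify_network(src_ip):
    # Stand-in for classification against administrator-defined network objects.
    return "internal" if src_ip.startswith("10.") else "public"
```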
Stage 2: Rule Matching. The collected attributes are evaluated against the policy rule set. Each rule specifies conditions across one or more attribute dimensions and an action (allow, deny, or restrict). Rules are stored in a compiled, indexed structure that allows the engine to quickly identify which rules are potentially applicable based on the session's attributes.
We use a multi-dimensional index that pre-filters rules by user group and network location — the two dimensions that most effectively reduce the candidate rule set. For a typical enterprise deployment with 500–2,000 rules, this reduces the evaluation set to fewer than 50 candidate rules per decision, which keeps evaluation time well under a millisecond.
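One way to sketch that kind of pre-filtering index, keyed on the two most selective dimensions. The wildcard convention and rule shape here are assumptions for illustration:

```python
from collections import defaultdict

class RuleIndex:
    """Pre-filters candidate rules by (user group, network location)."""

    def __init__(self, rules):
        self._by_key = defaultdict(list)
        for rule in rules:
            # A rule missing a dimension applies to all values ("*").
            for group in rule.get("groups", ["*"]):
                for loc in rule.get("locations", ["*"]):
                    self._by_key[(group, loc)].append(rule)

    def candidates(self, groups, location):
        # Look up every (group, location) combination the session could match,
        # including wildcards, and deduplicate the result.
        seen, out = set(), []
        keys = [(g, l) for g in list(groups) + ["*"] for l in (location, "*")]
        for key in keys:
            for rule in self._by_key.get(key, []):
                if id(rule) not in seen:
                    seen.add(id(rule))
                    out.append(rule)
        return out
```

Only the candidate set — not the full rule set — goes on to full condition evaluation.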
Stage 3: Conflict Resolution. When multiple rules match a single access request — and in any non-trivial deployment, they will — the engine must produce a single, deterministic result.
Our conflict resolution follows a strict precedence hierarchy:
- Explicit deny overrides everything. If any matching rule explicitly denies access, the result is deny. This is non-negotiable in government deployments where deny rules often represent compliance boundaries.
- Among non-deny rules, the most specific rule wins. Specificity is calculated by counting the number of constrained attribute dimensions. A rule that specifies user group + device posture + network location + application is more specific than a rule that only specifies user group + application.
- If two rules have equal specificity, the rule with the higher administrator-assigned priority wins.
- If all else is equal, the default action (configurable per deployment, but typically deny) applies.
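The precedence hierarchy above can be sketched as a single resolution function. This is an illustrative reading of the rules as stated, not IVO's implementation:

```python
def resolve(matching_rules, default_action="deny"):
    """Deterministic conflict resolution over the rules that matched."""
    if not matching_rules:
        return default_action
    # 1. Explicit deny overrides everything.
    if any(r["action"] == "deny" for r in matching_rules):
        return "deny"
    # 2. Most specific rule wins: count constrained attribute dimensions.
    # 3. Ties broken by administrator-assigned priority (higher wins).
    def rank(rule):
        return (len(rule.get("constraints", ())), rule.get("priority", 0))
    top = max(rank(r) for r in matching_rules)
    winners = [r for r in matching_rules if rank(r) == top]
    # 4. If equally ranked rules still disagree, the default action applies.
    actions = {r["action"] for r in winners}
    return winners[0]["action"] if len(actions) == 1 else default_action
```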
This hierarchy is deterministic — the same inputs always produce the same output — which is essential for audit and compliance. Administrators can run a policy simulation tool that shows exactly which rules would match a given set of attributes and how conflicts would resolve, before deploying a rule change to production.
Scaling Policy Enforcement Without Adding Latency
Evaluating policy for every flow in a deployment with thousands of concurrent sessions requires careful attention to performance.
The first optimization is decision caching. Once a policy decision is made for a specific combination of attributes, the result is cached for the lifetime of the session (or until a relevant attribute changes). Subsequent packets in the same flow bypass the policy engine entirely and are forwarded based on the cached decision. This means the policy engine is invoked once per flow, not once per packet.
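A minimal sketch of per-flow decision caching — the engine (the slow path) runs once per flow, and repeated lookups hit the cache:

```python
class DecisionCache:
    """Caches the policy decision for each flow for the session's lifetime."""

    def __init__(self, evaluate):
        self._evaluate = evaluate  # full policy evaluation (slow path)
        self._cache = {}           # flow_key -> cached decision
        self.evaluations = 0       # how often the engine was actually invoked

    def decide(self, flow_key, attributes):
        if flow_key not in self._cache:
            self.evaluations += 1  # engine invoked once per flow, not per packet
            self._cache[flow_key] = self._evaluate(attributes)
        return self._cache[flow_key]

    def invalidate_flows(self, flow_keys):
        # Called when a relevant attribute changes for these flows.
        for key in flow_keys:
            self._cache.pop(key, None)
```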
The second optimization is incremental re-evaluation. When an attribute changes — for example, a device posture reassessment reports that endpoint protection has been disabled — the engine does not re-evaluate every active session. It identifies only the sessions whose cached decisions depended on the changed attribute and re-evaluates those. In practice, a single device posture change triggers re-evaluation of the sessions belonging to that one device, not the entire session table.
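The dependency tracking that makes this targeted re-evaluation possible can be sketched as a reverse index from attributes to sessions (illustrative attribute keys):

```python
from collections import defaultdict

class DependencyIndex:
    """Maps each attribute (e.g. a device id) to the sessions depending on it."""

    def __init__(self):
        self._sessions_by_attr = defaultdict(set)

    def record(self, session_id, attr_keys):
        # Called when a decision is cached: remember which attributes it used.
        for key in attr_keys:
            self._sessions_by_attr[key].add(session_id)

    def affected(self, attr_key):
        # When attr_key changes, only these sessions need re-evaluation.
        return set(self._sessions_by_attr.get(attr_key, ()))
```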
The third optimization is rule compilation. When administrators modify the policy rule set, the engine compiles the new rules into an optimized evaluation structure before swapping it into the live path. This compilation step — which takes tens of milliseconds — ensures that the per-decision evaluation cost remains constant regardless of how many times the rule set has been edited.
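The compile-then-swap pattern can be sketched as follows. Here "compilation" is just a one-time pre-sort — a stand-in for the real optimized evaluation structure:

```python
import threading

class PolicyEngine:
    """Compiles edited rules off the hot path, then swaps them in atomically."""

    def __init__(self, rules):
        self._active = self._compile(rules)
        self._lock = threading.Lock()

    @staticmethod
    def _compile(rules):
        # Done once per edit, so evaluation never pays a re-sorting cost.
        return tuple(sorted(rules, key=lambda r: -r.get("priority", 0)))

    def update_rules(self, rules):
        compiled = self._compile(rules)  # built outside the live path
        with self._lock:
            self._active = compiled      # constant-time swap into the live path

    def active_rules(self):
        return self._active
```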
The result is policy evaluation that adds less than 100 microseconds to the first packet of a new flow, and zero additional latency to subsequent packets.
Government Deployment Requirements
Government deployments introduce additional requirements that shaped our policy engine design.
Audit completeness. Every access decision must be logged with the full set of attributes that were evaluated, the rules that matched, and the conflict resolution path that produced the final result. This is not optional — it is a compliance requirement. Our logging pipeline captures this data without impacting decision latency by writing to an asynchronous, append-only log buffer that is flushed to persistent storage on a separate thread.
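The asynchronous, append-only logging pattern can be sketched with a queue and a flush thread. This is illustrative; `sink` stands in for the persistent storage writer:

```python
import queue
import threading

class AuditLogger:
    """Append-only audit buffer flushed to storage on a separate thread."""

    def __init__(self, sink):
        self._buffer = queue.Queue()
        self._sink = sink  # e.g. a persistent storage writer
        self._worker = threading.Thread(target=self._drain, daemon=True)
        self._worker.start()

    def log_decision(self, record):
        # Hot path: enqueue only, never block on storage I/O.
        self._buffer.put(record)

    def _drain(self):
        while True:
            record = self._buffer.get()
            if record is None:  # sentinel: shut down cleanly
                break
            self._sink(record)

    def close(self):
        self._buffer.put(None)
        self._worker.join()
```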
Separation of duties. In many government environments, the personnel who define policy rules are not the same personnel who manage network infrastructure. Our role-based administration model allows policy authors to create, edit, and simulate rules without having access to appliance configuration, network topology, or user credentials.
Policy versioning. Every change to the rule set is versioned, and the engine can replay any historical version to determine what decision would have been made at a specific point in time. This supports after-the-fact investigations and compliance audits.
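A minimal sketch of versioned rule sets with historical replay (illustrative; the evaluation callback is a stand-in for the full engine):

```python
class VersionedRuleStore:
    """Keeps every rule-set version so historical decisions can be replayed."""

    def __init__(self):
        self._versions = []  # append-only list of (version, frozen rules)

    def commit(self, rules):
        version = len(self._versions) + 1
        self._versions.append((version, tuple(rules)))
        return version

    def rules_at(self, version):
        for v, rules in self._versions:
            if v == version:
                return rules
        raise KeyError(version)

    def replay(self, version, evaluate, attributes):
        # What would the engine have decided under this historical version?
        return evaluate(self.rules_at(version), attributes)
```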
Posture enforcement for unmanaged devices. Government agencies increasingly support contractor and partner access from devices that are not under agency management. Our posture assessment framework supports agentless posture checks — evaluating device characteristics through protocol-level signals — in addition to agent-based assessments for managed endpoints.
The Restrict Action: More Than Allow or Deny
Most policy engines support two actions: allow or deny. Ours supports a third: restrict. A restrict action allows the session but applies constraints — bandwidth limits, protocol restrictions, time-of-day access windows, or redirection to an isolated network segment.
This is valuable for handling edge cases that neither full allow nor full deny addresses well. A contractor connecting from a personal device might be allowed to access the collaboration portal but restricted from accessing file shares or internal APIs. An employee connecting from an unusual location might be allowed access but with all traffic routed through additional inspection.
The restrict action is implemented as a set of traffic engineering directives that are attached to the session's forwarding entry. The forwarding plane applies these directives without additional policy engine consultation, so there is no per-packet overhead.
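One way to sketch directives attached to a session's forwarding entry — once set, the forwarding plane reads them without consulting the policy engine again. The directive names are assumptions for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class ForwardingEntry:
    """Per-session forwarding state consulted on the fast path."""
    session_id: str
    action: str = "allow"
    directives: dict = field(default_factory=dict)

def apply_restrict(entry, bandwidth_kbps=None, allowed_protocols=None,
                   segment=None):
    # Attach restrict-action constraints as traffic engineering directives.
    entry.action = "restrict"
    if bandwidth_kbps is not None:
        entry.directives["bandwidth_kbps"] = bandwidth_kbps
    if allowed_protocols is not None:
        entry.directives["allowed_protocols"] = set(allowed_protocols)
    if segment is not None:
        entry.directives["segment"] = segment
    return entry
```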
What This Means for Organizations
A policy engine that combines identity, posture, location, and application context into real-time access decisions is the foundation of any meaningful zero-trust architecture. But the engineering challenge is not just making the right decision — it is making it fast enough that users never notice, reliably enough that auditors can verify every decision, and flexibly enough that administrators can express the policies their organization actually needs.
IVO's policy engine was built for exactly this intersection of correctness, performance, and operational flexibility — because enterprise and government networks cannot afford to compromise on any of them.
Ready to see how granular access control works in practice? Call +1 (650) 286-1335 or start your 30-day free trial today.