Enforceable Boundary Contracts for EU-Regulated Infrastructure

Abstract: EU regulation now requires infrastructure that produces continuous, verifiable evidence of its own correctness, not compliance documentation assembled after the fact. This article proposes enforceable boundary contracts as the architectural pattern: governance constraints enforced at the seams between components, with signed evidence emitted as a natural byproduct of operation. It identifies six enforcement boundaries (ingestion, decision, actuation, resource, admission, recovery) and maps them to obligations under NIS2, the Cyber Resilience Act, the AI Act, and the Machinery Regulation.

The compliance architecture gap

The EU regulatory stack (NIS2, the Cyber Resilience Act, the AI Act, GDPR, and the Machinery Regulation) is creating a new category of operational obligation: infrastructure that must produce continuous, verifiable evidence of its own correctness. Not compliance documentation written after the fact, but runtime evidence generated by the infrastructure itself as a byproduct of normal operation.

Most organisations approach this as a documentation problem. They produce policies, fill in assessment templates, and present evidence packages at audit time. This works until the first auditor asks: "How do you know this control was operating correctly between audits?" The honest answer, for most organisations, is that they do not know.

This is not a process failure. It is an architecture failure. If the infrastructure does not enforce its own governance constraints at runtime, compliance is a point-in-time assertion that decays immediately after the audit ends.

The problem with perimeter-based compliance

Traditional information security treats the network perimeter as the primary control boundary. Internal systems are trusted by default. Governance controls are layered on top through policy documents, access reviews, and periodic assessments.

This model fails for EU-regulated infrastructure on three axes.

First, NIS2 and the CRA require continuous risk management, not periodic assessment. Article 21 of NIS2 obliges essential entities to implement measures that are appropriate and proportionate: a standard that implies ongoing operation, not annual review. The CRA requires manufacturers to handle vulnerabilities throughout the product lifecycle, not just at release.

Second, the AI Act introduces obligations that are inherently runtime concerns. Article 9 requires risk management systems that operate throughout the entire lifecycle of high-risk AI systems. Article 15 requires accuracy, robustness, and cybersecurity properties that can only be verified during operation.

Third, modern infrastructure is not perimeter-bounded. Hybrid environments, API-driven architectures, containerised workloads, and AI inference pipelines create a topology where the meaningful security boundaries are not at the network edge but at the seams between components: where data is ingested, where decisions are made, where actions are taken, where resources are allocated.

Boundary enforcement as an alternative

An alternative approach treats governance as an infrastructure concern rather than a documentation concern. Instead of writing policies about what the infrastructure should do, you enforce constraints at the boundaries where components interact and capture evidence of that enforcement as a natural byproduct.

A boundary contract has three elements: an invariant that must hold, a monitor that detects violations, and a handler that responds to violations, including defined degraded modes and safe-halt conditions.

Consider a concrete example. An API endpoint that receives external data for processing by an AI inference pipeline is a boundary between untrusted input and a system that produces consequential outputs. A boundary contract for this endpoint might enforce input schema validation (invariant), request rate limiting (resource protection), provenance logging (evidence capture), and a reject-and-alert response for out-of-bounds inputs (violation handling). If the inference pipeline downstream produces outputs that exceed defined confidence thresholds or diverge from expected distributions, a second boundary contract at the decision point can flag, throttle, or halt the pipeline before the output reaches an actuator or user.
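The shape of such a contract can be sketched in a few lines. This is a minimal illustration, not a real library: the names (BoundaryContract, valid_reading, and so on) and the temperature-range invariant are assumptions chosen to mirror the ingestion example above.

```python
# Illustrative sketch of a boundary contract: an invariant, evidence capture
# on every crossing, and a handler for violations. Names are hypothetical.
import time
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class BoundaryContract:
    name: str
    invariant: Callable[[Any], bool]       # must hold for every crossing
    on_violation: Callable[[Any], None]    # reject-and-alert, throttle, halt...
    evidence: list = field(default_factory=list)  # contemporaneous log

    def enforce(self, payload) -> bool:
        ok = self.invariant(payload)
        # Evidence is captured for every crossing, not only for violations.
        self.evidence.append({"t": time.time(), "contract": self.name, "ok": ok})
        if not ok:
            self.on_violation(payload)
        return ok

# Hypothetical ingestion invariant: schema plus physical range check.
def valid_reading(msg) -> bool:
    return isinstance(msg, dict) and "temp_c" in msg and -40.0 <= msg["temp_c"] <= 125.0

rejected = []
ingest = BoundaryContract("ingestion", valid_reading, rejected.append)

ingest.enforce({"temp_c": 21.5})   # passes; evidence recorded anyway
ingest.enforce({"temp_c": 900.0})  # out of range: rejected and logged
```

The key design point is that the evidence list is populated by the enforcement path itself, so the audit trail cannot drift out of sync with the control's actual behaviour.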

None of these controls require novel technology. Input validation, rate limiting, logging, and circuit breakers are standard infrastructure patterns. What changes is the framing. These are not just operational safeguards; they are governance controls that produce evidence of regulatory compliance as a side effect of normal operation. The log entries, the rejection counts, the circuit breaker state transitions: these constitute the runtime evidence that auditors and regulators are increasingly expecting.

Six enforcement boundaries

In practice, most regulated infrastructure has six categories of boundary where enforcement matters:

Ingestion: where external data enters the system. Invariants cover input validation, range checking, rate limiting, and provenance verification. This is where you stop bad data before it becomes bad decisions.

Decision: where computational processes produce outputs that influence actions or displays. Invariants cover output plausibility, confidence thresholds, divergence detection, and fallback logic. For AI systems under the AI Act, this boundary is where Article 15 properties (accuracy, robustness) are enforced at runtime.

Actuation: where digital commands cross into physical effects or irreversible actions. Invariants cover rate limiting, range clamping, watchdog timers, safe-state defaults, and operator consent verification. This boundary matters most in cyber-physical systems under the Machinery Regulation, but it applies equally to any system where a software output triggers a consequential real-world action.

Resource: where computational, memory, network, or energy resources are allocated. Invariants cover quota enforcement, priority scheduling, starvation prevention, and graceful degradation. This is where you prevent one workload from consuming resources that another, more critical workload needs.

Admission: where new components, configurations, or workloads are introduced into the running system. Invariants cover compatibility verification, rollback readiness, signature verification, and canary gates. For CRA-scope products, this is where SBOM verification and signed artifact admission happen.

Recovery: where the system transitions from a detected fault to a known-safe state. Invariants cover safe-halt conditions, state preservation, evidence capture, and restart verification.
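One way to make this taxonomy concrete in an architecture is to enumerate the boundaries explicitly and require that each one registers at least one enforceable invariant. The enum and the invariant names below are purely illustrative assumptions, not a prescribed schema.

```python
# Illustrative registry: the six enforcement boundaries, each mapped to
# example invariant names. All identifiers here are assumptions.
from enum import Enum, auto

class Boundary(Enum):
    INGESTION = auto()
    DECISION = auto()
    ACTUATION = auto()
    RESOURCE = auto()
    ADMISSION = auto()
    RECOVERY = auto()

INVARIANTS = {
    Boundary.INGESTION: ["schema_valid", "rate_within_limit", "provenance_known"],
    Boundary.DECISION:  ["confidence_above_threshold", "output_plausible"],
    Boundary.ACTUATION: ["command_in_physical_range", "watchdog_alive"],
    Boundary.RESOURCE:  ["quota_respected", "no_starvation"],
    Boundary.ADMISSION: ["artifact_signature_valid", "sbom_verified", "rollback_ready"],
    Boundary.RECOVERY:  ["safe_state_reachable", "evidence_flushed"],
}

# Architectural rule: no boundary may exist without an enforceable invariant.
assert all(INVARIANTS[b] for b in Boundary)
```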

Evidence by design

The EU regulatory framework is converging on a common expectation: organisations must demonstrate, not just assert, that their controls are working. NIS2 requires incident reporting within defined timelines, which presupposes that incidents are detected promptly. The CRA requires vulnerability handling processes, which presupposes that vulnerabilities are tracked and triaged systematically. The AI Act requires monitoring of high-risk systems in production, which presupposes that monitoring infrastructure exists and produces usable output.

An infrastructure architecture that enforces boundary contracts at its critical seams automatically produces the evidence these regulations require: timestamped logs of control decisions, integrity-verified configuration snapshots, alert histories with response records, and change audit trails. The evidence pipeline is not a separate compliance workstream. It is an inherent property of the enforcement architecture.

This approach also addresses a practical challenge that many organisations face: evidence integrity. Audit evidence generated retrospectively (assembled from logs after the fact, summarised from memory, or reconstructed from incomplete records) is inherently weaker than evidence generated by the control itself at the time of operation. Boundary enforcement produces contemporaneous evidence by design, with timestamps, signatures, and audit trails that are difficult to fabricate or alter.
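Contemporaneous, tamper-evident evidence can be produced with standard primitives. The sketch below signs each record with an HMAC at the moment the control fires; the key, the record fields, and the function names are illustrative assumptions, and in practice the key would live in an HSM or KMS rather than in code.

```python
# Illustrative tamper-evident evidence record: timestamped and HMAC-signed
# at the time of operation. Key management is out of scope for this sketch.
import hashlib
import hmac
import json
import time

KEY = b"demo-key-from-hsm-or-kms"  # placeholder; never hard-code in practice

def signed_record(contract: str, ok: bool, detail: dict) -> dict:
    body = {"t": time.time(), "contract": contract, "ok": ok, "detail": detail}
    canon = json.dumps(body, sort_keys=True).encode()
    body["sig"] = hmac.new(KEY, canon, hashlib.sha256).hexdigest()
    return body

def verify(record: dict) -> bool:
    body = {k: v for k, v in record.items() if k != "sig"}
    canon = json.dumps(body, sort_keys=True).encode()
    return hmac.compare_digest(
        record["sig"], hmac.new(KEY, canon, hashlib.sha256).hexdigest()
    )

rec = signed_record("ingestion", False, {"reason": "out_of_range"})
assert verify(rec)                   # intact record verifies
rec["detail"]["reason"] = "edited"   # retrospective tampering...
assert not verify(rec)               # ...is detectable
```

The point is not the specific cryptography but the timing: the signature is attached when the control operates, so evidence assembled later cannot silently substitute for it.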

Degraded modes and safe defaults

EU regulation increasingly requires that systems fail safely. The Machinery Regulation explicitly requires that AI-equipped machinery default to safe states when AI components fail. The AI Act requires that high-risk systems include mechanisms to prevent or minimise risks when accuracy, robustness, or cybersecurity properties degrade.

This maps directly to the boundary enforcement model. Every boundary contract includes not just the invariant and the monitor, but a defined degraded mode: what the system does when the invariant cannot be maintained. A well-designed degraded mode preserves safety properties while sacrificing performance or functionality. A safe-halt condition defines when even degraded operation is no longer acceptable and the system must stop.
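A degraded mode and a safe-halt condition can be expressed as part of the contract itself. The sketch below is a hypothetical decision boundary: on a confidence violation it falls back to a conservative default, and after a fixed number of consecutive violations it refuses to continue even in degraded mode. The threshold, the default, and the class names are all assumed for illustration.

```python
# Illustrative contract with a degraded mode and a safe-halt condition.
# All names and numeric thresholds here are assumptions.
class SafeHalt(Exception):
    """Raised when even degraded operation is no longer acceptable."""

class DecisionBoundary:
    MAX_CONSECUTIVE_FAILURES = 3
    SAFE_DEFAULT = 0.0  # conservative output, e.g. "no actuation"

    def __init__(self):
        self.failures = 0

    def gate(self, output: float, confidence: float) -> float:
        if confidence >= 0.8:      # invariant holds: pass through
            self.failures = 0
            return output
        self.failures += 1         # invariant violated
        if self.failures >= self.MAX_CONSECUTIVE_FAILURES:
            raise SafeHalt("confidence invariant unrecoverable; halting")
        return self.SAFE_DEFAULT   # degraded mode: safe default, reduced function

b = DecisionBoundary()
assert b.gate(42.0, confidence=0.95) == 42.0  # normal operation
assert b.gate(42.0, confidence=0.50) == 0.0   # degraded mode
```

Note that the degraded mode and the halt condition are explicit, reviewable code paths, which is precisely what makes the fail-safe behaviour demonstrable rather than merely asserted.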

The absence of a defined degraded mode is itself a compliance risk under EU regulation. If you cannot describe what your system does when a control fails, you cannot demonstrate that it fails safely. This is not an edge case. It is a design requirement.

Where safety meets security

For cyber-physical systems, and increasingly for any system where software outputs drive consequential actions, safety and security are not independent. A security failure can cause a safety hazard. A safety mechanism can create a security vulnerability.

This interaction is now explicitly acknowledged in EU regulation. The Machinery Regulation's EHSR 1.1.9 requires that control systems be protected against corruption: the Regulation's cybersecurity hook. The CRA requires that products be resilient against attacks that could affect safety. IEC 63069 provides a framework for addressing the safety-security interface in industrial systems.

The boundary enforcement model handles this naturally. A boundary contract at the interface between a safety-critical controller and the network it communicates over enforces both safety invariants (output within physical limits, timing deadline met) and security invariants (authenticated commands only, rate-limited, logged). The same enforcement point serves both governance domains and produces evidence for both audit trails.
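A single admission check can evaluate both invariant families. The sketch below is illustrative: a command to a hypothetical controller must carry a valid MAC (security) and a value within an assumed physical range (safety), and both checks fire at the same enforcement point.

```python
# Illustrative enforcement point serving both domains: a controller command
# must be authenticated (security) and within physical limits (safety).
# Key, range, and function names are assumptions for this sketch.
import hashlib
import hmac

KEY = b"controller-shared-key"  # placeholder for a provisioned key

def sign(value: float) -> dict:
    mac = hmac.new(KEY, str(value).encode(), hashlib.sha256).hexdigest()
    return {"value": value, "mac": mac}

def authentic(cmd: dict) -> bool:            # security invariant
    mac = hmac.new(KEY, str(cmd["value"]).encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(cmd.get("mac", ""), mac)

def within_limits(cmd: dict) -> bool:        # safety invariant
    return 0.0 <= cmd["value"] <= 100.0      # assumed physical range

def admit(cmd: dict) -> bool:
    # Both invariants are checked at one boundary; one evidence stream.
    return authentic(cmd) and within_limits(cmd)

assert admit(sign(55.0))                       # authenticated and in range
assert not admit(sign(900.0))                  # authenticated but unsafe
assert not admit({"value": 55.0, "mac": ""})   # in range but unauthenticated
```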

Practical implications

For infrastructure teams operating in EU-regulated environments, the shift from documentation-based compliance to enforcement-based compliance has several practical implications.

Governance controls must be testable. If a control cannot be triggered in a test environment to verify that it detects violations, produces evidence, and activates its degraded mode, it is not a verifiable control.

Change management must be evidence-producing. Every infrastructure change should produce a record that includes the pre-change state, the change itself, the post-change verification, and the rollback procedure. This is standard practice: the difference is treating the change record as regulatory evidence, not just operational documentation.
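A change record of this shape is simple to produce mechanically. The sketch below hashes the pre- and post-change configuration so the record attests to concrete states rather than free-text descriptions; the field names are assumptions for illustration.

```python
# Illustrative evidence-producing change record: pre-state hash, the change,
# post-change state hash, and a rollback reference. Field names are assumed.
import hashlib
import json
import time

def state_hash(config: dict) -> str:
    return hashlib.sha256(json.dumps(config, sort_keys=True).encode()).hexdigest()

def change_record(pre: dict, post: dict, change: str, rollback: str) -> dict:
    return {
        "t": time.time(),
        "pre_state": state_hash(pre),    # what we changed from
        "change": change,                # what was done
        "post_state": state_hash(post),  # what was verified afterwards
        "rollback": rollback,            # how to get back
    }

rec = change_record(
    pre={"replicas": 3}, post={"replicas": 5},
    change="scale deployment to 5 replicas",
    rollback="scale deployment to 3 replicas",
)
assert rec["pre_state"] != rec["post_state"]
```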

Monitoring must be continuous and auditable. Periodic checks are insufficient for continuous compliance obligations. The monitoring infrastructure itself becomes a governance-critical system that must be assured.

Supply chain transparency must be maintained. Both NIS2 and the CRA impose supply chain security obligations. An SBOM is not a one-time deliverable: it is a living artifact that must be maintained, verified, and linked to vulnerability intelligence feeds.

Conclusion

The EU regulatory framework is creating a structural demand for infrastructure that governs itself: infrastructure that enforces its own constraints at runtime and produces evidence of that enforcement as a natural byproduct of operation. NIS2 obligations are in effect. The CRA application dates are approaching. The AI Act is being implemented. The Machinery Regulation applies from January 2027.

Organisations that treat compliance as a documentation exercise will find themselves producing evidence retrospectively, at increasing cost, with decreasing credibility. Organisations that build enforcement into their infrastructure boundaries will produce compliance evidence as an operational side effect: continuously, automatically, and with inherent integrity.

The technology to do this exists. The regulatory obligation to do it is arriving. The gap is architectural, not technological.