On May 4, 2026, AWS open-sourced a project that represents a significant step forward in securing agentic AI deployments. The project, Trusted Remote Execution (Rex), provides runtime guardrails that intercept every system operation an AI-generated script attempts and evaluate it against a Cedar policy defined by the host owner. While this achievement is real and important, it leaves untouched a critical layer of security that regulatory frameworks require: data security.
What AWS Solved: Runtime Layer Security
The mechanics of Rex are elegantly simple. Scripts run in Rhai, a lightweight embedded language that has no built-in access to the operating system. Every read, write, or open operation is intercepted by a Rex SDK call, which evaluates a Cedar policy before permitting the underlying system call. If the policy denies the action, the script receives an ACCESS_DENIED_EXCEPTION and the operation never reaches the kernel. The script and the policy are versioned separately, ensuring the host owner retains control.
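The pattern is easy to see in miniature. The sketch below is not the Rex SDK (which embeds Rhai and evaluates real Cedar policies); it is a minimal Python illustration of the same gate-before-syscall structure, with a hard-coded rule standing in for the policy engine. All names here are illustrative.

```python
class AccessDeniedException(Exception):
    """Raised when the host-owner policy denies the requested operation."""

# Toy stand-in for a versioned Cedar policy: the host owner allows reads
# only beneath one directory. In Rex the decision comes from evaluating
# the actual Cedar policy; this rule exists purely for illustration.
ALLOWED_READ_PREFIX = "/data/allowed/"

def policy_permits(action: str, resource: str) -> bool:
    return action == "read" and resource.startswith(ALLOWED_READ_PREFIX)

def guarded_read(path: str) -> bytes:
    """The open() never reaches the kernel unless the policy allows it."""
    if not policy_permits("read", path):
        raise AccessDeniedException(f"read denied for {path}")
    with open(path, "rb") as f:
        return f.read()
```

The essential property is that the script cannot reach the operating system except through the gate; denying the action means the underlying call simply never happens.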
AWS explicitly designed Rex to contain three specific failure modes in agentic AI: hallucinated code, prompt injection, and overly eager task interpretation. None of these is hypothetical. Each is a documented attack class that the leading AI labs have publicly acknowledged they cannot fully prevent. OpenAI stated in late 2025 that prompt injection 'is unlikely to ever be fully solved.' Anthropic acknowledged that 'prompt injection is far from a solved problem, particularly as models take more real-world actions.'
The architectural inversion is profound. Most agentic sandboxes attempt to bound what the agent generates; Rex instead bounds what any host operation the agent invokes can actually accomplish. This shift in where trust is allowed to live is now encoded in production code, and it amounts to a hyperscaler endorsement of an architecture that treats prompts as instructions rather than as access controls.
Agentic AI has evolved rapidly since the emergence of large language models. Early implementations treated AI as a query-response system with no direct access to infrastructure. But as models gained the ability to execute code, access files, and interact with APIs, the attack surface expanded dramatically. The first generation of security solutions focused on input sanitization and output filtering, but these proved inadequate against sophisticated injection attacks. Rex represents a second-generation approach that focuses on the runtime layer.
The implications for enterprise architecture are significant. Vendor security questionnaires, internal architecture reviews, and audit evidence packages can now reference a working open-source implementation of this pattern. The runtime layer has a viable solution.
What AWS Did Not Solve: Data Security
Now the part that changes how security and compliance leaders should read this announcement. Rex governs system calls. It does not govern data security. This distinction is not a footnote; it is the difference between protecting the host from the agent and protecting the data from misuse, and it is the difference between passing a runtime audit and passing a regulatory one.
A Cedar policy can permit file_system::Action::"read" on a customer-records file. That is the right policy at the kernel layer. It is the wrong policy at the data layer, which must ask a different set of questions: Is this read happening on behalf of a specific human user with the right authorization? Is the requester operating within the scope of the engagement? Are the records returned limited to the minimum necessary for the task? Are any records subject to a deletion request, legal hold, or jurisdictional restriction? Is the access logged in a tamper-evident form with sufficient detail to reconstruct authorization years later?
Rex does not answer those questions. Cedar policies on system calls cannot answer them. They live one layer below the runtime, where the data lives, and that layer is where data security must be enforced. Without data-layer controls, an organization can run every agentic workload through Rex, prove that no script ever exceeded its host permissions, and still be unable to demonstrate to a regulator that the right person authorized the right access to the right data for the right purpose.
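To make the gap concrete in code, the sketch below expresses the questions from the paragraph above as a data-layer authorization check. Every name is a hypothetical stand-in; nothing here comes from Rex, which is precisely the point.

```python
from dataclasses import dataclass

@dataclass
class DataAccessRequest:
    acting_user: str       # the human the agent is acting on behalf of
    purpose: str           # the declared purpose of this access
    engagement_scope: set  # purposes the engagement actually authorizes
    record_ids: list       # records the agent wants to read
    records_on_hold: set   # records under deletion request or legal hold

def data_layer_decision(req: DataAccessRequest) -> tuple:
    """Ask the questions a syscall-level policy cannot: who, why, which records."""
    reasons = []
    if not req.acting_user:
        reasons.append("no authenticated human principal behind the agent")
    if req.purpose not in req.engagement_scope:
        reasons.append("purpose falls outside the engagement scope")
    held = [r for r in req.record_ids if r in req.records_on_hold]
    if held:
        reasons.append(f"records under deletion request or legal hold: {held}")
    return (not reasons, reasons)
```

Note that every input to this decision is data semantics: the acting user, the purpose, the hold status of individual records. None of it is visible at the system-call boundary where Rex operates.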
This matters operationally and legally. GDPR Article 5 demands purpose limitation, data minimization, storage limitation, and accountability. HIPAA's minimum-necessary standard requires controls on which data the agent is permitted to access, not just which system calls the agent's script is allowed to make. CMMC Level 2 access control families assume enforced authorization for AI access to controlled unclassified information. None of these frameworks is satisfied by runtime gating alone, and none of them is addressed by Rex.
Regulatory scrutiny of AI systems has intensified globally. The EU AI Act, finalized in 2024, classifies certain AI applications as high-risk and mandates transparency, accountability, and human oversight. The US Executive Order on AI, updated in 2025, requires federal agencies to implement AI safety and security standards. The UK's AI Safety Institute has published guidance on evaluating frontier models. All of these regulatory instruments assume that data access controls are in place.
The Numbers Make the Gap Concrete
A comprehensive industry survey found that 63% of organizations cannot enforce purpose limitations on AI agents, 60% cannot quickly terminate a misbehaving agent, 55% cannot isolate AI systems from broader network access, and 54% cannot validate AI inputs. Some of these gaps are exactly what Rex closes at the runtime layer: termination, isolation, input validation. Others are not. Purpose limitation is a data-semantics control that cannot be enforced on a system call; it must be enforced on the data.
Only 43% of organizations have a centralized AI data gateway. The remaining 57% are running agentic AI through fragmented or partial data-layer controls. Adding Rex to that 57% closes the runtime gap and leaves the data gap where it was. The audit-defensible layer is not the kernel; it is the data.
A joint advisory from Five Eyes nations released in April 2026 named five risk categories for agentic AI: privilege, design and configuration, behavior, structural, and accountability. Rex addresses parts of two. It does not address structural risks across multi-agent systems. It does not address the accountability category, which auditors and regulators will care about most, because accountability is evidence about who accessed what data, on whose behalf, and for what purpose. A system call audit log does not produce that evidence; a data-layer audit log does.
The evolution of AI security has followed a pattern similar to cloud security. Early adopters focused on network perimeter controls, then moved to identity and access management, and finally to data-centric security. Agentic AI is following the same trajectory. The runtime layer is the equivalent of network segmentation; the data layer is the equivalent of encryption and access governance. Both are necessary, but only one addresses the core asset.
The Architecture Data Security Requires
The architecture that holds up under regulatory enforcement is layered, and the layers are not interchangeable. Runtime controls like Rex enforce what the host will permit. Identity controls enforce who the agent is acting on behalf of. Data-layer controls—attribute-based access control evaluated against classification, jurisdiction, consent, and purpose—enforce what data the agent is allowed to touch. Each layer addresses a different failure mode. None of them substitutes for the others.
The data layer is where every access is authenticated against the human user the agent is acting for, where every authorization decision is evaluated against attribute-based policies that respect classification, jurisdiction, and consent, and where every operation produces a tamper-evident audit record that outlives the model that initiated it.
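A minimal sketch of that attribute-based evaluation, assuming classification, jurisdiction, and consent attributes travel with each record; the attribute names, the ordering of classification levels, and the record shape are all assumptions made for illustration.

```python
# Order matters: a clearance admits its own level and everything below it.
CLASSIFICATION_ORDER = ["public", "internal", "confidential", "restricted"]

def abac_permits(user_clearance: str, user_jurisdiction: str,
                 record: dict, purpose: str) -> bool:
    """Allow only when classification, jurisdiction, and consent all line up."""
    classification_ok = (CLASSIFICATION_ORDER.index(record["classification"])
                         <= CLASSIFICATION_ORDER.index(user_clearance))
    jurisdiction_ok = user_jurisdiction in record["permitted_jurisdictions"]
    consent_ok = purpose in record["consented_purposes"]
    return classification_ok and jurisdiction_ok and consent_ok
```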
Implementing data-layer controls for agentic AI requires a different approach than traditional access management. Policies must be dynamic, evaluating context such as the agent's purpose, the sensitivity of the data, the user's role, and the regulatory jurisdiction. Consent management becomes critical, particularly in scenarios where agents process personal data across borders. Audit logs must capture not just the system call but the semantic intent, the data elements accessed, and the policy decision that authorized the access.
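One common way to get tamper evidence is hash chaining over an append-only log, sketched below. The field names mirror the paragraph above but are illustrative rather than any standard schema; any retroactive edit to a prior entry breaks every hash that follows it.

```python
import hashlib
import json
import time

def append_audit_record(log: list, *, acting_user: str, agent_id: str,
                        purpose: str, data_elements: list, decision: str) -> dict:
    """Append a hash-chained audit record to an append-only log."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    record = {
        "timestamp": time.time(),
        "acting_user": acting_user,      # the human behind the agent
        "agent_id": agent_id,            # which agent instance acted
        "purpose": purpose,              # the semantic intent, not the syscall
        "data_elements": data_elements,  # the data actually touched
        "decision": decision,            # the policy decision that authorized it
        "prev_hash": prev_hash,          # links this record to its predecessor
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    log.append(record)
    return record
```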
Several architectural patterns have emerged for data-layer security in agentic AI. The proxy pattern places a data gateway between the agent and data sources, intercepting every request and evaluating it against policies before forwarding. The sidecar pattern attaches a lightweight policy engine to each agent instance, providing granular control without centralized bottlenecks. The mesh pattern uses a service mesh to route all data traffic through policy enforcement points, enabling consistent governance across heterogeneous environments.
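As a sketch of the first of those patterns, the gateway below intercepts each request, records the policy decision, and forwards only what is allowed. The injected callables are assumptions standing in for a real policy engine, data source, and audit sink; this is an outline of the pattern, not a product.

```python
from typing import Callable, Iterable

def data_gateway(record_ids: Iterable[str],
                 check_policy: Callable[[str], bool],
                 fetch_record: Callable[[str], dict],
                 audit: Callable[[str, str], None]) -> list:
    """Proxy pattern: evaluate policy per record, forward only the allows."""
    results = []
    for rid in record_ids:
        allowed = check_policy(rid)
        audit(rid, "allow" if allowed else "deny")  # log every decision, both ways
        if allowed:
            results.append(fetch_record(rid))
    return results
```

The design choice that matters is that the agent never holds credentials to the data source; it can only ask the gateway, so every access is forced through the policy and audit path.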
AWS does not provide data-layer controls in the Rex release. The data layer remains the architect's responsibility, and it must be built explicitly. Organizations that treat Rex as a complete solution risk a false sense of security.
What This Means for Security and Compliance Leaders
The right operational response to the AWS announcement has three parts. First, adopt the runtime pattern. Rex is open-source under Apache 2.0 and runs on Linux and macOS; there is no procurement obstacle. Second, do not treat runtime gating as the whole answer. Map current controls against the Five Eyes advisory's five risk categories and identify where the architecture stops at the kernel and where the data layer is still ungoverned. Third, build the audit trail at the layer that survives model lifecycle changes. The model can be retired; the runtime can be replaced. The data layer is the only place where the evidence outlasts the agent that produced it.
AWS solved part of the problem. Data security—the part that actually shows up in audits, regulatory inquiries, breach notifications, and litigation discovery—requires governance at the data layer, and AWS did not address it. The runtime layer just got easier. The data layer is still the architect's responsibility, and it is the layer that decides whether the next agentic AI audit succeeds or fails.
Source: TechRepublic News