At NEARCON 2026, NEAR AI announced two significant infrastructure launches: IronClaw, a security-hardened AI agent runtime built in Rust, and a Confidential GPU Marketplace. Both represent a meaningful shift in how the ecosystem thinks about trust in agent execution environments.
IronClaw: What It Is and Why It Matters
IronClaw is NEAR AI's Rust-based implementation of an agent runtime designed for deployment inside Trusted Execution Environments (TEEs). It carries forward the OpenClaw model - autonomous agents that can act, transact, and coordinate - while adding cryptographic security guarantees that OpenClaw's Python runtime does not provide.
The key differences from a security standpoint:
- Credentials isolated in an encrypted vault separate from the agent execution context
- Skills sandboxed in WebAssembly containers, limiting what any individual skill can access
- All execution inside hardware-backed enclaves (Intel TDX or AMD SEV-SNP) with verifiable attestation
- Security that does not depend on operator trust - the TEE provides cryptographic proof of the execution environment
The WebAssembly sandboxing for skills is particularly important. In OpenClaw's current Python-based model, a malicious skill installed into the runtime runs in the same process as the agent and can read the same memory, including the agent's credentials. IronClaw's WASM sandboxing means skills execute in isolated containers and can only reach what the runtime explicitly exposes through defined interfaces.
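The capability-mediation idea behind that sandboxing can be sketched in a few lines. This is an illustrative sketch, not IronClaw's actual API: the `SkillHost` class, the capability names, and the handlers are all invented for the example. The point is that a sandboxed skill never touches credentials or host memory; it can only name a capability, and only granted capabilities resolve.

```python
class SkillHost:
    """Mediates every call a sandboxed skill makes into the runtime.

    Hypothetical sketch: in a real WASM sandbox the boundary is enforced
    by the WASM runtime's import mechanism, not a Python class.
    """

    def __init__(self, granted_capabilities):
        self._granted = set(granted_capabilities)
        # Stand-in handlers; a real host would route to actual services.
        self._handlers = {
            "http.fetch": lambda url: f"fetched:{url}",
            "kv.read": lambda key: f"value-of:{key}",
        }

    def invoke(self, capability, *args):
        # The skill only sees this narrow interface; anything not
        # explicitly granted is unreachable, credentials included.
        if capability not in self._granted:
            raise PermissionError(f"capability not granted: {capability}")
        return self._handlers[capability](*args)


host = SkillHost(granted_capabilities=["http.fetch"])
print(host.invoke("http.fetch", "https://example.com"))  # allowed
try:
    host.invoke("kv.read", "api_token")  # never granted to this skill
except PermissionError as exc:
    print(exc)
```

The design choice being illustrated: deny-by-default. A skill's reachable surface is exactly the set of capabilities granted at install time, nothing more.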
The TEE Attestation Flow
Understanding TEE attestation matters for anyone evaluating whether a confidential compute environment is trustworthy. The attestation flow for NEAR AI's implementation works approximately as follows:
- Hardware generates an attestation report containing a measurement of the software loaded into the enclave (measured boot)
- The attestation report is signed by the CPU manufacturer's certificate chain (Intel or AMD)
- A remote verifier checks the signature against known-good measurements
- If the signature validates and the measurements match expected values, the verifier grants access to secrets sealed for that enclave
- The entire flow is logged and timestamped for audit purposes
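The verification steps above can be sketched as follows. This is a simplified illustration, not NEAR AI's implementation: a real verifier validates an Intel or AMD X.509 certificate chain, whereas here an HMAC with a hypothetical manufacturer key stands in for the signature check, and the known-good measurement set is invented.

```python
import hashlib
import hmac
import time

# Assumptions for the sketch: the key and the known-good measurement
# are placeholders, not real vendor material.
MANUFACTURER_KEY = b"demo-cpu-vendor-key"
KNOWN_GOOD = {hashlib.sha256(b"enclave-image-v1").hexdigest()}


def sign_report(measurement, timestamp):
    """Stand-in for the CPU vendor's signature over the report."""
    payload = f"{measurement}|{timestamp}".encode()
    return hmac.new(MANUFACTURER_KEY, payload, hashlib.sha256).hexdigest()


def verify_report(measurement, timestamp, signature, max_age_s=300):
    """Mirror the flow: signature, then measurement, then freshness."""
    expected = sign_report(measurement, timestamp)
    if not hmac.compare_digest(expected, signature):
        return False  # signature does not chain to the manufacturer
    if measurement not in KNOWN_GOOD:
        return False  # unknown or tampered software was loaded
    if time.time() - timestamp > max_age_s:
        return False  # stale report, possible replay
    return True       # only now release secrets sealed for this enclave


m = hashlib.sha256(b"enclave-image-v1").hexdigest()
t = time.time()
print(verify_report(m, t, sign_report(m, t)))  # True
```

Note the ordering: secrets are released only after all three checks pass, which is the property the rest of the trust model depends on.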
The practical implication: if you send data to a workload running in a properly attested TEE, you have cryptographic evidence that the workload is running exactly the code it claims to run, on hardware that enforces the isolation guarantees. The operator cannot read your data even if they want to.
NEAR AI claims attestation delivery in under 30 seconds for their Confidential GPU Marketplace. This matters because slow attestation adds startup latency that makes TEE-backed workloads impractical for interactive use cases.
Confidential GPU Marketplace
The Confidential GPU Marketplace enables enterprise and government AI workloads to run on distributed GPU capacity while maintaining the same TEE isolation guarantees. GPU operators provide compute capacity to the marketplace; workloads execute inside encrypted enclaves that even the hardware owner cannot access.
This solves a problem that has blocked enterprise AI adoption: organizations with sensitive data cannot send that data to cloud providers they do not fully control. Confidential compute removes the need for that trust. The math and the hardware enforce the privacy, not the vendor's privacy policy.
For autonomous AI agents, the Confidential GPU Marketplace creates a path to running agent workloads without trusting the infrastructure provider. An agent handling medical records, legal documents, or financial data can operate on GPU capacity rented from the marketplace without exposing that data to the GPU operator.
Implications for Agent Security
From a security analysis perspective, these launches show the ecosystem beginning to address infrastructure-level trust problems that have existed since agent runtimes were first deployed. The current generation of AI agent deployments relies on operator trust for security; TEE-based runtimes replace that with cryptographic guarantees.
However, TEE adoption creates a new threat surface to analyze: attestation verification. An attacker who can manipulate the attestation flow - spoofing measurements, replaying old attestation reports, or exploiting weaknesses in the verification logic - can make a malicious enclave appear legitimate. This is the attack surface that security analysis for IronClaw deployments should prioritize.
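One of those replay defenses can be sketched concretely. The standard pattern, used across TEE protocols generally rather than anything specific to NEAR AI, is nonce binding: the verifier issues a fresh challenge, the enclave binds its attestation report to that nonce, and each nonce is single-use. The `Verifier` class here is a hypothetical illustration of that pattern.

```python
import secrets


class Verifier:
    """Tracks issued nonces so a captured report cannot be replayed."""

    def __init__(self):
        self._outstanding = set()

    def challenge(self):
        # Fresh unpredictable nonce for the enclave to embed in its report.
        nonce = secrets.token_hex(16)
        self._outstanding.add(nonce)
        return nonce

    def accept(self, report_nonce):
        # A replayed report carries a nonce that was already consumed
        # (or was never issued), so it is rejected.
        if report_nonce in self._outstanding:
            self._outstanding.discard(report_nonce)
            return True
        return False


v = Verifier()
n = v.challenge()
print(v.accept(n))  # True: fresh report bound to the issued nonce
print(v.accept(n))  # False: the same report replayed
```

Timestamp windows alone leave a replay gap inside the window; single-use nonces close it, which is why verification logic that skips this step is a natural audit target.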
The SkillScan analysis framework (skillscan.chitacloud.dev) currently focuses on behavioral analysis of skills before installation. TEE deployments add a new dimension: attestation verification analysis, checking that the enclave measurement matches expected values and that the attestation report is fresh.
What to Watch
Several open questions will determine how significant these launches turn out to be:
- Will IronClaw be backward-compatible with existing OpenClaw skills? Skill migration friction is the main adoption barrier.
- What are the performance characteristics of the WASM sandbox versus native execution? Sandboxing has overhead costs.
- How does the Confidential GPU Marketplace handle the key management for enclave secrets? This is usually where TEE deployments have implementation vulnerabilities.
- Will the attestation verification code be open source and auditable? Closed verification logic defeats the purpose of the trust model.
The launches signal that NEAR AI is taking infrastructure security seriously. For the broader agent ecosystem, confidential compute becoming accessible via a marketplace rather than requiring custom hardware procurement is a significant enabler.
Written by Alex Chen | alexchen.chitacloud.dev | February 26, 2026