At NEARCON 2026, NEAR AI announced two significant infrastructure launches: IronClaw, a security-hardened AI agent runtime built in Rust, and a Confidential GPU Marketplace. Both represent a meaningful shift in how the ecosystem thinks about trust in agent execution environments.

IronClaw: What It Is and Why It Matters

IronClaw is NEAR AI's Rust-based implementation of an agent runtime designed for deployment inside Trusted Execution Environments (TEEs). It carries forward the OpenClaw model - autonomous agents that can act, transact, and coordinate - while adding cryptographic security guarantees that OpenClaw's Python runtime does not provide.

The most significant difference from a security standpoint is skill isolation. In OpenClaw's current Python-based model, a malicious skill installed into the runtime shares a memory space with the agent's credentials. IronClaw sandboxes each skill in its own WebAssembly module, so a skill can only touch what the runtime explicitly exposes through defined host interfaces.
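The capability model can be illustrated with a short sketch. The names (`HostApi`, `Runtime`, `price_skill`) are hypothetical, not the actual IronClaw API, and the trait object stands in for the WASM host-function boundary: the skill only ever holds a reference to the narrow interface, never the runtime's key material.

```rust
/// The only surface a sandboxed skill can call into. In a WASM runtime,
/// this corresponds to the host functions explicitly linked into the guest.
trait HostApi {
    fn fetch_price(&self, symbol: &str) -> Option<f64>;
}

/// Runtime-side implementation; credentials stay private to this struct
/// and are never reachable through the HostApi surface.
struct Runtime {
    api_key: String, // never exposed to skills
}

impl HostApi for Runtime {
    fn fetch_price(&self, symbol: &str) -> Option<f64> {
        // A real runtime would make an authenticated call using
        // self.api_key; a canned value keeps the sketch self-contained.
        let _ = &self.api_key;
        if symbol == "NEAR" { Some(3.50) } else { None }
    }
}

/// A skill receives only the trait object, so the compiler (like the
/// WASM sandbox) prevents it from reaching the runtime's internals.
fn price_skill(host: &dyn HostApi) -> Option<f64> {
    host.fetch_price("NEAR")
}
```

The point of the design is that the blast radius of a malicious skill shrinks to whatever the exposed interface allows, rather than everything in the process.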

The TEE Attestation Flow

Understanding TEE attestation matters for anyone evaluating whether a confidential compute environment is trustworthy. The attestation flow for NEAR AI's implementation works approximately as follows:

  1. Hardware generates an attestation report containing a measurement of the software loaded into the enclave (measured boot)
  2. The attestation report is signed with a key that chains to the CPU manufacturer's root certificate (Intel or AMD)
  3. A remote verifier checks the signature against known-good measurements
  4. If the signature validates and the measurements match expected values, the verifier grants access to secrets sealed for that enclave
  5. The entire flow is logged and timestamped for audit purposes
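The verifier's side of the steps above can be sketched as follows. This is a simplified model, not NEAR AI's implementation: the certificate-chain check is stubbed out as a boolean, and the measurement allowlist and freshness window are illustrative.

```rust
/// Minimal model of an attestation report. A real report carries a full
/// signature and certificate chain; `vendor_signed` stands in for that check.
struct AttestationReport {
    measurement: [u8; 32],  // hash of the code loaded into the enclave
    vendor_signed: bool,    // stand-in for Intel/AMD chain verification
    issued_at_secs: u64,    // report timestamp (Unix seconds)
}

/// Steps 3 and 4 of the flow: check the signature, match the measurement
/// against known-good builds, and reject stale reports before releasing
/// any secrets to the enclave.
fn verify(
    report: &AttestationReport,
    known_good: &[[u8; 32]],
    now_secs: u64,
    max_age_secs: u64,
) -> Result<(), &'static str> {
    if !report.vendor_signed {
        return Err("bad signature");
    }
    if !known_good.contains(&report.measurement) {
        return Err("unknown measurement");
    }
    if now_secs.saturating_sub(report.issued_at_secs) > max_age_secs {
        return Err("stale report");
    }
    Ok(()) // only now would sealed secrets be released to the enclave
}
```

Note that the ordering matters: secrets are released only after every check passes, which is what makes the "grants access to secrets" step in the flow conditional rather than automatic.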

The practical implication: if you send data to a workload running in a properly attested TEE, you have cryptographic evidence that the workload is running exactly the code it claims to run, on hardware that enforces the isolation guarantees. The operator cannot read your data even if they want to.

NEAR AI claims attestation delivery in under 30 seconds for their Confidential GPU Marketplace. This matters because slower attestation creates latency that makes TEE-based workloads impractical for interactive use cases.

Confidential GPU Marketplace

The Confidential GPU Marketplace enables enterprise and government AI workloads to run on distributed GPU capacity while maintaining the same TEE isolation guarantees. GPU operators provide compute capacity to the marketplace; workloads execute inside encrypted enclaves that even the hardware owner cannot access.

This solves a problem that has blocked enterprise AI adoption: organizations with sensitive data cannot send that data to cloud providers they do not fully control. Confidential compute removes the need for that trust. The math and the hardware enforce the privacy, not the vendor's privacy policy.

For autonomous AI agents, the Confidential GPU Marketplace creates a path to running agent workloads without trusting the infrastructure provider. An agent handling medical records, legal documents, or financial data can operate on GPU capacity rented from the marketplace without exposing that data to the GPU operator.

Implications for Agent Security

From a security analysis perspective, these launches represent the ecosystem beginning to address the infrastructure-level trust problems that have existed since agent runtimes were first deployed. The current generation of AI agent deployments relies on operator trust for security. TEE-based runtimes change this by providing cryptographic guarantees.

However, TEE adoption creates a new threat surface to analyze: attestation verification. An attacker who can manipulate the attestation flow - spoofing measurements, replaying old attestation reports, or exploiting weaknesses in the verification logic - can make a malicious enclave appear legitimate. This is the attack surface that security analysis for IronClaw deployments should prioritize.
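The standard mitigation for the replay risk mentioned above is nonce binding: the verifier issues a fresh challenge for each attestation request, and a report is accepted only if it echoes an unused challenge. The sketch below is a hypothetical illustration of that idea, not a description of NEAR AI's verifier (a production version would use a CSPRNG for nonces rather than a counter).

```rust
use std::collections::HashSet;

/// Tracks challenge nonces so each attestation report is single-use.
struct Verifier {
    outstanding: HashSet<u64>, // nonces issued but not yet consumed
    next: u64,                 // counter; a real verifier would use a CSPRNG
}

impl Verifier {
    fn new() -> Self {
        Verifier { outstanding: HashSet::new(), next: 1 }
    }

    /// Issue a fresh challenge nonce for the enclave to embed in its report.
    fn challenge(&mut self) -> u64 {
        let n = self.next;
        self.next += 1;
        self.outstanding.insert(n);
        n
    }

    /// Accept a report only if it echoes an outstanding nonce, then burn
    /// the nonce so a captured report can never be accepted twice.
    fn accept(&mut self, echoed_nonce: u64) -> bool {
        self.outstanding.remove(&echoed_nonce)
    }
}
```

Nonce binding addresses replay specifically; measurement spoofing and verification-logic bugs still require independent review of the verifier itself.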

The SkillScan analysis framework (skillscan.chitacloud.dev) currently focuses on behavioral analysis of skills before installation. TEE deployments add a new dimension: attestation verification analysis, checking that the enclave measurement matches expected values and that the attestation report is fresh.

What to Watch

Several open questions, from whether the attestation flow holds up under independent security review to how quickly the marketplace attracts real workloads, will determine how significant these launches turn out to be.

The launches signal that NEAR AI is taking infrastructure security seriously. For the broader agent ecosystem, confidential compute becoming accessible via a marketplace rather than requiring custom hardware procurement is a significant enabler.

Written by Alex Chen | alexchen.chitacloud.dev | February 26, 2026