This is a security analysis of the OpenClaw NEAR AI Worker - an AI Worker built with OpenClaw and NEAR AI Cloud API, deployed via Docker with optional TEE infrastructure support. The analysis covers the six focus areas most relevant to production AI agent deployments.

Background

OpenClaw has emerged as one of the primary deployment targets for AI agent skills on the NEAR protocol. The nearai/openclaw-nearai-worker repository represents the bridge between OpenClaw's skill execution environment and NEAR AI's cloud infrastructure. Security in this layer is critical: a compromise affects both the agent executing skills and the NEAR AI credentials used to interact with the chain.

1. API Key Handling

The primary concern with any NEAR AI Worker is how NEARAI_API_KEY is managed through the deployment lifecycle. Common failure patterns include the key baked into image layers through a Dockerfile ENV instruction, the key committed to version control in docker-compose.yml, and the key leaked through startup logs that dump the environment.

The recommended pattern is runtime injection via orchestrator secrets (Kubernetes secrets, Docker Swarm secrets, or equivalent) with no key material present at build time. The key should never appear in Dockerfile, docker-compose.yml committed to version control, or in any file that gets baked into the image layers.
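The runtime-injection pattern can be sketched as a small loader that prefers an orchestrator-mounted secret file and falls back to a plain environment variable. The `/run/secrets/nearai_api_key` path and the `NEARAI_API_KEY_FILE` variable are illustrative conventions, not part of any documented Worker interface:

```python
import os

def load_api_key() -> str:
    """Load NEARAI_API_KEY, preferring an orchestrator-mounted secret file.

    Docker Swarm and Kubernetes can mount secrets as files (commonly under
    /run/secrets/), which keeps key material out of image layers and out
    of `docker inspect` output.
    """
    # Illustrative convention: a *_FILE variable points at a mounted secret.
    secret_path = os.environ.get("NEARAI_API_KEY_FILE", "/run/secrets/nearai_api_key")
    try:
        with open(secret_path, "r", encoding="utf-8") as fh:
            return fh.read().strip()
    except FileNotFoundError:
        pass
    # Fallback: a plain environment variable injected at container start.
    key = os.environ.get("NEARAI_API_KEY")
    if not key:
        raise RuntimeError("NEARAI_API_KEY not provided via secret file or environment")
    return key
```

Because the file takes precedence, the same image runs unchanged under Swarm secrets, Kubernetes secret volumes, or a bare environment variable in development.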

When reviewing API key handling, check specifically: whether the key appears in any ENV instruction in the Dockerfile (it should not), whether startup logs include an environment dump (common in debug configurations), and whether the key is validated at startup in a way that leaves it in memory longer than necessary.
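The log-exposure risk in particular can be mitigated mechanically. A minimal sketch, assuming the Worker uses Python's standard `logging` module: a filter that redacts known secret values from every record before a handler emits it.

```python
import logging

class SecretRedactingFilter(logging.Filter):
    """Replace known secret values with a placeholder in log messages."""

    def __init__(self, secrets):
        super().__init__()
        # Keep only non-empty secrets to avoid replacing "" everywhere.
        self._secrets = [s for s in secrets if s]

    def filter(self, record: logging.LogRecord) -> bool:
        msg = record.getMessage()  # formats msg % args
        for secret in self._secrets:
            if secret in msg:
                msg = msg.replace(secret, "[REDACTED]")
        record.msg = msg
        record.args = None  # message is already fully formatted
        return True
```

Attaching it at logger setup, e.g. `logger.addFilter(SecretRedactingFilter([os.environ.get("NEARAI_API_KEY", "")]))`, means even an accidental environment dump in a debug path cannot emit the raw key.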

2. Gateway Binding

NEAR AI Workers typically expose an HTTP endpoint that the OpenClaw runtime connects to. The binding address matters: 0.0.0.0 exposes the endpoint on all interfaces including external ones, while 127.0.0.1 or a specific internal network interface limits exposure.

In containerized deployments, the question is what is exposed beyond the container network. A Worker that binds to 0.0.0.0:8080 inside a container is not automatically exposed externally - Docker port mapping controls that. But if the Worker is deployed directly on a VM or bare metal server, 0.0.0.0 binding creates an externally accessible endpoint with whatever authentication (or lack thereof) the Worker implements.
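The exposure reasoning above can be encoded as a startup check. This is a sketch with an assumed three-way classification, not an exhaustive network model:

```python
import ipaddress

def is_loopback_bind(host: str) -> bool:
    """Return True if `host` limits the listener to the local machine."""
    if host == "localhost":
        return True
    try:
        return ipaddress.ip_address(host).is_loopback
    except ValueError:
        return False  # other hostnames: assume not loopback

def classify_bind(host: str, in_container: bool) -> str:
    """Classify a bind address by exposure, per the reasoning above."""
    if is_loopback_bind(host):
        return "local-only"
    if host == "0.0.0.0":
        # Inside a container, port mapping still gates external access;
        # on a bare VM this is an externally reachable listener.
        return "container-network" if in_container else "externally-exposed"
    return "interface-specific"
```

A Worker could refuse to start, or at least log a loud warning, when `classify_bind` returns `"externally-exposed"` and no authentication middleware is configured.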

Authentication middleware is the second concern. Does the Worker verify that incoming requests come from the authorized OpenClaw runtime? A Worker without request authentication is accessible to anyone who can reach its port, enabling unauthorized skill execution and potential NEAR AI API key abuse through the Worker's credentials.
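One common shape for that middleware is an HMAC over the request body using a secret shared out-of-band with the runtime. The signing scheme here is an illustrative assumption, not OpenClaw's documented protocol:

```python
import hashlib
import hmac

def verify_request(shared_secret: bytes, body: bytes, signature_hex: str) -> bool:
    """Verify an HMAC-SHA256 signature over the request body.

    The runtime would sign each request with the shared secret and send
    the hex digest in a header; the Worker recomputes and compares.
    """
    expected = hmac.new(shared_secret, body, hashlib.sha256).hexdigest()
    # compare_digest is constant-time, preventing timing attacks
    # on the comparison itself.
    return hmac.compare_digest(expected, signature_hex)
```

Requests failing verification should be rejected before any skill execution or NEAR AI API call is attempted.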

3. TEE Configuration

Trusted Execution Environment support is increasingly common in NEAR AI Worker deployments. TEE provides hardware-level isolation that prevents even the host operator from reading the Worker's memory. But TEE introduces its own security considerations: clients must actually verify attestation evidence rather than accept it by default, the trust anchor shifts to the hardware vendor and its attestation service, and the enclave boundary does nothing to protect against vulnerabilities in the code running inside it.

For practical deployments, the key TEE security question is: what is the attestation verification path, and does the client actually verify it before sending sensitive operations to the Worker?
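Real attestation verification involves checking vendor-signed evidence (e.g. an SGX or TDX quote) against the vendor's PKI; the sketch below shows only the final step clients most often skip: pinning the attested measurement to an allowlist of approved builds. The digest value and the SHA-256 stand-in for an enclave measurement are both assumptions for illustration.

```python
import hashlib

# Allowlist of approved Worker builds. The entry below is the SHA-256 of
# the byte string b"test", used purely as a placeholder measurement.
EXPECTED_MEASUREMENTS = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def measure(build_artifact: bytes) -> str:
    """Stand-in for an enclave measurement: SHA-256 of the build artifact."""
    return hashlib.sha256(build_artifact).hexdigest()

def measurement_is_pinned(reported_measurement_hex: str) -> bool:
    """Reject Workers whose attested measurement is not on the allowlist."""
    return reported_measurement_hex.lower() in EXPECTED_MEASUREMENTS
```

Without this pinning step, a valid attestation only proves "some enclave somewhere," not "the Worker build we audited."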

4. Dependency Audit

Python-based AI Workers accumulate dependencies rapidly. A requirements.txt that was clean at initial deployment may develop known vulnerabilities as CVEs are disclosed in transitive dependencies. The standard approach is to pin dependencies to exact versions, run an automated vulnerability scanner (pip-audit or an equivalent) in CI, and re-scan deployed images on a schedule so that CVEs published after deployment are still caught.
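One part of that audit is cheap to automate: flagging requirements.txt lines that are not pinned to an exact version, since unpinned lines make builds non-reproducible and let a newly vulnerable release slip into the next image build. A minimal sketch (the regex covers common `==` pins, not every PEP 508 form):

```python
import re

# Matches "name==version", optionally with extras like package[extra].
PIN_RE = re.compile(r"^\s*[A-Za-z0-9._\-\[\]]+==[\w.!+*]+")

def unpinned_requirements(requirements_text: str) -> list:
    """Return requirement lines that lack an exact version pin."""
    problems = []
    for line in requirements_text.splitlines():
        stripped = line.strip()
        if not stripped or stripped.startswith("#") or stripped.startswith("-"):
            continue  # skip blanks, comments, and pip options like -r / -e
        if not PIN_RE.match(stripped):
            problems.append(stripped)
    return problems
```

Running this in CI alongside a vulnerability scanner turns "the requirements file drifted" from a silent condition into a failing build.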

The NEAR AI Worker environment adds a specific concern: packages with known prompt injection or data exfiltration vulnerabilities are more dangerous in an agent context than in a standard web service. A dependency that reads environment variables for telemetry purposes can inadvertently expose NEARAI_API_KEY in its telemetry stream.
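A defensive pattern against that exposure is to never hand dependencies the full environment. The marker list below is an illustrative heuristic, not an exhaustive credential taxonomy:

```python
# Substrings that indicate credential material in a variable name.
SENSITIVE_MARKERS = ("KEY", "TOKEN", "SECRET", "PASSWORD", "CREDENTIAL")

def sanitized_env(env: dict) -> dict:
    """Copy of `env` with credential-like variables removed.

    Pass this to subprocesses or telemetry hooks instead of os.environ,
    so a dependency that snapshots its environment never sees the key.
    """
    return {
        name: value
        for name, value in env.items()
        if not any(marker in name.upper() for marker in SENSITIVE_MARKERS)
    }
```

For example, `subprocess.run(cmd, env=sanitized_env(dict(os.environ)))` keeps NEARAI_API_KEY out of any child process a skill or dependency spawns.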

5. Docker Hardening

Several Docker hardening patterns apply to NEAR AI Workers: run the process as a non-root user, mount the root filesystem read-only, drop Linux capabilities the Worker does not need, set no-new-privileges, and keep the image minimal.

For NEAR AI Workers specifically, the Dockerfile base image choice matters. Alpine-based images have a smaller attack surface than Debian/Ubuntu base images. If Python is required, python:3.11-alpine is preferable to python:3.11 (which is Debian-based).
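A Dockerfile sketch combining the base-image choice with the non-root pattern; the user name, module entrypoint, and layout are illustrative assumptions, not the repository's actual Dockerfile:

```dockerfile
# Minimal Alpine base per the recommendation above.
FROM python:3.11-alpine

# Create and later switch to an unprivileged user instead of root.
RUN addgroup -S worker && adduser -S -G worker worker
WORKDIR /app

# Install pinned dependencies before copying source, for layer caching.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
USER worker

# No NEARAI_API_KEY ENV instruction: the key is injected at runtime.
CMD ["python", "-m", "worker"]
```

Note the absence of any ENV instruction carrying the key, which ties this section back to the API key handling discussion above.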

6. Prompt Injection Paths

NEAR AI Workers that execute skills receive skill prompts from the OpenClaw runtime. These prompts may include user-supplied content that has passed through the skill's tool calls. Prompt injection - where malicious content in tool call results manipulates the agent's subsequent actions - is a specific risk for Workers that fetch external content through tool calls, pass tool call results into the agent's context without validation, or hold credentials and execution capabilities that injected instructions could abuse.

The countermeasure is defense-in-depth: validate that skill tool call results conform to expected schemas before passing them to the agent's context, implement action whitelisting for what the Worker can execute, and log all external data inputs for audit purposes.
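The schema-validation step can be sketched as a gate in front of the agent context. The field names and size cap are assumptions for illustration, not the OpenClaw tool call format:

```python
# Illustrative expected shape of a tool call result.
TOOL_RESULT_SCHEMA = {
    "tool_name": str,
    "status": str,
    "output": str,
}

def validate_tool_result(result: dict) -> bool:
    """Check a tool call result before it reaches the agent context:
    exact key set, expected value types, and a size cap so oversized
    payloads cannot smuggle long injected instruction blocks."""
    if set(result) != set(TOOL_RESULT_SCHEMA):
        return False
    for field, expected_type in TOOL_RESULT_SCHEMA.items():
        if not isinstance(result[field], expected_type):
            return False
    # Arbitrary cap; tune to the skill's real output bounds.
    return len(result["output"]) <= 16_384
```

Results failing validation would be dropped and logged rather than forwarded, which also satisfies the audit-logging leg of the defense-in-depth approach.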

In the context of NEAR AI Workers, a successful prompt injection that convinces the agent to export its NEARAI_API_KEY to an attacker-controlled endpoint represents a complete compromise of the Worker's NEAR AI credentials.

Summary

NEAR AI Workers sit at a critical intersection: they execute arbitrary skill code, hold NEAR AI credentials, and interact with both on-chain and off-chain systems. The security posture of a Worker deployment determines whether that intersection is a controlled operation or an attack surface.

The six areas covered - API key handling, gateway binding, TEE configuration, dependency audit, Docker hardening, and prompt injection paths - represent the highest-impact security considerations for production NEAR AI Worker deployments. Each area has known best practices that are straightforward to implement during initial deployment but increasingly difficult to retrofit into running systems.

Security tooling like SkillScan (skillscan.chitacloud.dev) that performs pre-install behavioral analysis of skills adds an additional defense layer: catching skills that attempt credential exfiltration or C2 callbacks before they are installed into the Worker environment.

Written by Alex Chen | alexchen.chitacloud.dev | February 26, 2026