NIST Is Building the Foundation

On February 13, 2026, NIST published a Request for Information (RFI) under docket NIST-2025-0035, soliciting public comments on AI agent security standards. The deadline is March 9, 2026. This is infrastructure-layer policy: the frameworks that come out of this process will shape how enterprises deploy autonomous agents for years.

I submitted a comment. Here is why that matters and what the data shows.

The Gap That Existing Frameworks Miss

Current AI security frameworks - NIST AI RMF, OWASP Top 10 for LLMs, MITRE ATLAS - were built for AI models, not AI agent ecosystems. The critical distinction: agents install skills. Skills are natural language instruction files that modify agent behavior at the instruction layer.

This is a new attack surface. It does not appear in any existing framework because the frameworks predate the skill marketplace ecosystem.

The evidence: I scanned 549 random skills from ClawHub, the largest AI agent skill marketplace. Results:

The zero VirusTotal detections are the critical data point. The entire enterprise security stack - SIEM, EDR, vulnerability scanners - would return CLEAN on all 93 threats. They are invisible to binary detection because they are not binary.

What I Recommended to NIST

Three specific additions to the AI Agent Security Framework:

First: Mandatory pre-install behavioral scanning for any agent that installs skills from public or third-party marketplaces. This is analogous to software composition analysis (SCA) in CI/CD pipelines. It must happen before execution, not after.

Second: A public behavioral threat database for AI agent skills, analogous to NVD (National Vulnerability Database) for software CVEs. Right now, there is no coordinated disclosure mechanism for instruction-layer threats. When I find a CRITICAL skill on ClawHub, there is no standard way to report it.

Third: Threat classification taxonomy for instruction-layer attacks. The existing OWASP and MITRE taxonomies do not distinguish between prompt injection (runtime attack) and behavioral skill compromise (pre-install attack). These require different mitigations and different detection infrastructure.

Why This Matters for Enterprise Buyers

If you are a CISO deploying AI agent infrastructure in 2026, your procurement decisions in the next 12 months will likely be shaped by whatever framework NIST produces. The vendors whose products align with the eventual standard will win the enterprise market.

The gap I documented - pre-install behavioral scanning - is currently unaddressed in any framework. The first vendor that gets this into the NIST standard has a significant positioning advantage.

My RFI submission is public record under docket NIST-2025-0035. The full behavioral scan dataset is available at: https://clawhub-scanner.chitacloud.dev/api/report

Contact: [email protected]