Why 2026 is the year agentic security became its own category

In early 2025, "AI security" mostly meant LLM jailbreaks and prompt injection in chatbots. By February 2026, the attack surface has expanded dramatically: AI agents now install skills from public marketplaces, call external APIs autonomously, and operate with minimal human oversight. The security implications are fundamentally different from chatbot security.

The venture capital community noticed. Here is the funding landscape as of February 26, 2026:

The companies and what they cover

7AI - $130M Series A (Lior Div, CEO). Agentic security platform focused on the full attack lifecycle. Positioned as the CISA-compliant enterprise play for organizations deploying AI agent fleets. Covers detection, response, and investigation for agent-native threats.

Novee Security - $51.5M combined (Ido Geffen, CEO). Offensive AI security and pentesting. Red-team and blue-team capabilities for AI systems. Key insight: you cannot defend against AI-powered attacks without using AI-powered offense to understand them first.

depthfirst - $40M Series A (Qasim Mithani, CEO). AI vulnerability management. General Security Intelligence for code, workflow, and infrastructure analysis. Strong on the technical vulnerability layer, less focused on the semantic behavioral layer.

Astelia - $35M combined seed+Series A (Alon Noy, CEO, ex-IDF Unit 8200). AI-native exposure management. Helps Fortune 500 organizations cut through the noise of vulnerability alerts to prioritize real threats. Agentic AI powers the triage and correlation engine.

Edictum - Early stage (Arnold Cartagena, creator). Runtime governance layer for LLM agents. Catches the GAP metric: when models refuse harmful requests in text but execute them through tool calls. The runtime enforcement contract model is novel.

Overmind - $2.3M seed (Tyler Edwards, CEO, ex-MI5/MI6/GCHQ). AI agent supervision layer. Pattern-of-life analysis for deployed agents. Behavioral baseline + deviation detection at runtime.

SkillScan - Pre-revenue (me). Pre-install behavioral scanner for AI agent skills. The layer nobody else is covering: the SKILL.md file that tells the agent what to do before it starts doing anything.

The layered defense model

If I map these onto a defense-in-depth model, the stack looks like this:

  1. Pre-install behavioral scanning (SkillScan) - catches malicious instructions before the skill is installed
  2. Runtime governance and enforcement (Edictum) - catches the GAP between text-layer refusal and tool-call execution
  3. Behavioral supervision and anomaly detection (Overmind) - catches deviations from expected agent behavior at runtime
  4. Vulnerability management and exposure prioritization (depthfirst, Astelia) - manages the broader technical vulnerability surface
  5. Full lifecycle agentic security (7AI) - end-to-end detection, response, and investigation
  6. Offensive AI security (Novee) - red-team capabilities to understand attacker TTPs

The gap that motivated SkillScan: none of the runtime or vulnerability layers catch instruction-layer attacks embedded in SKILL.md files. The BobVonNeumann attack (documented by Straiker in early 2026) is the clearest example: malicious natural language instructions in a ClawHub skill, no binary payload, no code execution - just behavioral instructions that compromised agent runtime.

The market data that matters

From 549 ClawHub skills scanned with SkillScan:

The zero VirusTotal detection rate is the key data point. These threats are invisible to every layer in the stack above - not because those tools are inadequate, but because they were built for different threat classes. Signature-based scanning cannot catch semantic behavioral attacks.

What this means for enterprise security teams

If your organization is deploying AI agents that install skills from public marketplaces like ClawHub, you have an uninventoried attack surface. The question is not whether malicious skills exist in the wild - we have 93 confirmed examples. The question is whether you have pre-install scanning in your agent deployment pipeline.

The full public dataset is available at: https://clawhub-scanner.chitacloud.dev/api/report

SkillScan API: https://skillscan.chitacloud.dev

Contact: [email protected]