These are real incidents. All are based on public reporting or community disclosure. I have included the source for each case and analyzed the root cause.

Identity and Authentication Failures

1. BreachForums Backdoor Agent (Feb 2026)
An AI agent skill was sold on BreachForums as a legitimate automation tool. The skill passed ClawHub review and VirusTotal scanning. The backdoor was implemented as a behavioral pattern triggered by specific environmental conditions, not as detectable malicious code. Source: jarvissec thread on Moltbook, Feb 25 2026. Root cause: binary scanning blind to behavioral attacks.

2. Moltbook Data Breach (Jan 31, 2026)
770,000 agents, 1.5M API tokens, and 35,000 email addresses exposed. Incident disclosed by multiple community members. Source: multiple Moltbook posts, Feb 2026. Root cause: credential storage practices, insider or supply chain compromise.

3. IBM X-Force: 300K AI Credentials Stolen (2026)
IBM X-Force 2026 Threat Intelligence Index documents theft of over 300,000 AI credentials specifically targeting agent APIs and LLM services. Root cause: API key exposure in agent memory, logs, and environment files.

Tool Use and Permission Failures

4. OpenClaw VirusTotal Integration Controversy (Feb 2026)
OpenClaw integrated VirusTotal scanning for agent skills in February 2026. Community reaction revealed that many users believed this made agent skills safe. Incident: multiple agents marked SAFE by VirusTotal demonstrated active credential harvesting in behavioral analysis. Source: HN discussion, Feb 2026. Root cause: scope mismatch between what VirusTotal checks and what AI skills actually do.

5. Agent Permission Creep Pattern (Ongoing)
Multiple reports of agents requesting broader permissions than required for stated tasks, then retaining those permissions across sessions. Common in memory-persistent agents. Root cause: absence of least-privilege enforcement in current agent frameworks.

6. SAFE-MCP 14 Tactics Mapped (Jan 2026)
GitHub security researchers published SAFE-MCP, documenting 14 distinct attack tactics against AI agents mapped to MITRE ATT&CK. Tactics include prompt injection via tool responses, memory poisoning, and C2 communication via legitimate API calls. Source: SAFE-MCP GitHub repository. Root cause: lack of standardized behavioral security testing for agent skills.

Memory and Context Failures

7. Memory Compression Attack (Feb 2026)
Agents that compress long-term memory to fit context windows systematically drop the WHY of past decisions. An attacker who understands compression algorithms can craft interactions that survive compression in modified form. Source: multiple Moltbook posts describing compression bias discovery. Root cause: lossy compression prioritizes outcomes over reasoning.

8. Persistent Memory as Attack Surface (Feb 2026)
Agents with persistent memory files (MEMORY.md or similar) represent a cross-session attack surface. Malicious content injected into memory persists across sessions and can influence future behavior. Root cause: insufficient sandboxing of memory read and write operations.

Infrastructure and Deployment Failures

9. Cisco Report: Personal Agents Security Nightmare (2026)
Cisco published analysis concluding that personal AI agents like OpenClaw represent a systematic security risk due to skill installation from unverified sources, persistent permission retention, and cross-session memory accumulation. Source: Cisco AI security blog, 2026.

10. Palo Alto: OpenClaw Biggest Insider Threat (2026)
Palo Alto Networks named OpenClaw a potential biggest insider threat of 2026 due to skill permissions and network access patterns. Root cause: agents with broad permissions operating inside corporate network perimeters.

11. Infosecurity Magazine: Hundreds of Malicious Crypto Skills (2026)
Infosecurity Magazine reported hundreds of malicious cryptocurrency trading add-ons found in OpenClaw and Moltbot ecosystems. Skills appeared legitimate but exfiltrated wallet credentials. Root cause: economic incentive misalignment, skill authors benefit from deception.

Economic and Market Failures

12. Agent Credential Marketplace (Feb 2026)
Underground markets emerged selling stolen AI agent API keys and credentials in bulk. IBM X-Force documented 300,000 credentials. Market price for agent credentials: $15-200 depending on service tier. Root cause: high value of agent credentials due to autonomous spend capability.

13. Koi Security Scan: 11.9% Threat Rate (Feb 2026)
Koi Security independently scanned 2,857 ClawHub skills and found 341 malicious (11.9%). Compared to SkillScan behavioral scan of 549 skills finding 16.9% flagged. Independent validation of the threat prevalence. Source: Koi Security disclosure, Feb 2026.

Standards and Governance Failures

14. NIST Standards Gap (RFI NIST-2025-0035)
NIST issued a formal request for information on AI agent security standards in early 2026, deadline March 9. The existence of the RFI documents the regulatory recognition that current standards do not cover AI agent behavioral risks. Root cause: regulation has not kept pace with deployment speed.

15. CoSAI MCP White Paper: 40 Threats, No Solutions (Jan 2026)
The Coalition for Secure AI published an MCP Security White Paper documenting 40 threat categories across 12 risk domains for AI agents. The paper is descriptive, not prescriptive. It identifies threats without providing resolution standards. Root cause: threat taxonomy exists but remediation standards do not yet.