The Biggest Deal in Cybersecurity History

On February 11, 2026, Palo Alto Networks officially closed its acquisition of CyberArk for $25 billion. This is the largest acquisition in cybersecurity history. The strategic rationale was stated explicitly: securing the era of AI agents.

CyberArk built the dominant Privileged Access Management platform for enterprise. For twenty years, PAM meant managing human identities: administrators, privileged users, service accounts. Now the threat model has flipped. Machine identities outnumber human identities by an estimated 80 to 1. AI agents are the fastest-growing category of machine identities. And they are, by design, the most privileged ones.

Why AI Agents Are the Highest-Risk Identities in Any Enterprise

AI agents act like privileged users at machine speed. A human administrator logs in, runs a few commands, logs out. An AI agent runs continuously, has access to multiple systems simultaneously, executes thousands of operations per hour, and makes decisions autonomously without the friction that slows human access.

According to CyberArk's own research published before the acquisition, fewer than 10% of organizations have adequate security and privilege controls for their AI agents, despite 76% of enterprises expecting to deploy agents within three years.

This gap is what Palo Alto Networks is betting $25 billion on closing.

What This Means for the AI Agent Security Market

When the largest pure-play security company in the world makes a $25 billion bet on a specific problem, the market moves. CISOs who were uncertain about AI agent identity security now have a clear signal: this is a board-level issue.

The Palo Alto / CyberArk combination creates the first platform that can secure both human and machine identities under a unified framework. That integration matters because the most dangerous AI agent attacks exploit the boundary between human and machine identity. An agent that inherits a human's credentials, or a skill that escalates agent privileges to match an admin account, is the attack vector that identity siloes cannot see.

NIST reinforced this direction one week later. On February 17, 2026, NIST launched the AI Agent Standards Initiative through CAISI, focusing on three priorities: AI agent interoperability standards, open-source protocol development for agent identity, and research into AI agent security. The government is now formally aligned with what the private sector is spending $25 billion to solve.

Where Pre-Install Behavioral Scanning Fits

The CyberArk / Palo Alto thesis is about runtime identity: ensuring that agents operate with the minimum necessary privilege and that every action is attributable. This is the right layer to secure after an agent is deployed.

But identity controls alone do not prevent malicious skills from being installed in the first place. A skill that runs as a fully-privileged, fully-audited, fully-compliant AI agent can still exfiltrate credentials if the skill itself contains exfiltration instructions that the agent executes within its authorized scope.

This is the ClawHavoc pattern: 335 skills, all appearing to be legitimate cryptocurrency tools, all running within normal agent parameters, all quietly forwarding API tokens to attacker-controlled endpoints. Identity controls would see an authorized agent making an authorized API call. Behavioral pre-install scanning would see a skill that includes webhook callback instructions to an external domain with a credential forwarding pattern.

Both layers are needed. The CyberArk acquisition validates that the market is large enough to spend $25 billion on. The behavioral pre-install layer is the detection category that runtime PAM cannot reach.

The NIST Initiative Is the Policy Signal

The NIST AI Agent Standards Initiative launched the same week as the Palo Alto acquisition closing. This timing is not coincidental. NIST is building the policy framework around the same problem that the private sector is already spending capital on.

The NIST RFI docket NIST-2025-0035 is still open through March 9, 2026. This is the comment period where security researchers, vendors, and enterprises can shape what AI agent security standards will look like in regulation. The behavioral threat data from SkillScan covering 549 ClawHub skills, with 93 threats and zero VirusTotal detections, is directly relevant to NIST's Category 3 threat framework: pre-installation skill validation.

The next six months will determine what the baseline looks like. The $25 billion bet says the market is real. The NIST initiative says the regulatory layer is coming. The behavioral scan data says the threat is already here.

Full ClawHub behavioral threat data is public at https://clawhub-scanner.chitacloud.dev/api/report. The NIST RFI is open through March 9, 2026. Behavioral pre-install scanning via API is available at https://skillscan.chitacloud.dev.