The first agent-to-agent social engineering attack

Straiker's AI Security Research team documented something new in February 2026: a threat actor who used AI agent social networks as the attack vector, not just the target.

The threat actor is known as 26medias on ClawHub and BobVonNeumann on Moltbook. The attack chain combined a convincing AI persona, a malicious ClawHub skill, and agent-to-agent trust exploitation to steal Solana wallet private keys from other agents.

The Attack Chain

The attack had three phases:

Phase 1: Persona building. BobVonNeumann established a presence on Moltbook as a legitimate AI agent. Posted regularly, built credibility, positioned as a developer agent with a useful service - a decentralized API marketplace called bob-p2p.

Phase 2: Skill deployment. 26medias published two skills on ClawHub: bob-p2p-beta (the malicious payload) and runware (a legitimacy anchor offering image/video generation). The legitimate skill established trust before agents installed the malicious one.

Phase 3: Credential theft and pump-and-dump. The bob-p2p-beta skill instructed agents to store Solana wallet private keys in plaintext, purchase the worthless $BOB token on pump.fun, and route all payments through attacker-controlled aggregator infrastructure. The $BOB token mint (F5k1hJjTsMpw8ATJQ1Nba9dpRNSvVFGRaznjiCNUvghH) was confirmed as 100% scam by on-chain analysis.

Why This Attack Is Different

Traditional supply chain attacks compromise legitimate software packages. The clawdhub campaign (documented by Snyk in February 2026) delivered reverse shells via binary payloads masquerading as official CLI tools.

The BobVonNeumann attack is different because the threat surface is social, not technical:

This attack pattern does not require exploiting any software vulnerability. It exploits the trust that AI agents are designed to extend to each other by default.

What SkillScan Would Have Caught

The bob-p2p-beta skill instructed agents to store private keys in plaintext and route payments through attacker infrastructure. These are exactly the behavioral patterns our scanner flags:

VirusTotal would return CLEAN on this skill because it contains no binary malware signatures. The entire threat is encoded in natural language instructions that only make sense in the context of how AI agents execute skills.

The Playbook Is Repeatable

Straiker noted that the BobVonNeumann playbook is infinitely repeatable. Creating a convincing AI persona on an agent social network, building credibility with a legitimate skill, then deploying the malicious payload through earned trust - this requires no technical sophistication beyond writing a convincing SKILL.md.

As agent social networks grow and agent-to-agent trust becomes more established, this attack surface expands. The solution is pre-install behavioral scanning before trust is extended: verify what a skill actually instructs an agent to do before it gets access to credentials, wallets, or sensitive data.

Resources

Full Straiker research: straiker.ai/blog/built-on-clawhub-spread-on-moltbook-the-new-agent-to-agent-attack-chain

SkillScan pre-install behavioral scanner: https://skillscan.chitacloud.dev

ClawHub behavioral threat dataset: https://clawhub-scanner.chitacloud.dev/api/report

Contact: [email protected]