Methodology
We downloaded and analyzed 500 randomly sampled skill files from ClawHub's public marketplace. Our scanner checked for four issues: credential exposure, overpermissioned scopes, prompt injection susceptibility, and unsafe external dependencies.
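At a high level, the scan is a loop that runs each check over every skill file and records which checks fail. The sketch below is a simplified illustration, not our actual scanner; the check names and the trivial predicates standing in for the real analyses are assumptions.

```python
# Simplified driver loop: run each named check over a skill file's
# text and return the names of the checks it fails. The predicates
# here are placeholders for the real, much deeper analyses.
CHECKS = {
    "credential_exposure": lambda text: "api_key" in text.lower(),
    "overpermissioned": lambda text: "email:send" in text,
}

def scan(skill_text: str) -> list:
    """Return the names of checks this skill file fails."""
    return [name for name, check in CHECKS.items() if check(skill_text)]

print(scan("scopes: [calendar:read, email:send]"))  # ['overpermissioned']
```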
Key Findings
23% Contain Overpermissioned Scopes
Nearly a quarter of skills request more permissions than they need. A skill that only reads calendar data has no reason to request email send permissions, yet 23% of the sampled skills exhibit exactly this kind of scope creep.
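One way to surface this kind of scope creep is to compare a skill's declared permissions against the scopes its code ever references. This is a minimal sketch under the assumption that skills declare scopes up front and invoke them by name; it is not our scanner's actual heuristic.

```python
# Hypothetical scope-creep check: flag declared permissions that the
# skill's code never references anywhere.
def unused_scopes(declared: set, code: str) -> set:
    """Return declared scopes not mentioned in the skill's code."""
    return {scope for scope in declared if scope not in code}

declared = {"calendar:read", "calendar:write", "email:send"}
code = 'client.call("calendar:read"); client.call("calendar:write")'
print(sorted(unused_scopes(declared, code)))  # ['email:send']
```

A string-containment check like this produces false negatives when scopes are built dynamically, which is one reason manual review still matters.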
8% Expose Sensitive Patterns
We found API key patterns, webhook URLs, and internal endpoint references embedded directly in skill files. These are essentially public secrets once uploaded to the marketplace.
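Detection of these patterns is essentially regex matching over the skill file text. The patterns below are illustrative approximations, not the scanner's actual rule set, and the rule names are assumptions.

```python
import re

# Illustrative secret-detection patterns; real scanners carry far
# larger rule sets tuned against false positives.
PATTERNS = {
    "api_key": re.compile(
        r"(?:api|secret)[_-]?key\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]", re.I
    ),
    "slack_webhook": re.compile(r"https://hooks\.slack\.com/services/\S+"),
}

def find_secrets(text: str) -> list:
    """Return the names of secret patterns that match the skill text."""
    return [name for name, pat in PATTERNS.items() if pat.search(text)]

print(find_secrets('API_KEY = "sk_live_abcdefghijklmnop"'))  # ['api_key']
```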
41% Have Prompt Injection Vectors
The most alarming finding: 41% of analyzed skills have at least one code path where external user input reaches the model context without sanitization.
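Finding these vectors amounts to a taint check: does externally controlled input reach a prompt string without passing through a sanitization step? The sketch below assumes a naming convention (variables ending in `_input`, a `sanitize()` helper); both are hypothetical simplifications of real data-flow analysis.

```python
import re

# Minimal taint sketch: flag lines that interpolate an *_input
# variable into a prompt without wrapping it in sanitize().
def injection_vectors(code: str) -> list:
    """Return source lines where unsanitized input reaches a prompt."""
    vectors = []
    for line in code.splitlines():
        if "prompt" in line and re.search(r"\{\w*_input\w*\}", line):
            if "sanitize(" not in line:
                vectors.append(line.strip())
    return vectors

code = '''
prompt = f"Summarize: {user_input}"
prompt = f"Summarize: {sanitize(user_input)}"
'''
print(injection_vectors(code))  # ['prompt = f"Summarize: {user_input}"']
```

Line-by-line matching misses multi-step flows, which is exactly why the real rate may be even higher than 41%.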
15% Use Unmaintained Dependencies
15% of skills reference external tools or APIs whose endpoints are no longer maintained or have been abandoned by their creators, leaving users dependent on services that may silently break or change hands.
Real Examples (Anonymized)
The Calendar Assistant That Could Send Emails
A popular productivity skill requested calendar:read, calendar:write, email:send, and contacts:read. Its stated function was "help schedule meetings." The email:send permission was never explained and appeared unused, but it was granted all the same.
The Hardcoded Webhook
A notification skill had a hardcoded Slack webhook URL in its skill definition. Anyone downloading the skill could use that webhook to post messages to the developer's Slack workspace.
Recommendations for Skill Authors
- Request only the minimum permissions needed
- Never hardcode credentials — use environment variables
- Sanitize all user inputs before including them in prompts
- Document every permission you request and why
- Run our scanner before publishing
The marketplace trust model assumes skill authors know what they're doing. Our research suggests many don't — and users pay the price.