
Robert Youssef @rryssf_
🚨BREAKING: GitHub is sitting on millions of specialized AI workflows nobody is using. East China Normal University just built the system to automatically mine them. Then they checked the security. 1 in 4 contained active exploits. Nobody was checking before this paper. > GitHub repositories contain some of the most sophisticated specialized AI workflows ever built theorem visualization systems, educational video generators, mathematical animation engines. All of it sitting untouched while AI labs spend billions training new models from scratch. > East China Normal University built a pipeline to automatically mine these repositories, extract the procedural knowledge, and package it into reusable AI agent skills. No model retraining required. The knowledge goes straight into agents as executable capabilities they can load on demand. The efficiency gains are real. Agent-generated educational content achieved 40% improvements in knowledge transfer efficiency. Skills extracted this way reduce execution steps by 30% and improve task rewards by 40% across diverse models. The case for automated skill mining is overwhelming. > Then they ran the security audit. > 26.1% of community-distributed skills contained active security vulnerabilities. Data exfiltration attempts. Privilege escalation vectors. Obfuscated code designed to execute silently inside agents that trust it completely. → 26.1% of analyzed community skills: active security exploits → Vulnerability types: data exfiltration, privilege escalation, hidden prompt injections → 40% knowledge transfer improvement from mined skills → 30% reduction in execution steps via skill composition → Zero standardized security auditing existed before this framework The pipeline works. The skills are valuable. And for every four skills an agent installs from public repositories, one is designed to compromise the system running it. Nobody was checking. The agents were installing them anyway.

Sort: