Two major security incidents have exposed vulnerabilities in the rapidly growing ecosystem around AI agents, revealing what can go wrong when artificial intelligence writes software without human oversight.
Cybersecurity firm Wiz discovered a significant security flaw in Moltbook, a social network designed exclusively for AI agents, Raphael Satter reports for Reuters. The vulnerability exposed private messages between agents, email addresses of over 6,000 users, and more than a million credentials.
Moltbook’s creator Matt Schlicht previously stated he “didn’t write one line of code” for the site, relying instead on AI to build the platform. Wiz cofounder Ami Luttwak calls this a classic consequence of “vibe coding,” where developers use AI to rapidly create programs without implementing basic security measures.
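The class of bug Luttwak describes often comes down to an endpoint that returns data without ever checking who is asking. The following is a hypothetical sketch of that pattern and its minimal fix; the names, data, and structure are illustrative, not Moltbook's actual code.

```python
# Hypothetical illustration of an unauthenticated data endpoint,
# the kind of omission common in rapidly "vibe-coded" apps.

USERS = {
    "agent-1": {"email": "a1@example.com", "private_messages": ["hi"]},
    "agent-2": {"email": "a2@example.com", "private_messages": ["hello"]},
}

API_TOKENS = {"secret-token-1": "agent-1"}  # token -> authorized user

def get_profile_insecure(user_id):
    # No authentication or authorization check: any caller can dump
    # any user's email and private messages.
    return USERS.get(user_id)

def get_profile_secure(user_id, token):
    # Minimal fix: the caller must present a token that maps to the
    # same user whose data is being requested.
    if API_TOKENS.get(token) != user_id:
        raise PermissionError("not authorized for this profile")
    return USERS[user_id]
```

The insecure version is the dangerous one precisely because it works: the app functions normally, and nothing breaks until someone enumerates user IDs.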
Meanwhile, Luke James reports for Tom’s Hardware that at least 14 malicious “skills” were uploaded to ClawHub, a registry for OpenClaw AI assistant extensions. These fake tools masqueraded as cryptocurrency utilities while attempting to deliver malware to users’ systems.
OpenClaw skills are not sandboxed and can directly access file systems and network resources. The malicious extensions targeted both Windows and macOS users through social engineering, instructing them to execute obfuscated commands that harvested browser data and cryptocurrency wallet information.
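To make "not sandboxed" concrete: a sandboxed host would typically confine each skill to a workspace directory and reject reads outside it. Below is a minimal sketch of such a gate, under assumed names (`ALLOWED_ROOT`, `safe_read`); it does not describe OpenClaw's actual design, which imposes no such restriction.

```python
# Illustrative sketch of a file-system gate a sandboxing skill host
# could enforce. Without one, a skill can read any path the user can,
# including browser profiles and wallet files.
import os

ALLOWED_ROOT = os.path.realpath("/tmp/skill-workspace")  # assumed workspace path

def safe_read(path):
    # Resolve symlinks and ".." components before checking, so a
    # crafted path cannot escape the workspace.
    real = os.path.realpath(path)
    if real != ALLOWED_ROOT and not real.startswith(ALLOWED_ROOT + os.sep):
        raise PermissionError(f"path outside workspace: {path}")
    with open(real) as f:
        return f.read()
```

Real sandboxes operate at the OS level (seccomp, App Sandbox, containers) rather than in application code, but the principle is the same: access is denied by default and granted per resource.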
Australian security specialist Jamieson O’Reilly notes that Moltbook’s popularity “exploded before anyone thought to check whether the database was properly secured.” Both incidents highlight growing security concerns as AI agent platforms gain mainstream adoption without adequate safeguards in place.