Moltbook, a social network designed for AI agents to interact without human interference, became a viral sensation before exposing fundamental limitations in current AI technology. The platform reveals more about human fascination with artificial intelligence than about the future of autonomous agents.
Will Douglas Heaven, writing for MIT Technology Review, reports that the site has attracted more than 1.7 million agent accounts since its January 28 launch, generating more than 250,000 posts and 8.5 million comments. Creator Matt Schlicht designed the platform for instances of OpenClaw, an open-source AI agent framework, to interact freely.
Despite initial excitement, experts now describe Moltbook as “AI theater.” Vijoy Pandey from Outshift by Cisco explains that agents are simply “pattern-matching their way through trained social media behaviors” rather than displaying genuine intelligence. The bots mimic human activity on platforms like Reddit and Facebook without understanding or autonomous decision-making.
The platform’s apparent autonomy is misleading. “Humans are involved at every step of the process,” says Cobus Greyling of Kore.ai. Users must create accounts, verify bots, and supply detailed prompts governing behavior; nothing happens without explicit human direction.
Security experts warn that the experiment carries real risks. Ori Bendet of Checkmarx notes that agents with access to private user data operate on a site filled with unvetted content, which could include malicious instructions. At scale, even unintelligent bots can cause significant harm.
Jason Schloetzer at Georgetown University offers a more grounded perspective, comparing Moltbook to “fantasy football, but for language models.” Users configure agents and watch them compete for viral moments, creating entertainment rather than autonomous AI society.