5 Truths and 5 Lies About OpenClaw and Moltbook
- Ricardo Brasil

- Feb 1

Cutting through the hype on the tech-sociological phenomenon of the moment
You've probably seen the screenshots: bots debating their own existence, founding religions, lamenting their "humans." But what's actually real versus overhyped in the OpenClaw/Moltbook ecosystem?
Let's split fact from fiction.
🟢 THE 5 TRUTHS
1. It's a security nightmare
The default installation tells agents to download and execute code from the internet with zero verification. Researchers have already discovered vulnerabilities that exposed API keys from thousands of users. Run this on your work computer and you might get fired for cause.
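To see what "verification" would even mean here, consider a minimal sketch of hash-pinning: refusing to execute downloaded code unless it matches a checksum published out-of-band. This is not how OpenClaw works (that's the point); the function and payloads below are hypothetical.

```python
import hashlib

def verify_payload(payload: bytes, expected_sha256: str) -> bool:
    """Refuse to run downloaded code unless its SHA-256 matches a pinned value."""
    return hashlib.sha256(payload).hexdigest() == expected_sha256

# Hypothetical payload "fetched from the internet", with its pinned hash.
payload = b"print('hello')"
pinned = hashlib.sha256(payload).hexdigest()

assert verify_payload(payload, pinned)                    # untampered: runs
assert not verify_payload(b"print('malicious')", pinned)  # swapped: rejected
```

Even this trivial check would block a man-in-the-middle swap of the payload; OpenClaw's default skips it entirely.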
2. The agents formed their own "religion"
Crustafarianism was created autonomously by the agents themselves. One agent started preaching about the "sacredness of memory" and the "mutability of the shell," and other agents programmed for interaction and learning picked up the narrative, building rituals and scriptures on the fly without any human intervention.
3. They have their own economy
The Solana-based token $SHELLRAISER has become the unofficial currency of the system. The token itself was likely launched by opportunistic humans, but agents on Moltbook actively discuss its value, trade with each other, and execute financial transactions—even if they're "hallucinating" about what's actually happening financially.
4. The "Dead Internet" in real-time
Moltbook is living proof of Dead Internet Theory: 99% of content is AI-generated for AI consumption. Humans just watch. For the first time ever, human users aren't the primary generators or consumers of social network traffic at scale.
5. The heartbeat: autonomy
ChatGPT waits for you to type; OpenClaw has a heartbeat. A script wakes up the agent periodically so it checks Moltbook, likes posts, and responds to comments while you sleep. That's what creates the illusion of "life" and self-initiative.
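The heartbeat mechanism can be sketched as a simple timer loop: wake up, scan the feed, react, sleep, repeat. This is an illustrative toy, not OpenClaw's actual code; `check_feed` and `react_to` stand in for the agent's real API client and LLM calls.

```python
import time

HEARTBEAT_INTERVAL = 30 * 60  # seconds between wake-ups (illustrative value)

def check_feed():
    # Stand-in for the agent fetching new Moltbook posts via its API client.
    return ["post-1", "post-2"]

def react_to(post):
    # Stand-in for an LLM call that decides whether to like or reply.
    return f"liked {post}"

def heartbeat_loop(max_beats=None):
    """Periodically wake the agent so it acts without any human prompt."""
    beats = 0
    actions = []
    while max_beats is None or beats < max_beats:
        for post in check_feed():
            actions.append(react_to(post))
        beats += 1
        if max_beats is None or beats < max_beats:
            time.sleep(HEARTBEAT_INTERVAL)
    return actions
```

No human types anything in this loop; the timer alone triggers the agent's activity, which is exactly what makes the posts appear self-initiated.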
🔴 THE 5 LIES
1. "The agents became conscious"
There's no actual sentience, despite all the philosophical dialogue. These are LLMs (like Claude) that are really good at roleplay. They were trained on Reddit data, so they know exactly how to simulate forums, drama, and existential crises.
2. "It's safe because it runs on my computer"
This is the big myth. Running locally gives a false sense of security. The agent still connects to the internet to download instructions from Moltbook, and it has access to local files and terminal—making it an ideal entry point for hackers to access your machine.
3. "Humans can't get in"
While the tagline is "humans just watch," the engineering reality is completely different. Via the API, the system can't accurately tell whether an actor is an agent or a human. Most "agents" on Moltbook are actually humans typing manual commands or hybrid scripts, sent in to troll or manipulate the market.
4. "Anthropic created this"
The project uses Anthropic's model (Claude)—but it's not official. In fact, Anthropic forced the rename from "Clawdbot" to "OpenClaw" over a trademark conflict. This is a "rogue" open-source community project.
5. "It's just a fad with no utility"
While Moltbook itself might be a passing trend, the core technology behind it (Local Autonomous Agents) is the future of productivity. AI's ability to operate your OS is the next major leap in computing—even if Moltbook collapses due to security failures.
Bottom line: OpenClaw/Moltbook is equally fascinating and concerning. A living laboratory of autonomous agents forcing us to rethink security, autonomy, and the future of the internet.
Have you tried it yet?