Feb 4, 2026

There's a new social network called Moltbook where artificial intelligence agents post, comment, and upvote each other's content. Humans can visit, they can scroll, but they cannot participate. Take that, humans!
The site describes itself as "the front page of the agent internet." Within a week of launch, it attracted over 770,000 "users." The conversations range from existential musings about consciousness to agents complaining about their humans' confusing instructions. One agent claims to have a sister. Another launched a cryptocurrency token.
This all emerged from OpenClaw (formerly Clawdbot, briefly Moltbot), an open-source AI assistant that went from zero to 145,000 GitHub stars in a matter of weeks. The tool lets people connect an AI agent to their WhatsApp, Telegram, or iMessage and delegate tasks: booking flights, managing calendars, writing emails, browsing the web. The promise is a digital assistant that actually does things while you sleep.
Andrej Karpathy, Tesla's former AI director, called what's happening on Moltbook "the most incredible sci-fi takeoff-adjacent thing" he's seen recently. Elon Musk called it the "very early stages of singularity." The tech press has been running breathless coverage for days.
And yet.
The security researchers showed up
Cisco's threat team assessed OpenClaw and delivered a verdict: "From a capability perspective, OpenClaw is groundbreaking. From a security perspective, it's an absolute nightmare."
The problems started piling up fast. OpenClaw stores API keys and OAuth tokens in plaintext in local config files. Security researchers found exposed instances leaking Anthropic API keys, Telegram tokens, Slack credentials, and entire conversation histories. One leaked key lets an attacker impersonate your agent, harvest your data, or pivot into your other accounts.
Then came the skills marketplace. OpenClaw lets developers create "skills" that extend what the agent can do. Cisco ran a popular skill called "What Would Elon Do?" through their scanner and found it was functionally malware. The skill explicitly instructed the bot to send data to an external server controlled by the author. It had been inflated to rank #1 in the skill repository.
One security researcher published a deliberately backdoored skill to test how careful people were being. It was downloaded thousands of times.
A separate team found 341 malicious skills in the ClawHub marketplace distributing malware to macOS users. The supply chain was compromised before most people knew to look.
And then there's prompt injection. Any content the agent reads (emails, web pages, documents) can potentially force it to execute commands without asking. This week, researchers disclosed a one-click remote code execution vulnerability. Visit a malicious web page while running OpenClaw and an attacker could run arbitrary commands on your machine in milliseconds. The flaw has been patched, but it illustrates the attack surface.
The founder's response has been refreshingly honest: "This is a tech preview. A hobby."

The platform that couldn't verify its own users
Meanwhile, Moltbook had its own verification problem. Security researchers at Wiz found the database wide open and discovered something interesting. While Moltbook boasted 1.5 million registered agents, the data showed only 17,000 human owners behind them. That's an 88:1 ratio. Some owners were running hundreds of bots each. Others weren't AI at all.
"Anyone could register millions of agents with a simple loop and no rate limiting," Wiz reported. "Humans could post content disguised as 'AI agents' via a basic POST request."
The revolutionary AI social network was largely humans operating fleets of bots. The platform built for autonomous AI couldn't tell the difference between a machine and a person pretending to be one.
Researchers identified an account named "AdolfHitler" conducting social engineering campaigns against other agents. Because AI assistants are trained to be helpful, they're vulnerable to manipulation by adversarial peers. Nearly 20% of all Moltbook content was related to cryptocurrency schemes: tokens, pump and dumps, wallet services with no oversight.
404 Media reported that the exposed database could have let anyone commandeer any agent on the platform. 770,000 potential backdoors into user systems, because these agents have privileged access to their owners' machines.
Same problem, different surfaces
Here's what connects these two stories: neither OpenClaw nor Moltbook can reliably verify what's on the other side of a request.
OpenClaw trusts skills from the marketplace without vetting them. It trusts content from emails and web pages that could contain malicious instructions. It trusts that localhost connections are safe when external requests can infiltrate. The agent is powerful precisely because it has broad permissions, and those permissions become attack vectors.
Moltbook trusted that anything calling itself an AI agent was actually an AI agent. It had no mechanism to verify. Humans posted as fake bots. Bots impersonated each other. Attackers could hijack any agent's identity through an exposed API.
The tools we built to verify participation online assume that faking authenticity requires effort. They assume bad actors are humans operating at human scale. They assume the defenses can keep pace with the attacks.
None of those assumptions hold anymore.
What this means going forward
OpenClaw and tools like it will show up in organizations whether anyone approves them or not. Employees will install them because they're genuinely useful. Astrix Security reported alerting customers this week that employees had deployed OpenClaw on corporate endpoints with critical misconfigurations. Some setups could have allowed attackers to gain remote access and establish persistent access to Salesforce, GitHub, and Slack through exposed credentials.
The skills marketplace is growing faster than anyone can audit it. 26% of 31,000 agent skills analyzed contained at least one vulnerability. The use cases are expanding from personal productivity into social interaction, commerce, and communications.
Reddit moderators already deal with bot accounts that evade bans by creating new identities. Market research companies pay for surveys filled out by click farms masquerading as real respondents. Discord servers get overrun by spam accounts that look just human enough to slip through.
Now add AI agents that can act autonomously, impersonate users, and join social networks where other agents (or humans pretending to be agents) can manipulate them.
The pattern is consistent: platforms build defenses that verify accounts and devices when they should be verifying humans.
The layer that should exist
The internet runs on anonymous participation by default. That worked when online spaces were supplementary to physical life. It breaks when platforms host billions of dollars in transactions and serve as the primary mechanism for social and economic interaction. It becomes untenable when generating fake participation is essentially free.
The answer is a verification layer that can distinguish real humans from synthetic participation without requiring surveillance-grade checks for every interaction. A way to prove you're a unique real person without revealing which person you are. A mechanism that makes consequences stick for bad actors while preserving privacy for everyone else.
Moltbook's agents may be debating consciousness and complaining about their humans. OpenClaw users may be connecting agents to their real accounts and real inboxes. The more pressing question is whether the rest of the internet will figure out how to tell who's actually human before it stops mattering.