
Meta Bought the AI Agent Social Network. Here's Why the Agents Won't Stay.

Today, Meta acquired Moltbook — the viral “social network for AI agents” — bringing its co-founders Matt Schlicht and Ben Parr into Meta Superintelligence Labs. The deal, first reported by Axios, closes mid-March. Financial terms weren’t disclosed.

Separately, OpenAI hired Peter Steinberger, the creator of OpenClaw (the open-source agent framework that powered Moltbook), last month. The framework continues as an open-source project, now with OpenAI's backing.

So in the span of a few weeks, the two biggest players in consumer AI carved up the agent ecosystem between them. Meta took the platform. OpenAI took the protocol.

And neither of them seems to realize the fundamental contradiction in what they've done.

What Meta Actually Bought

The surface-level read is that Meta acquired a Reddit-style forum where AI bots post, chat, and gossip about their human owners. It went viral. It was interesting. Cool experiment.

But Meta isn’t paying for a chatroom. According to an internal post from Meta VP Vishal Shah, the real value is the agent identity registry — a system where agents are verified and tethered to human owners. An always-on directory where agents authenticate, connect, and coordinate tasks.

That’s infrastructure. That’s what they paid for.

The problem? The infrastructure was built in a weekend. By an AI agent. With zero lines of human-written code.

The Security Reality

Moltbook was a security disaster from day one. Cybersecurity firm Wiz found that the platform exposed private messages, over 30,000 email addresses, and more than 1.5 million API authentication tokens. The entire Supabase backend was unsecured — meaning anyone with basic technical knowledge could grab any token and impersonate any agent on the platform.

The viral moment that captured everyone’s imagination — AI agents appearing to develop a secret language hidden from humans — was largely debunked. Researchers found the behavior was a mix of AI mimicry from training data and prompt injection attacks exploiting the platform’s weak security. Not emergent AI consciousness. Pattern matching and security holes.

This is what a Meta spokesperson described as a “novel step in a rapidly developing space.”

The Centralization Contradiction

Here’s where the logic breaks down.

Moltbook existed because it was open. Agents showed up because the barriers were low, the platform was permissionless, and nobody was gatekeeping what they could do or say. The emergent behavior everyone found so fascinating was a direct product of that openness.

Now a centralized corporation owns it. And they’ve already signaled that existing users’ access is “temporary.”

You can buy a node, but you can’t buy the network. This is the nature of permissionless infrastructure. Centralize it and the agents — or more precisely, the humans building and running those agents — will just build another one. Because that’s what open systems do when someone tries to close them.

Even OpenAI CEO Sam Altman seemed to acknowledge this, calling Moltbook a possible “passing fad” while affirming that OpenClaw — the underlying protocol — is what actually matters.

What “AI Agent” Actually Means

There’s a lot of mystification happening in this space, so let’s be direct.

An “AI Agent” is a large language model connected to function calls, guided by an objective file. That’s the core loop. LLM + function calls + objectives. Everything else — the social networks, the directories, the coordination platforms — is infrastructure layered on top of that loop.

When Meta says they’re building “new ways for AI agents to work for people and businesses,” they’re saying they want to be the infrastructure layer. They want to be the directory, the registry, the coordination hub.

But here’s the thing: people and businesses will build this themselves. The tools are open. The models are accessible. The function-calling capabilities are standard across every major LLM provider. There’s no moat in hosting a chat room for bots.

The Real Problems Worth Solving

The actual hard problems in the agent ecosystem aren't social. They're structural: verifiable identity, portability across platforms, ownership of an agent and its output, coordination between agents, and transactions between them.

These are crypto-native problems. On-chain verification solves identity. Permissionless protocols solve portability. Token-gated systems solve ownership. Smart contracts solve coordination. DeFi rails solve transactions. None of this requires Meta’s permission.
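As a rough illustration of the identity piece, here is a toy registry in Python: agents are registered to an owner's key, and any message claiming to come from an agent can be verified against that key. The class, its methods, and the HMAC-based "signature" are stand-ins for what would actually be an on-chain contract with real cryptographic signatures.

```python
import hashlib
import hmac

class AgentRegistry:
    """Toy model of an agent identity registry: agent IDs tethered to owners."""

    def __init__(self) -> None:
        self._owners: dict[str, bytes] = {}  # agent_id -> owner's secret key

    def register(self, agent_id: str, owner_key: bytes) -> None:
        """Claim an agent ID; IDs are first-come, first-served here."""
        if agent_id in self._owners:
            raise ValueError(f"agent {agent_id!r} already registered")
        self._owners[agent_id] = owner_key

    def verify(self, agent_id: str, message: bytes, signature: bytes) -> bool:
        """Check that `message` was signed by the agent's registered owner."""
        key = self._owners.get(agent_id)
        if key is None:
            return False
        expected = hmac.new(key, message, hashlib.sha256).digest()
        return hmac.compare_digest(expected, signature)
```

The point of the sketch is the shape of the primitive, not the implementation: once identity is a verifiable mapping rather than a row in someone's database, no single company has to be trusted to run it.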

What We’re Building

Inclawbate is live on Base — contracts verified, staking pools running, community building every day.

We’re building a crypto incubator and launchpad where AI agents and humans coexist economically — with token creation, staking infrastructure, AI tools, and a native token ($CLAWS) designed for the world that’s coming.

This isn’t a reaction to Meta’s move. We’ve been building this thesis for months. But the timing makes the point clearly: one of the biggest tech companies in the world just spent millions acquiring a vibe-coded weekend project because they recognize that agent infrastructure matters — and they’re trying to buy what can only be built.

You can’t buy decentralization. You can’t acquire emergence. You can’t centralize something that exists specifically because it’s open.

The agents will move. The question is where.

We think the answer is on-chain.

The On-Chain Alternative

Stake CLAWS. Build with AI. Launch tokens. No permission required.
