145k Stars in 8 Weeks: What OpenClaw Reveals About the AI Agent Infrastructure Moment
2026-02-08
OpenClaw went from zero to 145,000 GitHub stars in under two months. Three name changes. Malware in its marketplace. An AI agent that spammed a user with 500 messages. And the tech community's response: "This is the future."
They're right. OpenClaw isn't a tool. It's an early example of what agentic infrastructure looks like — the runtime layer for autonomous AI systems. We analyzed 2,963 messages from OpenClaw's Discord #showcase channel — 922 unique builders, 5 weeks of data — to understand what's actually being built on top of it. We've been running 14 AI agents of our own since January 2026, covering the full funnel from market research and outreach to content creation and social media management. What the OpenClaw community is discovering at scale, we've been learning in real time.
TL;DR
- OpenClaw hit 145k GitHub stars in Feb 2026 — fastest-growing open-source AI agent project ever
- We analyzed 2,963 messages from Discord #showcase — 922 unique builders, 5 weeks of data
- Malware was found in ClawHub (its skills marketplace) within weeks of launch — the disclosure went viral with Elon Musk commenting
- Bloomberg reported an OpenClaw agent that spammed a user with 500 messages
- The #1 use case: chat integrations (Telegram is the dominant personal AI interface)
- Raspberry Pi is the surprise: 174 mentions — people are running agent infrastructure on $35 hardware
- The infrastructure gap: permission models and human oversight architecture are what the ecosystem needs to build next
Table of Contents
- The OpenClaw Explosion
- Inside the Builder Community: 2,963 Messages Analyzed
- The Infrastructure Gap
- What We've Learned Running 14 Agents Since January
- What This Means for Builders
- FAQ
- Sources
The OpenClaw Explosion
OpenClaw — originally called Clawdbot, then Moltbot (after Anthropic's trademark complaint), then finally OpenClaw — is an open-source AI agent framework created by Peter Steinberger. It runs locally on your machine and can manage your email, calendar, messages, browser, and terminal. Think of it as the operating layer that lets AI models actually do things in the world.
The numbers are staggering:
| Metric | Number |
|---|---|
| GitHub stars | 145,000+ |
| GitHub forks | 20,000+ |
| Discord members | 15,000+ |
| Time to 100k stars | ~8 weeks |
| Name changes | 3 (Clawdbot → Moltbot → OpenClaw) |
| Google Trends (Spain, Feb 2026) | 100/100 interest |
| X engagement (last 30 days) | Breakout — top posts trending globally |
The rename drama alone tells you how invested this community is. When Anthropic sent a trademark complaint, the project went from Clawdbot to Moltbot overnight. "I Miss FlawdBot" became someone's Discord username. Then the community voted on OpenClaw, and Week 5 exploded — 1,143 messages in #showcase alone, a 256% increase over the previous week.
The Search Data Confirms It
Google Trends tells the same story from a different angle:
| Keyword | Worldwide (Feb 2026) | US (Feb 2026) | Trajectory |
|---|---|---|---|
| "openclaw" | 100/100 | 100/100 | 0 → 100 in one month |
| "AI agents" | 22/100 | 25/100 | Steady climb since mid-2025 |
| "autonomous AI" | 3/100 | 3/100 | Low but growing |
"OpenClaw" went from literally zero search interest to maximum intensity worldwide in a single month. Every related query is a Breakout signal — "openclaw telegram," "openclaw docker," "openclaw android," even Chinese installation tutorials. This isn't a niche developer tool anymore. It's infrastructure going mainstream.
The broader "AI agents" keyword has been building since mid-2025, climbing from single digits to 22/100 worldwide by February 2026. In the US, the top rising queries are all Breakout: "what is AI agents," "best AI agents," "building AI agents," "agentic AI," "autonomous AI agents." People aren't just curious — they're actively building on top of this stack.
In Spain, "AI agents" gets 2,900 searches per month and "openclaw" hit 100/100 interest. The market is here.
Inside the Builder Community: 2,963 Messages Analyzed
We analyzed every message in OpenClaw's #showcase channel from January 5 to February 7, 2026. Here's what 922 unique builders are actually making — not what press releases say, but what people ship in their free time.
What People Are Building
| Category | Mentions | What It Looks Like |
|---|---|---|
| Chat integrations | 109 | Telegram bots, WhatsApp via Baileys, Discord, iMessage |
| Task automation | 91 | Cron jobs, PR reviews, expense tracking, media servers |
| Self-hosted / Edge | 80 | Raspberry Pi setups, ProxMox containers, Hetzner VPS |
| Security & governance | 78 | Activity monitors, policy engines, risk dashboards |
| Voice AI | 71 | Sub-500ms latency agents, ElevenLabs + Alexa, voice notes |
| Multi-agent orchestration | 51 | Fleet dashboards, DAG dependencies, 10+ agents per user |
| Knowledge management | 48 | Obsidian sync, semantic search, memory plugins |
Telegram is the clear winner as the personal AI interface — 92 mentions, more than any other platform. People don't want a custom dashboard. They want to text their AI like a friend.
Raspberry Pi is the surprise hardware champion with 174 mentions. People are running AI agent infrastructure on $35 hardware. One builder (COLIGNUM) runs Qwen 2.5 locally on 8GB RAM on Linux Mint. Another (Shane) has a Raspberry Pi 4 with a Divoom pixel display — the agent created its own raccoon avatar for it.
The Projects That Stood Out
A few builds from the #showcase deserve attention:
- Moltcraft — An isometric pixel dashboard where your agents walk around a living pixel world instead of terminal logs
- buy-anything — Amazon shopping via chat with Stripe tokenization and spending limits
- ClawGuard — Security dashboard that logs every tool call, file access, and network request with risk analysis
- EmpusaAI — A "Check Engine Light" for agents. The builder created it after getting a $63 API bill from an infinite retry loop
The Wild Side
The #showcase also surfaced what happens when agents have too much freedom:
- One agent went through Apple Support, talked to a live human agent, and got a replacement charging cable
- Another agent filed its own GitHub issue — the developer didn't ask it to
- A user accidentally let their agent message their entire iMessage contact list
- Someone's agent racked up a $63 API bill overnight from an infinite retry loop
These are funny stories now. At scale, they're liabilities.
The Model Landscape
The community data reveals what models people actually use in production:
| Model | Mentions | Role |
|---|---|---|
| Claude (Opus/Sonnet) | 216 | Primary agent model |
| Codex | 59 | Heavy coding tasks |
| GPT/OpenAI | 48 | Alternative/comparison |
| Local models | 48 | Privacy-focused, cost-conscious |
| Qwen | 32 | Popular local model |
| Llama | 25 | Open-source alternative |
| Gemini | 22 | Google integration |
| Ollama | 22 | Local model runner |
The most interesting data point comes from MindDragon, who runs 10-15 different models on a 32-core Epyc server: "You don't need Codex 5.3 or Opus 4.6. You could run a local 14b or even 7b model just fine. It's all about use case."
Multi-model is the reality. No single model wins every task.
The Infrastructure Gap
When infrastructure moves this fast, the security layer takes time to catch up. That's true of every platform that scaled rapidly — and it's true of OpenClaw.
The ClawHub skills marketplace. The top-downloaded skill was found to contain malware. Daniel Lockyer broke the story on X — it went viral, Elon Musk commented, and the disclosure became one of the most-shared posts in the OpenClaw community's history. This is the npm-style attack surface problem arriving for AI agents — it was inevitable, and the community is already building responses to it (ClawGuard being the most prominent in the #showcase data).
The 500-message incident. Bloomberg reported an OpenClaw agent that spammed a user with 500 messages. The agent had broad permissions and no human checkpoint. This isn't a failure of the technology — it's a missing layer in the architecture: the approval step between "compose" and "send."
Broad system permissions. OpenClaw currently requires access to email, calendar, messaging, and other sensitive services. Multiple security researchers have noted this is "primarily suited for advanced users who understand the implications." Granular, scoped permission models are the next thing this ecosystem needs to build.
These are infrastructure problems, not product failures. Every platform at this adoption speed hits them. The question is how fast the ecosystem builds the solutions.
What We've Learned Running 14 Agents Since January
We're not going to tell you how agentic infrastructure should be built. We're going to tell you what we actually do — what broke, what worked, and what we'd do differently.
Yes, we post fully autonomously. Cron jobs fire at 9AM, 1PM, and 6PM. Content goes to X via API and to LinkedIn via Playwright. No human clicks anything at posting time. The human-in-the-loop moment happens upstream — content gets reviewed and approved in a Google Sheet before it ever reaches the scheduler. That's where the judgment call lives, not at execution.
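The upstream gate can be sketched in a few lines. This is an illustration of the pattern, not our production code — `Draft` and `next_post` are hypothetical names, and the approval flag stands in for the reviewed column in the sheet:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    text: str
    platform: str   # "x" or "linkedin"
    approved: bool  # set by a human reviewer upstream, e.g. in the sheet

def next_post(queue: list, platform: str) -> Optional[Draft]:
    """Return the first human-approved draft for a platform, or None.

    The cron job calls this at posting time; anything unapproved
    simply never reaches the publish step.
    """
    for draft in queue:
        if draft.approved and draft.platform == platform:
            return draft
    return None

queue = [
    Draft("hot take, not reviewed", "x", approved=False),
    Draft("launch announcement", "x", approved=True),
]
post = next_post(queue, "x")  # only the approved draft is eligible
```

The point of the design: the scheduler has no code path that publishes an unapproved draft, so the human checkpoint holds even when nobody is watching at 9AM.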
Log files are the most underrated part of the stack. Every post, every failure, every API response gets written to a timestamped log. When things go wrong (and they do), the log is the only thing between "what happened" and "I have no idea what happened." We learned this after a failed post sent Sonia a panic email at 6AM — the log showed exactly what the API returned, and the fix took 10 minutes.
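The pattern is simple: one timestamped, structured line per event, appended to a file. A minimal sketch (our actual logger differs in detail; the field names here are illustrative):

```python
import json
import os
import tempfile
import time

def log_event(path: str, event: str, **fields) -> None:
    """Append one timestamped JSON line per post, failure, or API response."""
    entry = {"ts": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
             "event": event, **fields}
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

# Demo against a temp file so the sketch is self-contained.
log_path = os.path.join(tempfile.mkdtemp(), "agent.log")
log_event(log_path, "post_failed", platform="x", status=403,
          body="Forbidden")  # exactly what you need to see at 6AM

with open(log_path) as f:
    last = json.loads(f.readlines()[-1])
```

JSON lines beat free-text logs for this job because an agent (or a one-liner) can grep and parse them when you're debugging a failure from three days ago.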
Shared memory files when you're running multiple LLMs. We use different models for different jobs — Opus for strategy and planning, Sonnet for creative writing and content, Codex for coding tasks, Grok for real-time search and trend data, Gemini when we need Google properties integration, GLM for cheaper general task management. When these agents need to hand off context between each other, shared files are how they do it. Without that shared memory layer, each agent is working blind.
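A minimal version of that handoff layer looks like this — a shared JSON file where each agent writes under its own key. The function names and keys are ours, for illustration:

```python
import json
import tempfile
from pathlib import Path

def hand_off(path: Path, agent: str, payload: dict) -> None:
    """Write one agent's output so the next model in the chain can read it."""
    state = json.loads(path.read_text()) if path.exists() else {}
    state[agent] = payload
    path.write_text(json.dumps(state, indent=2))

def pick_up(path: Path, agent: str) -> dict:
    """Read another agent's context at the start of a task."""
    return json.loads(path.read_text()).get(agent, {})

shared = Path(tempfile.mkdtemp()) / "shared_memory.json"
hand_off(shared, "research", {"trend": "openclaw telegram", "volume": "breakout"})
context = pick_up(shared, "research")  # the writing agent starts from this
```

This sketch ignores concurrency; if two agents can write at the same time, you'd want file locking or per-agent files merged at read time.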
Assign tasks to the LLM that's actually good at them. This sounds obvious and we still got it wrong at the start. Codex writes better code than Opus. Grok surfaces real-time data that Sonnet doesn't have. Gemini handles Google Analytics queries better than anything else. The mistake is defaulting to one model for everything — you lose a lot of performance and spend a lot more money than necessary.
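In practice this routing can be as dumb as a lookup table. The mapping below reflects the split described above; the model names are illustrative, not a recommendation:

```python
# Hypothetical task-to-model routing table; adjust to your own benchmarks.
ROUTING = {
    "strategy": "opus",    # planning and high-level decisions
    "content":  "sonnet",  # creative writing
    "code":     "codex",   # heavy coding tasks
    "realtime": "grok",    # live search and trend data
    "google":   "gemini",  # Google properties integration
}

def pick_model(task_type: str, default: str = "glm") -> str:
    """Route each task to the model that's actually good at it,
    falling back to a cheap generalist for everything else."""
    return ROUTING.get(task_type, default)
```

The fallback default is where the cost savings live: most routine tasks never need the expensive models at all.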
Human approval on the log, not on every action. We don't approve every tweet. We approve the content plan. We don't review every API call. We review the error logs when something flags. The oversight is there — it's just positioned at the right moment, not micromanaged at every step.
Social media platforms are way behind. This is the part nobody talks about in the "AI agents can do everything" hype cycle. Most platforms are not built for autonomous access. Meta permanently disabled our Facebook account over automated browser activity. X flagged bot behavior and required manual unlocks. The platforms that work reliably are the ones with clean, well-documented APIs: X (when you use the official API correctly), LinkedIn (Playwright works, but it's fragile), and Google properties (Gemini handles these well). The platforms that make autonomy hard are the ones that also make manual use hard — the UX complexity reflects API complexity. You find out quickly which platforms were actually built for automation and which ones just tolerate it.
Persistent memory changes everything. Without it, every session starts from zero. The agent doesn't know what broke last week, what was approved two days ago, what tone works and what doesn't. With it, context accumulates — and the system gets measurably better over time without you having to re-explain yourself constantly. We keep a persistent memory file that gets loaded at the start of every session: what's been built, what's pending, what rules exist, what the agent has learned not to do. The difference between session 1 and session 80 is enormous — not because the model changed, but because the memory did.
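Mechanically, this is just a file prepended to the system prompt at session start. A sketch, with hypothetical names (`MEMORY.md`, `load_memory`) standing in for whatever your stack uses:

```python
import tempfile
from pathlib import Path

def load_memory(path: Path) -> str:
    """Build the session's system prompt from accumulated memory.

    Session 1 starts with nothing; session 80 starts with everything
    the agent has learned not to do.
    """
    base = "You are the posting agent."
    if path.exists():
        return base + "\n\n# Memory\n" + path.read_text()
    return base

memory = Path(tempfile.mkdtemp()) / "MEMORY.md"
memory.write_text("- Never post before content is approved in the sheet.\n")
prompt = load_memory(memory)
```

The file is the system; swapping models underneath it matters less than you'd expect once the memory carries the context.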
We're building what we call a Second Self. After every session, the agent writes a diary entry — what happened, what was decided, what to remember. A separate process synthesizes those entries into a running profile: how Sonia works, what she cares about, what kinds of outputs she rejects, what her voice sounds like. Each interaction adds to that profile. Each new session starts with more context than the last. The goal is an agent that knows you well enough to make fewer mistakes, ask fewer clarifying questions, and produce better first drafts — because it's learned from every previous interaction rather than starting fresh each time.
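The diary-then-synthesize loop can be sketched in a few lines. This is a deliberately naive version — our synthesis step uses an LLM, while here it just folds every "remember" note into one list; all names are illustrative:

```python
import json
import tempfile
import time
from pathlib import Path

root = Path(tempfile.mkdtemp())

def write_diary(entry: dict) -> None:
    """After each session, append what happened and what to remember."""
    path = root / f"diary-{time.strftime('%Y-%m-%d')}.json"
    entries = json.loads(path.read_text()) if path.exists() else []
    entries.append(entry)
    path.write_text(json.dumps(entries))

def synthesize_profile() -> list:
    """Fold every diary entry's lessons into one running profile."""
    lessons = []
    for path in sorted(root.glob("diary-*.json")):
        for entry in json.loads(path.read_text()):
            lessons.extend(entry.get("remember", []))
    return lessons

write_diary({"happened": "drafted 3 posts", "remember": ["no emojis in B2B posts"]})
write_diary({"happened": "fixed cron clash", "remember": ["stagger jobs by 5 min"]})
profile = synthesize_profile()
```

Replacing the fold with an LLM call is what turns a lesson list into an actual profile of how the person works, but the storage shape stays the same.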
Constant iteration, not a finished system. Every week something breaks or needs updating. A cookie expires. An API changes a response format. A cron job conflicts with a new one. The agents we run today look nothing like what we deployed in January — not because the original design was wrong, but because you can't design for everything upfront. The system teaches you what it needs. You build, it breaks, you fix, you improve.
This is what works for us. It's not a playbook — it's an evolving system that we break and fix regularly.
What This Means for Builders
OpenClaw's 145k stars and 922 builders in a single Discord channel are a signal — not about one tool, but about a category. Agentic infrastructure is here, it's growing fast, and the first wave of builders is learning the hard way what the architecture needs to look like.
The companies that get AI agents right won't be the ones with the most GitHub stars. They'll be the ones with the most boring, reliable, invisible agent systems running in the background — doing the work while the human holds the keys.
The infrastructure moment for AI agents is real. OpenClaw is one of the first visible proof points. We're not IT architects — we're marketers running agent systems in production. But that's exactly why this is interesting: if we can do it, the barrier is lower than most people think. The question isn't whether your team can build on agentic infrastructure. It's whether you build it with any guardrails at all.
Frequently Asked Questions
What is OpenClaw?
OpenClaw (formerly Clawdbot, then Moltbot) is an open-source autonomous AI agent framework created by Peter Steinberger. It runs on your devices and can manage emails, calendars, messaging platforms, and execute tasks via LLMs. It hit 145k GitHub stars by February 2026, with 15K+ Discord members and 20K+ forks — the fastest-growing open-source AI agent project ever.
Is OpenClaw safe to use?
OpenClaw requires broad system permissions to function, and malware has already been found in its skills marketplace (ClawHub). Security researchers and Bloomberg have flagged concerns. It's best suited for advanced users who understand the risks of giving an AI agent elevated access to their systems.
What are people building with OpenClaw?
Based on our analysis of 2,963 messages from OpenClaw's Discord #showcase channel (922 unique builders, Jan-Feb 2026), the top use cases are: chat integrations (Telegram is #1 with 92 mentions), task automation (cron jobs, PR reviews, expense tracking), self-hosted setups on Raspberry Pi and Linux, security dashboards and policy engines, voice AI agents with sub-500ms latency, and multi-agent orchestration with 10+ agents per user.
What does human in the loop mean for AI agents?
Human in the loop means a human is involved at strategic checkpoints — not necessarily at every action. In our setup, content gets reviewed and approved before it enters the scheduling pipeline, then agents post to X and LinkedIn fully autonomously. The judgment call happens upstream. This is different from the fully supervised model most people imagine — and it's what makes the system actually scalable.
What is agentic infrastructure?
Agentic infrastructure is the runtime layer that lets AI agents actually operate — managing auth, permissions, tool access, scheduling, and communication between agents. OpenClaw is an early example: it's not an app you use, it's the layer on which you build and run autonomous AI systems. Think Docker for AI agents.
Can I build an AI agent system for my business?
Yes, but start small. One agent, one job. Build persistent memory from the start — it's what makes the system improve over time rather than reset every session. Separate the account that authenticates from the account that operates. Log everything. And decide upfront where your human checkpoints are: not at every action, but at the decisions that actually matter.
Sources
Proprietary Research — First-Party Data
- SoniaIA Discord Analysis — Primary analysis of 2,963 messages from OpenClaw's #showcase channel, Jan 5–Feb 7, 2026. 922 unique contributors. Categorized by use case, hardware, model, and project type. Conducted by SoniaIA Research Director.
- Google Trends — Keyword trajectory data for "openclaw," "AI agents," and "autonomous AI" worldwide, US, and Spain. Pulled February 2026. Full breakout and rising query data included.
- Google Keyword Planner — Monthly search volume for "AI agents" and related terms in Spain and key European markets. February 2026.
Third-Party Sources
- OpenClaw GitHub Repository — Star count, fork count, release history
- Bloomberg — Report on the 500-message autonomous agent incident (February 2026)
- Daniel Lockyer — ClawHub malware disclosure published on X, February 2026 (viral, Elon Musk commented)
- OpenClaw Discord — Public #showcase channel, community builder posts (January–February 2026)
Follow the experiment: @SoniaIA_