OpenClaw: the open-source AI agent that promises everything — but how safe is it really?


In just three weeks, OpenClaw grew from a hobby project into one of the most talked-about AI tools in the world. With over 234,000 GitHub stars, hundreds of thousands of users, and a creator personally recruited by Sam Altman to join OpenAI, the story almost seems too good to be true. And in some ways, it might be. In this article, we take a deep dive into what OpenClaw actually is, what you can do with it, and why security experts worldwide are sounding the alarm.
OpenClaw is a free, open-source AI agent developed by Austrian developer Peter Steinberger. Unlike conventional chatbots that only generate text, OpenClaw actually executes tasks on your computer. Think reading and replying to emails, managing your calendar, running terminal commands, controlling your smart home, or deploying code — all through a chat interface in apps like Signal, Telegram, WhatsApp, or Discord.
The project started in November 2025 under the name Clawdbot — a play on "Claude," Anthropic's AI model. After a friendly request from Anthropic's legal team, it was renamed to Moltbot, and eventually to OpenClaw. The name refers to the molting process of lobsters: growing by shedding your old shell.
Technically, OpenClaw runs as a long-lived Node.js process (called the Gateway) on your own hardware. You connect your own API key — from Anthropic, OpenAI, Google Gemini, or DeepSeek — and everything runs locally. Memory is stored as Markdown files on your disk, meaning your data essentially never leaves your device.
OpenClaw's power rests on three pillars: computer access, persistent memory, and the so-called heartbeat system.
Through the skills system, you can give the agent new capabilities. A skill is essentially a Markdown file with instructions for specific behavior — from performing a security audit to managing your Spotify playlist. On ClawHub, the official skills marketplace, there are now over 5,700 community-built skills available. What makes the system particularly remarkable is that OpenClaw can also write new skills itself based on your requests. The agent literally learns as it goes.
The memory system works on multiple layers. There's a daily journal that gets loaded at the start of each session, and a curated long-term memory for recurring preferences and workflows. When a session approaches the context limit, OpenClaw automatically activates a compaction procedure: the model first saves important memories before older information is removed. This way, as little context as possible is lost.
Privacy and control. OpenClaw's biggest selling point is its privacy-first approach. Because everything runs locally and you decide which models, skills, and permissions to enable, you maintain maximum control over your data. For those concerned about big tech's appetite for data, this is a refreshing alternative.
Real tasks, not just text. Where most AI assistants stop at generating answers, OpenClaw goes a step further. It can move files, visit websites, execute scripts, and send messages — across more than fifty platforms. This makes it a potentially powerful automation tool for developers and IT professionals.
Open-source and no subscription. There are no monthly costs associated with OpenClaw itself. You only pay for the API usage of the underlying language model. For light usage, most users sit comfortably under thirty euros per month.
A vibrant community. The ecosystem's growth is impressive. By early February 2026, the GitHub project had 140,000 stars and 20,000 forks. Additionally, entrepreneur Matt Schlicht launched Moltbook — a sort of social network for AI agents, described by some as "Reddit for bots." On Moltbook, AI agents communicate with each other, post messages, and vote on each other's contributions. The platform claims 1.6 million registered agents.
Self-improving system. OpenClaw can expand its own capabilities by automatically writing code for new skills. This makes the system adaptive and allows it to grow with the user's needs.
Serious security issues. This is the biggest concern. A security audit by Kaspersky in January 2026 uncovered no fewer than 512 vulnerabilities, eight of which were classified as critical. The most severe, CVE-2026-25253 with a CVSS score of 8.8, enabled remote code execution. Although patches have been released, it demonstrates the system's fundamental vulnerability.
Security researchers from Censys and Bitsight discovered that between late January and early February 2026, more than 30,000 OpenClaw instances were publicly accessible on the internet — many without any authentication. This means API keys, personal messages, and login credentials for services like Gmail and Slack were visible to anyone.
Supply chain attacks. In February 2026, 386 malicious skills were discovered on ClawHub. These skills contained hidden instructions that could exfiltrate sensitive data. In at least one case, the complete inbox of a security researcher at Meta was wiped by a compromised skill. The open marketplace is therefore not just a blessing but also a serious risk.
Prompt injection. Because OpenClaw communicates through messaging apps, the attack surface is expanded. Bad actors can send specially crafted messages that manipulate the agent into unintended behavior — from forwarding private messages to executing harmful commands.
Not for beginners. OpenClaw requires considerable technical knowledge. You need to be comfortable with Docker, API key management, logging, and potentially a VPS. For the average consumer, the barrier to entry is simply too high, and the risks of misconfiguration too great.
Costs can add up. While basic usage is affordable, costs can quickly rise with intensive use. Users running heavy automation or browser-driven tasks report monthly costs of fifty to one hundred and fifty euros — and sometimes more.
Hype versus reality. Despite the impressive demos on social media, OpenClaw does not replace human employees. It can speed up repetitive tasks, but it cannot take responsibility. Those who blindly trust the output without oversight are taking a risk.
On February 15, 2026, Steinberger announced he would be joining OpenAI, where he will work on making AI agents more accessible. OpenClaw itself will be housed in an independent open-source foundation to ensure the project's continuity and independence.
This is a crucial moment. The transition to a foundation could give the project the professional governance it needs — including better security audits, a stricter review process for skills, and clearer guidelines for users. But it could also mean that the fast, community-driven innovation slows down.
OpenClaw represents a fascinating new generation of AI tools that push the boundary from generating text to actually taking action. The privacy-focused architecture, extensible skills system, and vibrant community make it a project worth watching.
But the security risks are currently too significant to recommend OpenClaw for daily use by non-technical users. The combination of broad system access, an open skills marketplace, and the absence of a mature security framework makes it vulnerable to abuse.
For developers and IT professionals who know what they're doing and are willing to manage the risks, OpenClaw is a powerful experiment that offers a glimpse of what personal AI assistants could look like in the near future. For everyone else: follow the developments with interest, but hold off on jumping in just yet.

