Don't Learn OpenClaw—It Won't Last the Year

·861 words·2 mins
Xianpeng Shen · Engineer. Builder. Maintainer. | From Code to Infra

Let me make a "bold assertion": OpenClaw is a flash in the pan.

Let me start with the conclusion: I believe OpenClaw is a great idea, but it won’t be a great solution.

If you’ve been following AI recently, you’ve probably heard of it. This open-source AI agent framework, created by developer Peter Steinberger, has garnered over 200,000 GitHub stars in just a few months—a sensational number for any project. It’s described as “a truly capable AI assistant”: running on local machines, accessing files, executing commands, sending and receiving messages via WhatsApp and Telegram, and even self-improving. Many liken it to a gateway to AGI.

Sounds cool, right?

But my judgment is: OpenClaw will be like a shooting star, streaking across the night sky, then slowly fading into oblivion.

First Problem: Out-of-Control Costs

OpenClaw's most attractive selling point is that it is an "always-on" autonomous agent. But that feature carries a harsh price: it calls large language models continuously, racking up high and unpredictable API fees.

This is where the real problem lies. Expensive but controllable would be acceptable. Expensive, with no way to know how many calls the next task will trigger, how long it will run, or how much it will cost: that is what makes people use it with trepidation.

Even more ironically, after spending all that money, it often forgets instructions, requiring constant monitoring, correction, and supervision from you. Its so-called “autonomy” is, in many cases, an illusion.

For a few users with clear, high-value workflows, this expenditure might still be worthwhile. But for most ordinary people, there are simply no daily scenarios where it can “pay for itself”—the end result is money spent with no substantial returns.
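To make the cost dynamic concrete, here is a minimal sketch. The per-token prices, token counts, and loop counts are my own illustrative assumptions, not OpenClaw's real usage or any provider's actual rates; the point is only that spend scales with a loop count you cannot know in advance.

```python
def iteration_cost(input_tokens: int, output_tokens: int,
                   usd_per_m_input: float = 3.0,
                   usd_per_m_output: float = 15.0) -> float:
    """USD cost of a single agent loop iteration (assumed example rates)."""
    return (input_tokens * usd_per_m_input
            + output_tokens * usd_per_m_output) / 1_000_000

def daily_spend(loops_per_day: int, context_tokens: int,
                output_tokens: int = 1_000) -> float:
    """An always-on agent re-sends its (growing) context on every loop,
    so total spend scales with a loop count nobody bounds in advance."""
    return loops_per_day * iteration_cost(context_tokens, output_tokens)

# One interactive chat turn is cheap at these rates (about a cent)...
print(iteration_cost(2_000, 500))

# ...but a few hundred autonomous loops over a 50k-token context is
# tens of dollars per day, and nothing caps the loop count.
print(daily_spend(500, 50_000))
```

A single turn is pocket change; an unattended agent churning through a large context all day is not, which is exactly the "expensive and unpredictable" combination described above.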

Second Problem: Security

Unlike sandboxed cloud services like ChatGPT, OpenClaw requires significant permissions on your local machine—reading and writing files, executing system commands.

Theoretically, this grants it powerful capabilities. In practice, it’s a ticking time bomb.

Data leaks, system crashes, malicious exploitation—these risks are not “maybe they’ll happen,” but “they’ll happen sooner or later.” Enterprise users are almost certainly unable to adopt it, because security and compliance issues are far more complex than the technology itself; without two or three years of refinement, this market segment is essentially closed off.

Individual users are willing to tinker, but enterprises cannot afford the gamble.
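To see why unrestricted command execution is a ticking time bomb, consider the following sketch. It is my own hypothetical illustration, not OpenClaw's actual tool layer: an agent's "run a command" tool without an explicit allowlist lets a prompt-injected model do anything the OS user can do.

```python
import shlex
import subprocess

# Hypothetical allowlist of programs the agent may invoke (illustrative).
ALLOWED = {"echo", "ls", "cat", "grep"}

def run_tool(command: str) -> str:
    """Run a shell command requested by the agent, but only if the program
    is allowlisted. Omit this check and a single malicious instruction
    smuggled into the agent's context can read, exfiltrate, or delete
    anything the local user account can touch."""
    argv = shlex.split(command)
    if not argv or argv[0] not in ALLOWED:
        raise PermissionError(f"blocked: {command!r}")
    result = subprocess.run(argv, capture_output=True, text=True, check=False)
    return result.stdout

# run_tool("echo hello")  -> returns the command's output
# run_tool("rm -rf ~")    -> raises PermissionError instead of wiping files
```

An allowlist is only a first line of defense; real sandboxing, auditing, and compliance work are exactly the "two or three years of refinement" the enterprise market would demand.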

Third Problem: Competition

OpenClaw is open source, which was key to its viral spread. But open source is a double-edged sword.

OpenAI, Google, and Anthropic are each building their own agent products, possessing more resources, stronger models, and more robust security systems. OpenClaw is driven by community enthusiasm and novelty, but once the big tech companies’ products mature, this enthusiasm will quickly fade.

A more realistic problem is that most users don’t actually know why they need OpenClaw. When the novelty wears off, what remains is confusion and disappointment, not loyalty.

In contrast, Anthropic’s Claude Cowork takes a completely different approach—instead of making users set up their own environment, it’s designed as an out-of-the-box desktop tool, making it easier for ordinary people to get started directly. I believe this is what AI agent products should truly look like. It’s foreseeable that as this direction is validated, big tech companies like Google and OpenAI will follow suit, introducing similar solutions.

By that time, the market space for “do-it-yourself, self-sufficient” approaches like OpenClaw will only narrow.

So, Where Will It Go?

OpenClaw represents a valuable exploration. It proved that the direction of “locally running autonomous AI agents” is feasible and pushed the entire community to think about agent frameworks. This point is commendable.

However, “valuable exploration” and “sustainable product” are two different things.

High costs, security risks, big tech competition, and a lack of user education—these four obstacles combined make it difficult to sustain a long-term prosperous ecosystem. My prediction is: OpenClaw will gradually cool down in a few months, eventually becoming one of those “many stars, no maintenance” projects on GitHub.

In the history of technology, projects that are a flash in the pan often serve as stepping stones, not destinations. OpenClaw will be a stepping stone in the era of AI agents—important, but not where you should place your bets.


Please cite the author and source when reproducing articles from this site. Do not use for any commercial purposes. Welcome to follow the official account “DevOps攻城狮”.
