In late January 2026, an open-source AI assistant called Clawdbot became one of the fastest-growing projects on GitHub, amassing over 80,000 stars in a matter of days. Within a week, it had been forced into a trademark rebrand, exploited by crypto scammers, and flagged by security researchers for serious vulnerabilities. The saga offers a cautionary tale, not just for developers, but for any organisation evaluating the rapidly expanding world of agentic AI.
What is Clawdbot/OpenClaw?
Clawdbot, created by Austrian developer Peter Steinberger, is a self-hosted AI assistant designed to run locally on a user's machine. Unlike cloud-based chatbots, it integrates directly with messaging platforms such as WhatsApp, Telegram, Discord, and Slack, allowing users to interact with it as though messaging a colleague. Users can connect it to a preferred large language model (typically Anthropic's Claude) and grant it access to files, calendars, emails, and system commands.
The appeal is clear: a personal AI butler that lives on your own hardware, under your control. The tool was described by some as "Claude with hands", an AI that does not merely respond to queries but acts on your behalf.
The trademark dispute and its fallout
The trouble began when Anthropic sent Steinberger a trademark claim: "Clawd" bore too close a resemblance to "Claude." Steinberger complied and announced a rebrand to "Moltbot" (a reference to moulting, the process by which lobsters shed their shells, in keeping with the project's crustacean branding).
In the brief window between releasing the old GitHub and X handles and securing the new ones, opportunists seized both accounts. The hijacked accounts were used to promote a fake cryptocurrency token, $CLAWD, which briefly reached a market capitalisation of $16 million before collapsing once Steinberger publicly denied any involvement. Malwarebytes subsequently documented a broader impersonation campaign, including typosquatted domains and a cloned GitHub repository that falsely attributed authorship to Steinberger. The project has since been renamed again, to OpenClaw.
The security vulnerabilities
Whilst the rebrand chaos attracted headlines, the more significant concern lies in the security vulnerabilities researchers uncovered.
One user running routine internet scans found hundreds of Clawdbot instances exposed to the public internet with no authentication. The root cause was a design decision: Clawdbot automatically approves connections it believes originate from the user's own machine. When users deployed the tool behind a reverse proxy (a common configuration), all incoming traffic appeared to originate from localhost, effectively bypassing authentication entirely.
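The failure mode is easy to reproduce in miniature. The sketch below is illustrative only; it is not Clawdbot's actual code, and the function name `is_trusted` is invented. It shows why a "trust loopback connections" check stops meaning anything once a reverse proxy sits in front of the application:

```python
def is_trusted(peer_addr: str) -> bool:
    """Naive 'local-only' auth: trust any TCP peer on the loopback interface."""
    return peer_addr in ("127.0.0.1", "::1")

# A direct connection from the internet is correctly rejected...
print(is_trusted("203.0.113.7"))   # False

# ...but a reverse proxy terminates the external connection and opens a
# fresh one to the app from the same host, so the app only ever sees a
# loopback peer address: every visitor on the internet passes the check.
print(is_trusted("127.0.0.1"))     # True
```

The standard mitigations are to require explicit authentication regardless of peer address, or to have the application read the client address from the proxy's forwarded-client header, trusting that header only when it arrives from a known proxy.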
The consequences were severe. Researchers found exposed instances granting access to API keys, OAuth tokens, months of private conversation histories and, in some cases, full root shell access to the underlying servers. By early February, BitSight had identified over 30,000 exposed instances. Commodity infostealer malware adapted within 48 hours to target Clawdbot configuration directories, faster than most security teams could respond.
The enterprise alternative and its limits
Clawdbot sits at the DIY end of the agentic AI spectrum: open-source, self-hosted, with no built-in governance. At the other end, enterprise platforms are being designed with security architecture from the ground up. Microsoft's Agent 365 advertises a centralised agent registry with access controls, real-time monitoring, and identity management. Salesforce's Agentforce offers embedded safety guardrails and permissioned data access within its CRM ecosystem, while ServiceNow's AI Control Tower advertises centralised governance and compliance monitoring across platforms. The legal profession is following suit, with platforms such as Thomson Reuters' CoCounsel and LexisNexis's Protégé offering agentic AI workflows with built-in governance and confidentiality protections designed specifically for legal work.
Yet enterprise platforms are not immune to risk. Agentic AI introduces novel attack surfaces regardless of where it is deployed, and even managed solutions require rigorous security oversight.
Are there risks to the legal profession?
The Clawdbot episode raises questions the legal profession will need to grapple with as agentic AI tools proliferate.
There is the liability question: when an AI agent with deep system access is exploited, is the developer liable for insecure defaults, the deployer for misconfiguration, or the user for granting excessive permissions? Existing product liability and negligence frameworks were not designed with this architecture in mind.
There is the data protection dimension: exposed instances containing months of private messages and API credentials represent potential breaches under the GDPR and equivalent regimes. Organisations whose employees experimented with Clawdbot on corporate systems may face notification obligations they are not yet aware of.
And there is a broader governance lesson: for organisations developing AI policies, the question is not whether to adopt agentic AI, but how, and the governance framework matters as much as the technology itself. "Self-hosted" and "local-first" do not automatically mean "secure," and even enterprise-grade platforms require rigorous oversight. As Gartner has predicted, over 40% of agentic AI projects will be cancelled by the end of 2027 due to escalating costs, unclear business value, or inadequate risk controls.
The Clawdbot saga is a preview of the regulatory and litigation challenges that agentic AI will bring. As these tools move from developer experiments to mainstream adoption, the legal profession would do well to pay attention.
Your thoughts?
Is your organisation evaluating agentic AI tools? How are you approaching the security and liability questions they raise? We would love to hear from you: info@deep-lex.com
Sources
- 'Clawdbot Chaos: A Forced Rebrand, Crypto Scam and 24-Hour Meltdown' Decrypt (January 2026) https://decrypt.co/356191/clawdbot-chaos-forced-rebrand-crypto-scam-24-hour-meltdown accessed 10 February 2026.
- 'Fake "ClawdBot" AI Token Hits $16M Before 90% Crash — Founder Warns of Scam' Yahoo Finance (January 2026) https://finance.yahoo.com/news/fake-clawdbot-ai-token-hits-121840801.html accessed 10 February 2026.
- 'But Why is Clawdbot (Moltbot) Going Viral? "Claude with Hands"' Medium (29 January 2026) https://medium.com/coding-nexus/but-why-is-clawdbot-moltbot-going-viral-claude-with-hands-79c72598ff89 accessed 10 February 2026.
- 'OpenClaw: The AI Butler With Its Claws On The Keys To Your Kingdom' BitSight (February 2026) https://www.bitsight.com/blog/openclaw-ai-security-risks-exposed-instances accessed 10 February 2026.
- 'Clawdbot's Rename to Moltbot Sparks Impersonation Campaign' Malwarebytes (January 2026) https://www.malwarebytes.com/blog/threat-intel/2026/01/clawdbots-rename-to-moltbot-sparks-impersonation-campaign accessed 10 February 2026.
- 'Infostealers Added Clawdbot to Their Target Lists Before Most Security Teams Knew It Was Running' VentureBeat (January 2026) https://venturebeat.com/security/clawdbot-exploits-48-hours-what-broke accessed 10 February 2026.
- 'Hundreds of Clawdbot Instances Were Exposed on the Internet' Towards AI (February 2026) https://pub.towardsai.net/hundreds-of-clawdbot-instances-were-exposed-on-the-internet-heres-how-to-not-be-one-of-them-63fa813e6625 accessed 10 February 2026.
- 'Critical AI Agent Flaws Exposed in Microsoft Copilot Studio and ServiceNow' WinBuzzer (4 February 2026) https://winbuzzer.com/2026/02/04/critical-ai-agent-flaws-exposed-in-microsoft-and-servicenow-xcxwbn/ accessed 10 February 2026.
- 'Microsoft Ignite 2025: Copilot and Agents Built to Power the Frontier Firm' Microsoft 365 Blog (18 November 2025) https://www.microsoft.com/en-us/microsoft-365/blog/2025/11/18/microsoft-ignite-2025-copilot-and-agents-built-to-power-the-frontier-firm/ accessed 10 February 2026.
- 'ServiceNow Advances Enterprise AI through Seamless Integrations with Microsoft' ServiceNow Newsroom (November 2025) https://newsroom.servicenow.com/press-releases/details/2025/ServiceNow-Advances-Enterprise-AI-through-Seamless-Integrations-with-Microsoft-Enabling-Collaboration-Orchestration-and-Governance/default.aspx accessed 10 February 2026.
- 'Enterprise AI Agents: Salesforce, ServiceNow, Microsoft 2026' Planetary Labour (January 2026) https://planetarylabour.com/articles/enterprise-ai-agents accessed 10 February 2026.
- 'Airia Adds AI Governance for Compliance, Accountability, and Control' Help Net Security (14 January 2026) https://www.helpnetsecurity.com/2026/01/14/airia-adds-ai-governance-for-compliance-accountability-and-control/ accessed 10 February 2026.
