Moltbot (ex Clawdbot): Autonomous AI Agents, Revolution or Digital Time Bomb?

I tested Moltbot, the autonomous AI agent that takes control of your PC. I installed it, tested it and uninstalled it immediately. Here's why autonomy without security is digital Russian roulette and what you really need before trusting it.

Gaetano Castaldo Gaetano Castaldo
11 Feb 2026
AI Cybersecurity Digital Transformation #autonomous AI agents #Moltbot #Clawdbot #AI security #prompt injection #AI agent #cybersecurity AI #AI autonomy #black box AI #AI governance #sandbox AI #browser agent #information security #AI risks
Illustration of a surprised man in front of a laptop while an autonomous AI agent takes control of the computer with glowing digital tentacles

TL;DR

  • Moltbot (ex Clawdbot) isn't a chat bot: it's an autonomous agent that reactivates itself, decides when to act and has direct access to your PC.
  • I tested it and uninstalled immediately: too much autonomy, zero serious security layer.
  • Real risks: hallucinations that become real actions, hidden prompt injections on the web, access to logged sessions and saved cards.
  • The future of agents isn't "access to your PC" but segregated environments, temporary tokens and approvals for sensitive actions.
  • Until governance and robust limits are missing, agents like Moltbot should be treated as exploration, not infrastructure.

I started experimenting with Clawdbot, now known as Moltbot. My goal was simple (and I think it should be the same for anyone approaching these agents): understand how much we can trust software that acts on our behalf.

I tested it and soon after uninstalled it. Now I'll explain why, without "terrorism", but with some healthy digital survival instinct.

What is Moltbot and why it's not your typical bot

The point is that Moltbot isn't your typical chat bot, nor your typical "agent" that lives within a controlled perimeter. Here we're facing a concept taken to its logical conclusion: the idea of an agent that doesn't wait for you, but activates itself.

Until now most bots were interesting, yes, but remained reactive. You had to write something to it, give it input, launch a command, then it would act. Done.

With Moltbot instead there's a logic of "continuous awakening": a cron job that reactivates it when a task ends or on a schedule. And the real coup, the one that makes a scene and is conceptually brilliant, is that it decides itself when to wake up and why: on event or on time.

It seems like a technical detail, but it changes everything. Because the moment an agent decides to reactivate itself is also the moment you're delegating autonomy. And autonomy in computing means operational power.

The problem: direct access to your PC, no sandbox

Then there's the other piece, the more delicate one: the connection with the outside world.

Moltbot's ability to interact with your environment is "unparalleled" precisely because, in fact, you're handing it your PC. Not an API. Not a mediated layer. Not a sandbox that limits damage. Your PC.

It's like saying: "go ahead, you're smart enough".

Except here a theme comes in that many ignore until they hit it head-on: an AI model's intelligence doesn't equal its reliability.

Hallucinations and prompt injection: when risk becomes real

Bots are already insecure by nature, even when they don't have access to the operating system. Because they transmit bias, because they hallucinate, because they interpret badly.

And as long as this stays in text, it's annoying but manageable: you correct, retry, change the prompt. But if the bot is an agent that navigates for you, fills forms, clicks buttons, buys things, and maybe has access to already-logged sessions and saved cards... then hallucination stops being a "nerd problem" and becomes a business and legal problem.

Concrete example: the agent that buys on your behalf

I'll give you a brutal example because it makes the point clear: imagine a "hallucinating" agent that misreads a page and concludes it needs to buy something. With your data. With your card.

Or imagine an agent that, while browsing, encounters a hidden prompt injection on a web page. You don't even need a "hacker" site in movie style. Just malicious content hidden somewhere unexpected: a comment, invisible text, an HTML block with instructions designed to fool the model.

And this stuff isn't theory anymore. Prompt injections are already on the web and becoming increasingly common precisely because now it pays: if you can manipulate the agent, you manipulate the action. It's a huge upgrade for attackers.

The missing security: need rules, not hopes

The problem is we don't have a serious, deep way to regulate the security of these agents. And I mean "regulate" in the true sense:

  • Granular authorizations
  • Verifiable policies
  • Robust limits
  • Complete audit trail
  • Anti-fraud controls
  • Blocks on certain action categories
  • Monitoring and kill switches that actually work

Not a "just be careful". Because "be careful" isn't a security strategy: it's hope. It's the same principle that applies when talking about AI without governance: without clear rules, risk is proportional to power.

Goodbye CAPTCHA: when the agent "becomes you"

There's another aspect that made me raise an eyebrow: these agents today don't limit themselves to text. With screen capture they can interpret what they see and act.

Translated: even old "prove you are human" (CAPTCHA and similar) start losing meaning, because if the agent sees the screen and understands the context, it can often manage.

And when an agent can simulate you well in the browser, you might not notice anything until it's too late. Because it's using your sessions, your cookies, your access. It's "being you", without being you.

Why the "serious" market doesn't adopt these agents (yet)

At this point someone might say: "Ok, but why then? Why does it exist?"

And that's where the part many underestimate comes in: if innovations like this don't explode immediately massively in the "serious" market, it's not because they're not cool. It's because software engineers do their job: they study vulnerabilities, look for holes, build mitigations.

The problem is that a certain trend seems to be taking over common sense. There's anxiety to run, to be first, to make the LinkedIn video with "I automated everything", and meanwhile we're giving huge privileges to systems that, technically, are still experimental.

The model is a black box: Geoffrey Hinton's warning

And there's an even deeper point, one that makes me say "slow down" even when I'm the first to push AI: the model is a black box.

We don't really know what happens inside. We can describe the training algorithm, we can observe patterns, we can do red teaming, but we can't open the system's head and say "this variable here, if stimulated this way, will throw it off track". We don't have that level of control.

Geoffrey Hinton says essentially the same: we've understood the algorithm they learn by, but we haven't really understood how they do it.

And I read it this way: if you don't understand the "how", you must compensate with governance and limits. If instead you remove limits and hope it works... that's Russian roulette.

As we saw talking about AI acceleration with governance: speed without control isn't innovation, it's risk.

Why I uninstalled it (and what it says about the future)

So yes: I tested it and uninstalled it. Not because it doesn't work, but because it works "too much" relative to how ready we are to manage it.

And that's the paradoxical part: power isn't the main problem. The problem is the combination of power, autonomy and lack of a serious security layer.

The future of AI agents: powerful inside a cage

My opinion, a bit outside the box, is this: the future won't be agents that have "access to your PC". The future will be agents that have access to a "fake PC", meaning:

  • Segregated environments (dedicated sandboxes)
  • Temporary tokens with expiration
  • Granular permissions with expiration
  • Credential vaults
  • Approvals for sensitive actions

In practice: the agent mustn't be free. It must be powerful inside a cage.

Until we get there, Moltbot and company should be treated as exploration, not infrastructure. Advanced toy, not an employee hired with banking authority.


Good luck to us, seriously. Because the direction is inevitable. But the difference between evolution and disaster will be discipline, not hype.

Tags

#autonomous AI agents #Moltbot #Clawdbot #AI security #prompt injection #AI agent #cybersecurity AI #AI autonomy #black box AI #AI governance #sandbox AI #browser agent #information security #AI risks
Gaetano Castaldo
Gaetano Castaldo Sole 24 Ore

Founder & CEO · Castaldo Solutions

Consulente di trasformazione digitale con esperienza enterprise. Aiuto le PMI italiane ad adottare AI, CRM e architetture IT con risultati misurabili in 90 giorni.

Read also

Related articles you might find interesting

Modern data centers powered by solar panels with symbols of artificial intelligence, representing the energy revolution of AI

The AI race ignites the new energy revolution

Between data centers consuming tens of gigawatts and MIT's new ultra-thin solar cells, AI is forcing the system into an energy upgrade like never before. The challenges don't stop us: they force us to evolve.

20 Jan 2025 Read →

Want to learn more?

Contact us for a free consultation on your company's digital transformation.

Contact Us Now