Home
News
Tech Grid
Interviews
Anecdotes
Think Stack
Press Releases
Articles
  • AI

Intruder Exposes Widespread Security Risks in Moltbot AI Assistants


Intruder Exposes Widespread Security Risks in Moltbot AI Assistants
  • by: Source Logo
  • |
  • February 4, 2026

Intruder has released security research revealing widespread data exposure risks in deployments of Moltbot (formerly Clawdbot), an open-source, self-hosted AI assistant. The analysis finds that the platform's emphasis on ease-of-use has led to insecure default configurations, resulting in exposed credentials, prompt injection attacks, malicious plugins, and active exploitation by threat actors.

Quick Intel

  • Intruder research exposes critical security risks in Moltbot (Clawdbot) AI assistants.

  • Misconfigured cloud instances lead to exposed API keys and credentials.

  • The platform lacks fundamental security guardrails and safe defaults.

  • Attackers are exploiting these flaws via prompt injection and malicious plugins.

  • Unauthorized AI actions, including data exfiltration, have been observed.

  • Organizations running Moltbot are urged to assume compromise and act immediately.

The Security Cost of Simplified AI Deployment

Moltbot is designed for rapid, simplified deployment as a self-hosted AI assistant, often integrated with email, social media, and cloud services. However, Intruder's research indicates that this emphasis on ease-of-use has come at the cost of security. The platform does not enforce secure-by-default settings such as firewalls, credential validation, or plugin sandboxing, creating a significant and unintended attack surface that is actively being exploited.

Key Vulnerabilities and Active Exploitation

The research details several critical issues: publicly accessible configuration files leaking credentials; prompt injection attacks that cause the AI to leak private data from connected platforms; and the distribution of backdoored third-party plugins that harvest credentials or recruit instances into botnets. These vulnerabilities are not theoretical—Intruder observed real-world exploitation leading to credential theft and unauthorized automated actions by the AI.

Immediate Recommendations for Affected Organizations

Intruder's advisory is urgent. Organizations that have deployed Moltbot, especially with default settings, should assume they are compromised. Immediate steps include disconnecting all third-party integrations, rotating any potentially exposed credentials, implementing strict firewall rules and IP allowlists, removing and auditing third-party plugins, and thoroughly reviewing system logs for signs of unauthorized activity.

This research underscores the critical need for security to be a foundational component of AI tooling, not an afterthought. As AI assistants gain access to sensitive systems and data, ensuring they are deployed with robust guardrails and configurations is essential to prevent them from becoming a vector for significant data breaches.

About Intruder

Intruder’s exposure management platform helps lean security teams stop breaches before they start by proactively discovering attack surface weaknesses. By unifying attack surface management, cloud security and continuous vulnerability management in one intuitive platform, Intruder makes it easy to stay secure by cutting through the noise and complexity. Founded in 2015 by Chris Wallis, a former ethical hacker turned corporate blue teamer, Intruder is now protecting over 3,000 companies worldwide.

  • CybersecurityAIData ExposureThreat Intelligence
News Disclaimer
  • Share