Introduction: The Evolution of Personal AI
Late January 2026 will be remembered in the open-source community as the week of the “Great Migration.” Following a swift trademark dispute with Anthropic regarding the name “Claude,” Peter Steinberger’s viral AI project, Clawdbot, has officially shed its old shell.
Enter Moltbot.
While the name has changed—a nod to the “molting” process of a lobster—the core promise remains revolutionary. Unlike ChatGPT or Gemini, which exist as remote services in a browser tab, Moltbot is Agentic AI. It lives on your machine, owns its data, and crucially, has the agency to perform tasks, not just answer questions.
Whether you are a developer wanting a coding companion or a productivity hacker looking to automate your digital life, this guide will walk you through setting up, securing, and mastering Moltbot.

Chapter 1: What is Moltbot and Why Does It Matter?
Before we open the terminal, it is vital to understand the architectural philosophy that makes Moltbot different from standard LLMs (Large Language Models).
1. Local Sovereignty
Moltbot runs as a Node.js application on your hardware (Mac, Linux, or Windows via WSL).
- Privacy: Your long-term memory (Vector Database) is stored locally.
- Agnosticism: You are not tied to one provider. You can power Moltbot with Anthropic’s Claude 3.5, OpenAI’s GPT-5, or Google’s Gemini 1.5 Pro via API.
2. The “Agent” Difference
Standard AI is passive; it waits for input. Moltbot is active.
- Tools: It has access to a file system, a web browser (via Puppeteer), and your calendar.
- Gateways: It lives where you communicate. You don’t log into a Moltbot app; you text it on Telegram, WhatsApp, or Signal as if it were a contact in your phone.
3. The “Molting” Concept (Self-Improvement)
The rebrand isn’t just cosmetic. The new v2.0 architecture emphasizes “iterative refinement.” When Moltbot writes code or drafts an email, it runs a self-check loop (a critique phase) to refine the output before sending it to you.
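The internals of that critique phase aren't spelled out here, but the general pattern is easy to sketch. The following is a hypothetical illustration of a draft-critique-revise loop, not Moltbot's actual v2.0 code; the function names and loop structure are assumptions.

```typescript
// Hypothetical draft-critique-revise loop. In a real agent, both
// generate() and critique() would be LLM calls; here they are just
// injected functions so the control flow is visible.
type Generate = (prompt: string) => string;
type Critique = (draft: string) => string | null; // null = no issues found

function selfRefine(
  prompt: string,
  generate: Generate,
  critique: Critique,
  maxRounds = 3,
): string {
  let draft = generate(prompt);
  for (let i = 0; i < maxRounds; i++) {
    const feedback = critique(draft);
    if (feedback === null) break; // the critique passed; stop "molting"
    // Feed the critique back into the next draft.
    draft = generate(`${prompt}\nRevise to address: ${feedback}`);
  }
  return draft;
}
```

The cap on rounds matters in practice: each revision is another paid LLM call, so an unbounded loop can quietly burn through API credits.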
Chapter 2: Critical Security Warnings
Because Moltbot acts as an agent with permissions to execute commands and read files, security is paramount.
⚠️ SECURITY ALERT:
- Official Source Only: Due to the viral rebranding on Jan 27-28, 2026, scam sites selling “Molt Tokens” have appeared. Moltbot is free, open-source software. Only download from the official GitHub repository.
- Sandboxing: It is highly recommended to run Moltbot inside a Docker Container. Giving an AI root access to your raw file system is risky.
- API Costs: While the software is free, the intelligence is not. You will need a paid API key from an LLM provider (e.g., Anthropic or OpenAI). Monitor your usage limits.
Chapter 3: Installation & Environment Setup
Let’s get your digital assistant running. We will assume a macOS or Linux environment for this guide.
Prerequisites
- Node.js: Version 22 (LTS) or higher.
- Git: To clone the repository.
- API Key: An active key from Anthropic (recommended for best coding performance) or OpenAI.
Step 1: Clone the Repository
Open your terminal. Note the updated repository name following the rebrand.
Bash
git clone https://github.com/moltbot/moltbot.git
cd moltbot
npm install
Step 2: Configuration (.env)
Moltbot uses environment variables to manage secrets. Do not skip this step.
Bash
cp .env.example .env
Open the .env file in your text editor (VS Code, Nano, or Vim). You need to configure the “Brain” of the bot:
# AI Provider Configuration
LLM_PROVIDER="anthropic" # Options: anthropic, openai, google, ollama
ANTHROPIC_API_KEY="sk-ant-..."
# Bot Personality
BOT_NAME="Moltbot"
SYSTEM_PROMPT="You are a helpful, witty AI assistant who prefers concise answers."
# Persistence (Memory)
VECTOR_DB_PATH="./data/memory"
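Under the hood, a Node.js app typically reads these values from process.env and validates them at startup. The sketch below is illustrative only; the key names mirror the .env file above, but the actual Moltbot loader may be structured differently.

```typescript
// Illustrative config loader; key names mirror the .env example,
// but the real Moltbot startup code may differ.
type LLMProvider = 'anthropic' | 'openai' | 'google' | 'ollama';

interface MoltbotConfig {
  provider: LLMProvider;
  apiKey?: string;
  botName: string;
  vectorDbPath: string;
}

function loadConfig(env: Record<string, string | undefined>): MoltbotConfig {
  const provider = (env.LLM_PROVIDER ?? 'anthropic') as LLMProvider;
  const valid: LLMProvider[] = ['anthropic', 'openai', 'google', 'ollama'];
  if (!valid.includes(provider)) {
    throw new Error(`Unknown LLM_PROVIDER: ${provider}`);
  }
  // A local Ollama model needs no API key; hosted providers do.
  if (provider !== 'ollama' && !env.ANTHROPIC_API_KEY && !env.OPENAI_API_KEY) {
    throw new Error('Missing API key for hosted provider');
  }
  return {
    provider,
    apiKey: env.ANTHROPIC_API_KEY ?? env.OPENAI_API_KEY,
    botName: env.BOT_NAME ?? 'Moltbot',
    vectorDbPath: env.VECTOR_DB_PATH ?? './data/memory',
  };
}
```

Failing fast on a bad provider name or a missing key is worth the few extra lines: a typo in .env then surfaces at launch instead of mid-conversation.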
Chapter 4: Setting Up Gateways (Telegram Integration)
The magic of Moltbot is accessing it from your phone. We will set up the Telegram Gateway, as it offers the best balance of security and features.
1. Create a Telegram Bot
- Open Telegram and search for @BotFather.
- Send the command /newbot.
- Name it (e.g., “MyPersonalMoltbot”).
- BotFather will give you an HTTP API Token. Copy this.
2. Connect Moltbot
Go back to your .env file and find the Gateway section:
# Telegram Gateway
TELEGRAM_ENABLED=true
TELEGRAM_BOT_TOKEN="123456:ABC-DEF..."
TELEGRAM_ALLOWED_USERS="987654321" # CRITICAL: Put your User ID here!
Note: Use @userinfobot on Telegram to find your numeric User ID. If you leave this blank, anyone who finds your bot can use your API credits.
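To make the stakes of that setting concrete, here is a sketch of what an allow-list check might look like. This is illustrative, not Moltbot's actual gateway code, and it is written fail-closed (an empty list rejects everyone), which is the safer design even though the warning above suggests a blank value leaves the bot open.

```typescript
// Illustrative allow-list check for TELEGRAM_ALLOWED_USERS
// (comma-separated numeric IDs). Not the real gateway code.
function parseAllowedUsers(raw: string | undefined): Set<string> {
  return new Set(
    (raw ?? '')
      .split(',')
      .map(id => id.trim())
      .filter(id => id.length > 0),
  );
}

function isAllowed(userId: number, allowed: Set<string>): boolean {
  // Fail closed: an empty allow-list rejects everyone rather than
  // letting strangers spend your API credits.
  return allowed.size > 0 && allowed.has(String(userId));
}
```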
3. Launch
Return to your terminal and start the engine:
Bash
npm run start
You should see: [Moltbot] System Online. Listening on Telegram Gateway...
Chapter 5: Using Moltbot – Skills and Workflows
Open Telegram and send “Hello” to your new bot. Now, let’s explore what it can actually do.
1. The “Browse” Skill
Unlike models constrained by a training-data cut-off, Moltbot can browse the live web.
- Command: “Check the Reddit thread about the ‘Moltbot rebrand’ and summarize the community sentiment for me.”
- Action: It spins up a headless browser, scrapes the text, synthesizes it, and sends you a digest.
2. The “File” Skill (Local RAG)
You can drop PDF or Text files into the ./data/documents folder on your computer.
- Command: “Read the budget_2026.csv file I just saved and calculate my total software subscription costs.”
- Action: Moltbot analyzes the local file securely without uploading it to a third-party cloud (aside from the snippets sent to the LLM provider for inference).
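The analysis step for a request like that can be done entirely in local code. The snippet below is a sketch of the idea, not Moltbot's implementation; the CSV column names (category, monthly_cost) are assumptions for illustration.

```typescript
// Illustrative local CSV analysis in the spirit of the File skill.
// Assumes a header row like: name,category,monthly_cost
function totalByCategory(csv: string, category: string): number {
  const [header, ...rows] = csv.trim().split('\n');
  const cols = header.split(',').map(c => c.trim());
  const catIdx = cols.indexOf('category');
  const costIdx = cols.indexOf('monthly_cost');
  if (catIdx === -1 || costIdx === -1) throw new Error('missing expected columns');
  return rows
    .map(r => r.split(','))
    .filter(cells => cells[catIdx].trim() === category)
    .reduce((sum, cells) => sum + Number(cells[costIdx]), 0);
}
```

Only the final question and a compact view of the data need to reach the LLM; the number-crunching itself never leaves your machine.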
3. Creating Custom Skills (The Power User Move)
Moltbot is extensible. You can write simple TypeScript functions in the src/skills directory.
Example: A “Crypto Watcher” Skill
TypeScript
// src/skills/crypto.ts
import { Skill } from '../core/skill';

export const checkPrice = new Skill({
  name: 'checkCrypto',
  description: 'Checks current price of a coin',
  execute: async (coin: string) => {
    // Fetch data from an API and return text
    const data = await fetch(`https://api.coingecko.com...`);
    return `The price of ${coin} is...`;
  }
});
Once saved, you can simply ask Moltbot: “What’s the price of Bitcoin?” and it will know which code to execute.
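How does a free-form question get routed to the right skill? In Moltbot, the LLM itself picks a tool based on each skill's description. The sketch below is a deliberately naive stand-in (keyword matching instead of an LLM call) just to show the registry shape; the Skill interface and SkillRegistry class here are hypothetical, not the src/core/skill API.

```typescript
// Minimal, hypothetical skill registry. The real routing is done by the
// LLM reading tool descriptions; keyword matching here is a stand-in.
interface Skill {
  name: string;
  description: string;
  execute: (arg: string) => Promise<string>;
}

class SkillRegistry {
  private skills: Skill[] = [];

  register(skill: Skill): void {
    this.skills.push(skill);
  }

  // Match a query against description keywords (longer than 3 chars).
  find(query: string): Skill | undefined {
    const q = query.toLowerCase();
    return this.skills.find(s =>
      s.description
        .toLowerCase()
        .split(/\s+/)
        .some(w => w.length > 3 && q.includes(w)),
    );
  }
}
```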
Chapter 6: Optimization & Troubleshooting
1. Reducing Hallucinations
If Moltbot starts making things up, your “Temperature” setting might be too high.
- Fix: In .env, set LLM_TEMPERATURE=0.2. This makes the bot more factual and less creative/random.
2. Context Window Limits
If you chat for too long, the bot might “forget” the beginning of the conversation.
- Fix: Type /summarize (a built-in command). Moltbot will compress the current conversation into its long-term memory and clear the active context window, saving you tokens and money.
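The compaction idea behind a command like that is straightforward: replace the oldest turns with a single summary message and keep only the most recent ones. This sketch is an assumption about the mechanism, not Moltbot's internals, and summarize() is a trivial placeholder where the real bot would call the LLM.

```typescript
// Hypothetical context compaction: fold old turns into one summary
// message, keep the most recent turns verbatim.
interface Turn {
  role: 'user' | 'assistant' | 'system';
  text: string;
}

function compact(
  history: Turn[],
  keepRecent: number,
  summarize: (turns: Turn[]) => string, // placeholder for an LLM call
): Turn[] {
  if (history.length <= keepRecent) return history;
  const old = history.slice(0, history.length - keepRecent);
  const recent = history.slice(history.length - keepRecent);
  return [
    { role: 'system', text: `Summary of earlier conversation: ${summarize(old)}` },
    ...recent,
  ];
}
```

Every turn you avoid resending is tokens you avoid paying for, which is why compaction saves money as well as context space.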
3. Running Locally (Free & Private)
If you have a powerful GPU (NVIDIA RTX 4090 or Mac M3/M4), you can ditch the API costs.
- Install Ollama.
- Pull a model: ollama pull llama3.
- Update .env:
LLM_PROVIDER="ollama"
OLLAMA_MODEL="llama3"
Now your Moltbot is 100% offline and free to run.
Conclusion: The Future is Agentic
The transition from Clawdbot to Moltbot is more than just a name change; it marks the maturity of open-source personal AI. By setting up Moltbot, you are reclaiming your digital agency. You are building a system that doesn’t just talk, but works for you.
Whether you use it to organize your calendar, debug your code, or simply as a private sounding board, Moltbot represents a shift towards “Personal AI Sovereignty.”
Next Step: Now that your Moltbot is live, try setting up a “Daily Briefing” automation. Ask it to scan your calendar and weather forecast every morning at 8:00 AM and send you a prepared summary on Telegram.
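One building block you would need for such an automation is computing when the next 8:00 AM run should fire. The helper below is an illustrative sketch (Moltbot may well ship its own scheduler); the real automation would then call the calendar and weather skills and push the digest through the Telegram gateway.

```typescript
// Illustrative scheduling helper: milliseconds until the next
// occurrence of hour:minute in local time.
function msUntilNextRun(now: Date, hour: number, minute = 0): number {
  const next = new Date(now);
  next.setHours(hour, minute, 0, 0);
  if (next.getTime() <= now.getTime()) {
    next.setDate(next.getDate() + 1); // already past today; run tomorrow
  }
  return next.getTime() - now.getTime();
}
```

A setTimeout on that value (re-armed after each run) is enough for a single daily briefing; for anything more elaborate, a cron-style library is the usual choice.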
FAQ
Q: Is Moltbot related to the VEX Robotics robot?
A: No. While they previously shared a name (leading to confusion), this Moltbot is a software AI agent created by Peter Steinberger. The VEX robot is a physical educational hardware kit.
Q: Can I run this on a Raspberry Pi?
A: Yes, Moltbot runs well on a Raspberry Pi 5. However, if you use a Local LLM (Ollama), the Pi will be too slow. We recommend using the Pi as the server, but connecting it to an API (Anthropic/OpenAI) for the heavy lifting.
Q: Is my data safe?
A: Your data resides on your machine. However, if you use an external API (like Claude or GPT), snippets of your conversation are sent to that provider for processing. For 100% privacy, you must use a local model like Llama 3 via Ollama.
