What OpenClaw actually is, what it does when you send a message, and why running it yourself matters.
Imagine a personal AI assistant that lives on your computer — not in some company's cloud. You message it on WhatsApp, it answers on WhatsApp. You message it on Slack, it answers on Slack. You pick the AI brain it uses. That is OpenClaw.
OpenClaw installs and runs on your own Mac, Linux machine, or server. There is no mandatory third-party company in the middle holding your conversations. When you are not connected to the internet, your Gateway still runs locally.
OpenClaw connects to 20+ platforms — WhatsApp, Telegram, Discord, Slack, Signal, iMessage, Matrix, and many more — through small pieces of software called channel plugins. You don't have to learn a new app.
OpenClaw does not lock you into one LLM. You configure it to use OpenAI, Anthropic, Google, or even a local model running on your own computer. Swap providers any time by editing a config file.
The AI can be given tools — like the ability to search the web, run code, or read files. You decide which tools are available. Nothing runs without your explicit configuration.
By default, the Gateway only listens on your machine's loopback address (127.0.0.1). That means messages between your apps and your AI travel only inside your computer — never across the internet. Your conversations stay yours.
A lot happens in under a second. Here is every step, from your thumb on the screen to the reply appearing in your chat.
You write a message in your normal messaging app — WhatsApp, Slack, Telegram, whatever you set up. You hit send exactly the same way you would with any human contact.
WhatsApp's servers (or Discord's, or Telegram's) receive the message and route it to all the devices listening on that account — including the OpenClaw channel plugin running on your computer.
The plugin for that messaging app picks up the message. It converts it from the platform's own format (WhatsApp uses one format, Discord uses another) into OpenClaw's internal format — a common language everyone understands.
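The translation step can be sketched with hypothetical message shapes — these are illustrative stand-ins, not OpenClaw's real types:

```typescript
// Hypothetical raw shapes for two platforms — illustrative only.
type WhatsAppRaw = { from: string; chatId: string; body: string };
type DiscordRaw = { author: string; channelId: string; content: string };

// One common internal shape, regardless of which platform the message came from.
type InternalEvent = { channel: string; senderId: string; chatId: string; text: string };

function fromWhatsApp(raw: WhatsAppRaw): InternalEvent {
  return { channel: "whatsapp", senderId: raw.from, chatId: raw.chatId, text: raw.body };
}

function fromDiscord(raw: DiscordRaw): InternalEvent {
  return { channel: "discord", senderId: raw.author, chatId: raw.channelId, text: raw.content };
}
```

However different the inputs look, everything downstream of the plugin only ever sees the one internal shape.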
The plugin sends the translated message to the Gateway over a WebSocket connection. The Gateway reads your configuration and decides which AI agent should handle this message.
The Agent receives the message, calls the API of your chosen LLM provider, and waits for a response. If the AI needs to use a tool (like searching the web), that happens here too.
The Agent sends the reply back through the Gateway, which forwards it to the correct channel plugin, which sends it back through WhatsApp's (or Discord's, or Slack's) servers, and it appears in your conversation as a normal message.
Steps 1–4 and 6 take milliseconds. The majority of wait time is step 5: how long the LLM takes to generate a response. If replies feel slow, the AI provider is almost always the bottleneck, not OpenClaw itself.
Let's trace "Hey OpenClaw, summarize my emails" step by step through every layer of the system.
You open WhatsApp on your phone and send "Hey OpenClaw, summarize my emails" to a WhatsApp contact you set up as your bot number. WhatsApp's servers receive it and push it to all connected devices.
The WhatsApp channel plugin running on your computer sees a new message arrive. It reads the sender ID (your phone number), the chat ID, and the text. It wraps this data into a standard OpenClaw event — the same shape every plugin produces, regardless of platform.
The plugin sends the event to the Gateway at ws://127.0.0.1:18789. The Gateway checks your config file to find which Agent is bound to this WhatsApp account and hands the message off to it.
The Router reads your ~/.openclaw/openclaw.json config. It sees that messages from your WhatsApp account should go to your "personal assistant" agent. This is how you could have one agent for work Slack and a different one for personal WhatsApp.
The Agent loads your system prompt and conversation history, appends your new message, and sends the full context to your chosen LLM via its API. If you configured an email tool, the Agent might call that tool first to fetch your actual emails before asking the LLM to summarize them.
The LLM returns a summary. The Agent sends it to the Gateway. The Gateway delivers it to the WhatsApp plugin. The plugin calls the WhatsApp API to send the message back to your phone. You see "Here are your 5 new emails..." appear in the chat — as if you texted a very knowledgeable friend.
Notice how each layer has exactly one job: plugins translate, the Gateway routes, the Router decides, the Agent thinks. This separation means you can swap out any one piece — change the AI model, add a new messaging app, or update a plugin — without touching the others.
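The one-job-per-layer idea can be sketched as a chain of tiny stand-in functions. Everything here is hypothetical — each function is a placeholder for a whole component:

```typescript
// Stand-ins for the layers described above — for illustration only.
const translate = (raw: string) => ({ text: raw });                     // plugin: platform format → internal event
const route = (_event: { text: string }) => "personal-assistant";       // router: pick an agent from config
const think = (event: { text: string }) => `summary of: ${event.text}`; // agent: stand-in for the LLM call

function handle(raw: string): string {
  const event = translate(raw);  // 1. plugin translates
  const agentId = route(event);  // 2. router decides which agent handles it
  void agentId;                  //    (a real gateway would dispatch to that agent)
  return think(event);           // 3. agent thinks; the reply flows back out the same path
}
```

Because each stage only depends on the previous stage's output shape, any one of them can be swapped without touching the others — that is the separation the paragraph above describes.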
Three scenarios based on what you just learned. Pick the best answer for each.
You want OpenClaw to also respond when people message you on Telegram. Right now it only works on WhatsApp. What do you need to add?
Your AI responses have been getting much slower lately. Based on how OpenClaw works, where would you look first?
A friend builds a similar tool but runs their Gateway on a public cloud server so they can access it from anywhere. What is the main privacy difference from OpenClaw's default setup?
Every message involves four distinct pieces of software. Here is who they are, what they do, and why they exist as separate things.
OpenClaw is not one monolithic program — it is four cooperating players, each with a clear role. Understanding who does what is the key to understanding everything else.
A background daemon that acts as the central hub for all traffic. It listens at ws://127.0.0.1:18789 using the WebSocket protocol. Every plugin and every app connects here. Nothing talks to anything else directly.
One plugin per messaging platform. They live in the extensions/ folder of the codebase. Each plugin knows the peculiarities of its platform (Discord's slash commands, WhatsApp's phone-number-based IDs, Telegram's bot tokens) and translates incoming messages into OpenClaw's universal format.
Lives in src/routing/resolve-route.ts. When the Gateway receives a message, the Router reads your config's binding rules and determines which Agent should handle it. This is what lets you have a "work agent" on Slack and a "personal agent" on WhatsApp simultaneously.
Runs the LLM, maintains conversation history, executes tools, and produces the final response. Powered by the Pi runtime. Each Agent has its own system prompt, memory settings, and tool permissions. You can have multiple Agents configured at once.
Separation of concerns. If WhatsApp changes their API, only the WhatsApp plugin needs updating — the Gateway, Router, and Agent stay untouched. If you want to try a new AI model, you swap the Agent's LLM config — no plugins need changing. This is good software architecture.
The Gateway is the heart of OpenClaw. It runs as a background process and holds every active connection open simultaneously. Here is the actual TypeScript type that defines how any program connects to it.
export type GatewayClientOptions = {
  url?: string; // ws://127.0.0.1:18789
  connectDelayMs?: number;
  token?: string;
  bootstrapToken?: string;
  password?: string;
  clientName?: GatewayClientName;
  mode?: GatewayClientMode;
  scopes?: string[];
};
url — The address where the Gateway is listening. Defaults to localhost:18789 — your own machine, not the internet.
connectDelayMs — How many milliseconds to wait before connecting. Prevents apps from hammering the Gateway if they restart rapidly.
token — A secret string that proves you are allowed to connect. Like a password, but automatically generated.
bootstrapToken — A temporary one-time token used during first-time setup, before a permanent token is issued.
password — An alternative to a token: connect using a plain password instead.
clientName — Identifies which kind of app is connecting: the macOS menu-bar app, a channel plugin, the CLI, the web UI, etc.
mode — Whether this client gets a full connection or a limited, restricted one.
scopes — A list of what this client is actually allowed to do once connected. Like permissions on a phone app.
In TypeScript, a ? after a property name means it is optional — the program will use a sensible default if you don't supply it. That is why you do not need to configure every option to get OpenClaw running: most defaults just work.
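Here is a minimal sketch of how optional properties fall back to defaults. The default URL is the one documented above; the helper itself is hypothetical:

```typescript
// Trimmed-down copy of the options shape shown above.
type GatewayClientOptions = {
  url?: string;
  connectDelayMs?: number;
  token?: string;
};

// Hypothetical helper: fill in a default for anything the caller omitted.
function withDefaults(opts: GatewayClientOptions) {
  return {
    url: opts.url ?? "ws://127.0.0.1:18789", // the documented default address
    connectDelayMs: opts.connectDelayMs ?? 0,
    token: opts.token, // no sensible default for a secret — stays undefined
  };
}
```

Calling `withDefaults({})` gives you a working local configuration, which is why a fresh install needs almost no setup.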
Every single program that connects to OpenClaw — the macOS menubar app, channel plugins, the web UI, your phone apps — uses this same GatewayClientOptions shape to establish its connection. The Gateway is the one place everything converges.
Discord speaks differently from WhatsApp. Telegram's IDs look nothing like Slack's. Every messaging platform has its own API, its own authentication system, its own message format. Channel plugins are the adapters that make everything speak the same language.
Every channel plugin is built using a shared SDK — the plugin-sdk — so they all work the same way from the Gateway's perspective. Here is the real code that wires up one plugin:
import { createChatChannelPlugin } from "openclaw/plugin-sdk/core";

export const bluebubblesPlugin = createChatChannelPlugin({
  base: {
    id: "bluebubbles",
    capabilities: {
      chatTypes: ["direct", "group"],
      media: true,
      reactions: true,
      reply: true,
    },
  },
  // ... messaging, config, actions
});
import { createChatChannelPlugin } — Pull in the standard blueprint from OpenClaw's SDK. Every chat-style plugin starts here — this one call sets up the skeleton.
createChatChannelPlugin({...}) — Fill in the blueprint with specifics. Like filling out a form: you provide your plugin's unique details, the SDK handles the common boilerplate.
id: "bluebubbles" — The unique internal name for this plugin. Used in config files and logs to identify which plugin is which.
capabilities: { chatTypes, media, reactions, reply } — What this messaging platform can actually do. Does it support group chats? Can it send images? React to messages? The SDK uses this to enable or disable features automatically.
The pattern repeats — Discord, WhatsApp, Telegram, Slack — they all call createChatChannelPlugin with their own specifics. Same skeleton, different platform details.
Each plugin in extensions/ is a standard Node.js package. When you install OpenClaw, the plugins you need are installed alongside it. When someone builds a new plugin for a new messaging app, they publish it as an NPM package and you install it like any other software.
Three scenarios applying what you just learned about the four players.
You open the OpenClaw macOS menubar app while the Gateway is already running in the background. How does the app know what messages are coming in?
Someone sends you the exact same question on both Discord and Slack at the same moment. What happens?
A new messaging app called "Zync" launches and gets popular. Nobody has built OpenClaw support for it yet. You want to add it yourself. What would you need to create?
When you mail a letter, the post office doesn't care if it's a birthday card or a bill — it just routes it to the right address. OpenClaw's Gateway works the same way: it receives a message "envelope", checks who should handle it, and delivers it. Every message — whether it comes from Discord, WhatsApp, or the CLI — takes the exact same highway.
The Router is like a GPS navigation system. It takes the incoming message details — which channel did it come from? who sent it? — and checks the config "map" to find the best route to an Agent.
export function resolveAgentRoute(input: ResolveAgentRouteInput): ResolvedAgentRoute {
  const channel = normalizeToken(input.channel);
  const accountId = normalizeAccountId(input.accountId);
  const dmScope = input.cfg.session?.dmScope ?? "main";
  const bindings = getEvaluatedBindingsForChannelAccount(
    input.cfg, channel, accountId
  );
  const choose = (agentId: string, matchedBy: string) => {
    const sessionKey = buildAgentSessionKey({
      agentId, channel, accountId, peer, dmScope,
    }).toLowerCase();
    return { agentId, sessionKey, matchedBy };
    // ...
A function that figures out which AI agent should handle an incoming message
Normalize the channel name (e.g. 'Discord' → 'discord')
Normalize the account ID of who sent the message
Look up the session scope from config — 'main' means all DMs share one conversation
Get the routing rules (bindings) that apply to this channel and account
A helper: given an agent ID and a match reason, build the full route result
Build the session key — this is the unique ID for this conversation thread
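The routing decision can be sketched as a simplified, hypothetical re-implementation — not the real resolveAgentRoute, but the same shape of decision:

```typescript
// Hypothetical binding rule: messages on this channel go to this agent.
type Binding = { channel: string; agentId: string };

function resolveAgent(bindings: Binding[], channel: string, fallback: string): string {
  const normalized = channel.trim().toLowerCase(); // normalize, as the real code does
  // First binding that matches the normalized channel wins; otherwise use the default agent.
  return bindings.find((b) => b.channel === normalized)?.agentId ?? fallback;
}
```

With a binding for "discord" pointing at a work agent, a Discord message routes there while everything else falls through to the default — the same mechanism that lets one agent serve work Slack and another serve personal WhatsApp.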
The ~/.openclaw/openclaw.json file is like a switchboard operator's instruction card. It tells the Router: "If someone messages from Discord with this username, send it to this agent." Every decision OpenClaw makes about routing starts here.
{
  "agent": { "model": "anthropic/claude-opus-4-6" },
  "channels": {
    "discord": {
      "token": "YOUR_BOT_TOKEN",
      "allowFrom": ["yourname"]
    }
  },
  "gateway": {
    "bind": "loopback",
    "port": 18789
  }
}
The top of your config file
Which AI brain to use — Anthropic's Claude Opus model
Channel settings — configure each messaging platform here
Discord-specific config
Your Discord bot's secret password — the API token that lets OpenClaw control your bot (never share this)
Only respond to messages from 'yourname' — everyone else is ignored
Gateway settings — controls the internal WebSocket server
Only accept connections from this same machine — loopback = localhost only, no internet exposure
Listen on port 18789
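The allowFrom filter can be sketched like this. The helper is hypothetical, and the assumption about what happens when no list is configured is called out in the comment:

```typescript
// Hypothetical sketch: only senders on the allowFrom list get a response.
function isAllowed(sender: string, allowFrom?: string[]): boolean {
  if (!allowFrom) return true; // assumption: no list configured → no filtering
  return allowFrom.includes(sender);
}
```

With `"allowFrom": ["yourname"]` in the config above, a message from anyone else is simply dropped before it ever reaches an Agent.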
You can open ~/.openclaw/openclaw.json in any text editor to change any of these settings.
You set up allowFrom for Telegram, but messages from your friend still get responses. Why?
OpenClaw ships with 25+ channel plugins out of the box — Discord, WhatsApp, Telegram, Slack, Signal, iMessage, Matrix, and more. Each plugin lives in the extensions/ folder as its own self-contained package. Think of them like apps on your phone: OpenClaw is the OS, plugins are the apps. Each plugin in extensions/ is an independent npm package. You can update, disable, or replace any plugin without touching any other part of OpenClaw.
Every channel plugin follows the same blueprint using createChatChannelPlugin from the Plugin SDK. It's like a standardized shipping container — the shape is always the same, but the contents differ. OpenClaw's core knows how to handle any container; it doesn't need to know what's inside.
import { createChatChannelPlugin } from "openclaw/plugin-sdk/core";

createChatChannelPlugin({
  pairing: {
    idLabel: "whatsappSenderId",
  },
  outbound: {
    resolveTarget: ({ to, allowFrom, mode }) =>
      resolveWhatsAppOutboundTarget({ to, allowFrom, mode }),
  },
});
Import the standard blueprint all channel plugins use
Create a new channel plugin following the blueprint
Configuration for message pairing (how to identify senders)
Label the sender ID field 'whatsappSenderId' in the UI
Configuration for sending outgoing messages
Given a recipient, allowed senders list, and mode — figure out the actual WhatsApp number to send to
Not just messaging — LLM providers are also plugins! Anthropic, OpenAI, Google, Mistral, Groq... each lives in extensions/. This means you can swap AI brains by changing one line in your config. The Agent doesn't know or care which LLM it's using — it just asks the plugin to "think about this" and waits for an answer.
Claude Opus, Sonnet, Haiku — most capable reasoning, lowest prompt-injection risk. Best for complex tasks requiring careful judgment.
GPT models — widely used, large context windows, huge ecosystem of tools and integrations built around them.
Multimodal from the ground up — handles images, audio, and video natively. Great when you need to send screenshots or photos to your assistant.
Ultra-fast inference using specialized hardware. Ideal for low-latency voice responses where waiting 3 seconds feels like an eternity.
"agent": { "model": "openai/gpt-4o" }
The rest of OpenClaw — routing, sessions, tools, memory — stays exactly the same. Only the AI brain changes.
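The "provider/model" string from the config line above can be split with a small helper. This helper is hypothetical — it just illustrates the naming convention:

```typescript
// Hypothetical helper: split "openai/gpt-4o" into its provider and model parts.
function parseModel(spec: string): { provider: string; model: string } {
  const slash = spec.indexOf("/");
  return { provider: spec.slice(0, slash), model: spec.slice(slash + 1) };
}
```

The provider half selects which LLM plugin to load; the model half is passed through to that provider's API.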
OpenClaw is designed so that your AI assistant is genuinely yours — not a service you're borrowing from someone else's infrastructure.
Most cloud AI assistants work like a megaphone in a public square — every message you send travels across the internet to a company's server, gets processed, and comes back. Many hands touch your data in transit. OpenClaw's default approach is a drawbridge: closed to the outside world by default. Only you can cross it.
The key config that makes this possible is "bind": "loopback". When the Gateway binds to the loopback interface, it literally cannot receive connections from any other device on your network, let alone the internet.
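What binding to loopback means in practice can be shown with a tiny sketch. The address mapping is the standard networking convention; the helper function is hypothetical:

```typescript
// "loopback" → 127.0.0.1, reachable only from this machine.
// Binding to 0.0.0.0 instead would accept connections from any network interface.
function bindAddress(bind: "loopback" | "all"): string {
  return bind === "loopback" ? "127.0.0.1" : "0.0.0.0";
}
```

Because 127.0.0.1 never leaves the host, no firewall rule or VPN is needed for the default setup to be private — the traffic physically cannot exit the machine.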
Even once you're inside the local network, OpenClaw layers three independent access controls. You have to pass all three to get a response from the AI.
export type GatewayClientOptions = {
  url?: string; // ws://127.0.0.1:18789
  token?: string;
  clientName?: GatewayClientName;
  mode?: GatewayClientMode;
  scopes?: string[];
};
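The excerpt above doesn't name the three access controls, so here is a hypothetical sketch that assumes they correspond to the token, the client identity, and the scope list from GatewayClientOptions — an assumption, not OpenClaw's documented behavior:

```typescript
type Client = { token?: string; clientName?: string; scopes?: string[] };

// Hypothetical: all three checks must pass before the client gets a response.
function authorize(client: Client, expectedToken: string, requiredScope: string): boolean {
  if (client.token !== expectedToken) return false;     // layer 1: shared secret
  if (!client.clientName) return false;                 // layer 2: identified client
  return (client.scopes ?? []).includes(requiredScope); // layer 3: scoped permission
}
```

The layers are independent: a stolen token without the right scope, or a valid scope list without the token, still gets nothing.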
OpenClaw's "secret weapon" is that all of its user interfaces are thin clients — they contain no intelligence of their own. Every app, every CLI command, every channel plugin is just a thin client pointing at the same Gateway. The Gateway is the source of truth. Whether you're on iOS, Android, macOS, or the web — you're looking at different windows into the same room.
export type GatewayClientOptions = {
  // ...
  clientName?: GatewayClientName;
  mode?: GatewayClientMode;
  // ...
};
Three scenarios. Think through what you've learned about OpenClaw's security model.
Zoom out. See every component, how they connect, and how to navigate the system when something goes wrong.
Every concept from this course in one diagram. This is the complete OpenClaw architecture — all the zones, all the connections, all the data flows.
OpenClaw is actually multiple Node.js processes working together. Think of it like a restaurant kitchen: the main process (front of house) takes orders and passes them to worker processes (kitchen stations) that handle the actual work.
const child = spawn(process.execPath, plan.argv, {
  stdio: "inherit",
  env: plan.env,
});
attachChildProcessBridge(child);
Spawn a worker as a child process running the same Node binary (process.execPath = the Node binary)
The openclaw doctor command is your first stop when anything breaks — like a system diagnostic check that inspects every component and reports what it finds.
Start with openclaw doctor.
Bot not answering you? Check that your allowFrom list contains your user IDs.
Gateway seemingly down? Verify something is listening on the port: ss -ltnp | grep 18789 or lsof -i :18789.
An app can't connect? Run the diagnostic again (openclaw doctor); check the auth token in the app's settings matches the Gateway's config.
Secrets are stored under ~/.openclaw/ — a directory that is only readable by your own user account.
Four questions that span the whole course. These are a little harder — they test your understanding of how the pieces interact.