01

Your Personal AI, Explained

What OpenClaw actually is, what it does when you send a message, and why running it yourself matters.

What is OpenClaw?

Imagine a personal AI assistant that lives on your computer — not in some company's cloud. You message it on WhatsApp, it answers on WhatsApp. You message it on Slack, it answers on Slack. You pick the AI brain it uses. That is OpenClaw.

💻
Runs on your device

OpenClaw installs and runs on your own Mac, Linux machine, or server. There is no mandatory third-party company in the middle holding your conversations. When you are not connected to the internet, your Gateway still runs locally.

💬
Works in any messaging app you already use

OpenClaw connects to 20+ platforms — WhatsApp, Telegram, Discord, Slack, Signal, iMessage, Matrix, and many more — through small pieces of software called channel plugins. You don't have to learn a new app.

🧠
Powered by any AI model you choose

OpenClaw does not lock you into one LLM. You configure it to use OpenAI, Anthropic, Google, or even a local model running on your own computer. Swap providers any time by editing a config file.

🛠
You control the tools it has

The AI can be given tools — like the ability to search the web, run code, or read files. You decide which tools are available. Nothing runs without your explicit configuration.

🔒
The "Local-First" Insight

By default, the Gateway only listens on your machine's loopback address (127.0.0.1). That means messages between your apps and your AI travel only inside your computer — never across the internet. Your conversations stay yours.

What happens when you send a message?

A lot happens in under a second. Here is every step, from your thumb on the screen to the reply appearing in your chat.

1
You type and send

You write a message in your normal messaging app — WhatsApp, Slack, Telegram, whatever you set up. You hit send exactly the same way you would with any human contact.

2
The messaging platform delivers it

WhatsApp's servers (or Discord's, or Telegram's) receive the message and route it to all the devices listening on that account — including the OpenClaw channel plugin running on your computer.

3
The Channel Plugin receives and translates it

The plugin for that messaging app picks up the message. It converts it from the platform's own format (WhatsApp uses one format, Discord uses another) into OpenClaw's internal format — a common language everyone understands.
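That translation step can be sketched in TypeScript. The payload and event shapes below are illustrative stand-ins, not OpenClaw's actual types:

```typescript
// Illustrative shapes only -- not OpenClaw's real internal types.
type WhatsAppPayload = { from: string; chatId: string; body: string };
type CommonEvent = { channel: string; senderId: string; chatId: string; text: string };

// A hypothetical plugin's translate step: platform-specific payload in,
// platform-neutral event out. Every plugin would produce the same shape.
function toCommonEvent(msg: WhatsAppPayload): CommonEvent {
  return {
    channel: "whatsapp",
    senderId: msg.from,
    chatId: msg.chatId,
    text: msg.body,
  };
}

const evt = toCommonEvent({ from: "+15551234567", chatId: "chat-1", body: "hi" });
// evt.channel is "whatsapp"; evt.text is "hi"
```

A Discord plugin would do the same with Discord's payload shape; everything downstream of the plugin only ever sees the common event.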

4
The Gateway routes it

The plugin sends the translated message to the Gateway over a WebSocket connection. The Gateway reads your configuration and decides which AI agent should handle this message.

5
The Agent thinks

The Agent receives the message, calls the API of your chosen LLM provider, and waits for a response. If the AI needs to use a tool (like searching the web), that happens here too.

6
The response travels back

The Agent sends the reply back through the Gateway, which forwards it to the correct channel plugin, which sends it back through WhatsApp's (or Discord's, or Slack's) servers, and it appears in your conversation as a normal message.

Speed note

Steps 1–4 and 6 take milliseconds. The majority of wait time is step 5: how long the LLM takes to generate a response. If replies feel slow, the AI provider is almost always the bottleneck, not OpenClaw itself.

A concrete example

Let's trace "Hey OpenClaw, summarize my emails" step by step through every layer of the system.

You (WhatsApp) — You type the message

You open WhatsApp on your phone and send "Hey OpenClaw, summarize my emails" to a WhatsApp contact you set up as your bot number. WhatsApp's servers receive it and push it to all connected devices.

WhatsApp Plugin — Plugin detects the inbound message

The WhatsApp channel plugin running on your computer sees a new message arrive. It reads the sender ID (your phone number), the chat ID, and the text. It wraps this data into a standard OpenClaw event — the same shape every plugin produces, regardless of platform.

Gateway — Receives and dispatches

The plugin sends the event to the Gateway at ws://127.0.0.1:18789. The Gateway checks your config file to find which Agent is bound to this WhatsApp account and hands the message off to it.

Router — Applies your binding rules

The Router reads your ~/.openclaw/openclaw.json config. It sees that messages from your WhatsApp account should go to your "personal assistant" agent. This is how you could have one agent for work Slack and a different one for personal WhatsApp.

Agent + LLM — Agent builds the prompt and calls the AI

The Agent loads your system prompt and conversation history, appends your new message, and sends the full context to your chosen LLM via its API. If you configured an email tool, the Agent might call that tool first to fetch your actual emails before asking the LLM to summarize them.

Back to you — Response flows back through every layer

The LLM returns a summary. The Agent sends it to the Gateway. The Gateway delivers it to the WhatsApp plugin. The plugin calls the WhatsApp API to send the message back to your phone. You see "Here are your 5 new emails..." appear in the chat — as if you texted a very knowledgeable friend.

💡
Key Insight

Notice how each layer has exactly one job: plugins translate, the Gateway routes, the Router decides, the Agent thinks. This separation means you can swap out any one piece — change the AI model, add a new messaging app, or update a plugin — without touching the others.

Check your understanding

Three scenarios based on what you just learned. Pick the best answer for each.

You want OpenClaw to also respond when people message you on Telegram. Right now it only works on WhatsApp. What do you need to add?

Your AI responses have been getting much slower lately. Based on how OpenClaw works, where would you look first?

A friend builds a similar tool but runs their Gateway on a public cloud server so they can access it from anywhere. What is the main privacy difference from OpenClaw's default setup?

02

Meet the Four Players

Every message involves four distinct pieces of software. Here is who they are, what they do, and why they exist as separate things.

The cast of characters

OpenClaw is not one monolithic program — it is four cooperating players, each with a clear role. Understanding who does what is the key to understanding everything else.

🗼
The Gateway — "The Control Tower"

A background daemon that acts as the central hub for all traffic. It listens at ws://127.0.0.1:18789 using the WebSocket protocol. Every plugin and every app connects here. Nothing talks to anything else directly.

🔌
Channel Plugins — "The Translators"

One plugin per messaging platform. They live in the extensions/ folder of the codebase. Each plugin knows the peculiarities of its platform (Discord's slash commands, WhatsApp's phone-number-based IDs, Telegram's bot tokens) and translates incoming messages into OpenClaw's universal format.

🚦
The Router — "The Traffic Cop"

Lives in src/routing/resolve-route.ts. When the Gateway receives a message, the Router reads your config's binding rules and determines which Agent should handle it. This is what lets you have a "work agent" on Slack and a "personal agent" on WhatsApp simultaneously.

💡
The Agent — "The Thinker"

Runs the LLM, maintains conversation history, executes tools, and produces the final response. Powered by the Pi runtime. Each Agent has its own system prompt, memory settings, and tool permissions. You can have multiple Agents configured at once.

📋
Why four separate pieces?

Separation of concerns. If WhatsApp changes their API, only the WhatsApp plugin needs updating — the Gateway, Router, and Agent stay untouched. If you want to try a new AI model, you swap the Agent's LLM config — no plugins need changing. This is good software architecture.

The Gateway — your control tower

The Gateway is the heart of OpenClaw. It runs as a background process and holds every active connection open simultaneously. Here is the actual TypeScript type that defines how any program connects to it.

src/gateway/client.ts
export type GatewayClientOptions = {
  url?: string; // ws://127.0.0.1:18789
  connectDelayMs?: number;
  token?: string;
  bootstrapToken?: string;
  password?: string;
  clientName?: GatewayClientName;
  mode?: GatewayClientMode;
  scopes?: string[];
};
Plain English

url — The address where the Gateway is listening. Defaults to localhost:18789 — your own machine, not the internet.

connectDelayMs — How many milliseconds to wait before connecting. Prevents apps from hammering the Gateway if they restart rapidly.

token — A secret string that proves you are allowed to connect. Like a password, but automatically generated.

bootstrapToken — A temporary one-time token used during first-time setup, before a permanent token is issued.

password — An alternative to a token: connect using a plain password instead.

clientName — Identifies which kind of app is connecting: the macOS menu-bar app, a channel plugin, the CLI, the web UI, etc.

mode — Whether this client gets a full connection or a limited, restricted one.

scopes — A list of what this client is actually allowed to do once connected. Like permissions on a phone app.

🔍
The question mark means "optional"

In TypeScript, a ? after a property name means it is optional — the program will use a sensible default if you don't supply it. That is why you do not need to configure every option to get OpenClaw running: most defaults just work.

Every single program that connects to OpenClaw — the macOS menubar app, channel plugins, the web UI, your phone apps — uses this same GatewayClientOptions shape to establish its connection. The Gateway is the one place everything converges.

Channel Plugins — the translators

Discord speaks differently from WhatsApp. Telegram's IDs look nothing like Slack's. Every messaging platform has its own API, its own authentication system, its own message format. Channel plugins are the adapters that make everything speak the same language.

  • openclaw/
    • extensions/ — one sub-folder per messaging platform
      • discord/ — Discord bot + slash commands
      • whatsapp/ — WhatsApp via WhatsApp Web
      • telegram/ — Telegram bot API
      • slack/ — Slack bot via Socket Mode
      • signal/ — Signal messenger
      • matrix/ — Matrix federated protocol
      • bluebubbles/ — iMessage via BlueBubbles macOS app
      • msteams/ — Microsoft Teams
      • …and 60+ more extensions for AI providers, tools, and channels

Every channel plugin is built using a shared SDK — the plugin-sdk — so they all work the same way from the Gateway's perspective. Here is the real code that wires up one plugin:

extensions/bluebubbles/src/channel.ts
import { createChatChannelPlugin }
  from "openclaw/plugin-sdk/core";

export const bluebubblesPlugin =
  createChatChannelPlugin({
    base: {
      id: "bluebubbles",
      capabilities: {
        chatTypes: ["direct", "group"],
        media: true,
        reactions: true,
        reply: true,
      },
    },
    // ... messaging, config, actions
  });
Plain English

import { createChatChannelPlugin } — Pull in the standard blueprint from OpenClaw's SDK. Every chat-style plugin starts here — this one call sets up the skeleton.

createChatChannelPlugin({...}) — Fill in the blueprint with specifics. Like filling out a form: you provide your plugin's unique details, the SDK handles the common boilerplate.

id: "bluebubbles" — The unique internal name for this plugin. Used in config files and logs to identify which plugin is which.

capabilities: { chatTypes, media, reactions, reply } — What this messaging platform can actually do. Does it support group chats? Can it send images? React to messages? The SDK uses this to enable or disable features automatically.

The pattern repeats — Discord, WhatsApp, Telegram, Slack — they all call createChatChannelPlugin with their own specifics. Same skeleton, different platform details.
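To see why declaring capabilities matters, here is a sketch of how an SDK could gate features on them. The `canHandle` helper is hypothetical, not part of the real plugin-sdk:

```typescript
// Hypothetical capability gate -- field names mirror the example above.
type Capabilities = {
  chatTypes: string[];
  media: boolean;
  reactions: boolean;
  reply: boolean;
};

type Action = "sendImage" | "react" | "replyThread";

// Given a plugin's declared capabilities, decide whether a feature is available.
function canHandle(caps: Capabilities, action: Action): boolean {
  const gates: Record<Action, boolean> = {
    sendImage: caps.media,    // platform can carry media
    react: caps.reactions,    // platform supports reactions
    replyThread: caps.reply,  // platform supports threaded replies
  };
  return gates[action];
}

const bluebubblesCaps: Capabilities = {
  chatTypes: ["direct", "group"],
  media: true,
  reactions: true,
  reply: true,
};
// canHandle(bluebubblesCaps, "react") is true
```

The payoff is that the core never hard-codes "WhatsApp can send images but X can't": it just reads the declared capabilities.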

🧩
Plugins are just NPM packages

Each plugin in extensions/ is a standard Node.js package. When you install OpenClaw, the plugins you need are installed alongside it. When someone builds a new plugin for a new messaging app, they publish it as an NPM package and you install it like any other software.

Check your understanding

Three scenarios applying what you just learned about the four players.

You open the OpenClaw macOS menubar app while the Gateway is already running in the background. How does the app know what messages are coming in?

Someone sends you the exact same question on both Discord and Slack at the same moment. What happens?

A new messaging app called "Zync" launches and gets popular. Nobody has built OpenClaw support for it yet. You want to add it yourself. What would you need to create?

03
The Message Highway
How a chat message travels from your phone to an AI brain and back
Mail, But Instant

When you mail a letter, the post office doesn't care if it's a birthday card or a bill — it just routes it to the right address. OpenClaw's Gateway works the same way: it receives a message "envelope", checks who should handle it, and delivers it. Every message — whether it comes from Discord, WhatsApp, or the CLI — takes the exact same highway.

Channel Plugin — listens for messages on one platform (Discord, Telegram, etc.) and forwards them inward. It also receives replies and sends them back out.

Gateway (ws://127.0.0.1:18789) — the central post office. All WebSocket connections (from plugins, the CLI, the web UI) land here.

Router (resolveAgentRoute()) — reads your config and decides which Agent handles the message. Creates a unique session key for the conversation.

Agent (LLM + tools) — runs the LLM, executes tools (calendar, search, etc.), and streams the response back through the Gateway.
The Router — Reading the Config Map

The Router is like a GPS navigation system. It takes the incoming message details — which channel did it come from? who sent it? — and checks the config "map" to find the best route to an Agent.

src/routing/resolve-route.ts
export function resolveAgentRoute(input: ResolveAgentRouteInput): ResolvedAgentRoute {
  const channel = normalizeToken(input.channel);
  const accountId = normalizeAccountId(input.accountId);
  const dmScope = input.cfg.session?.dmScope ?? "main";
  const bindings = getEvaluatedBindingsForChannelAccount(
    input.cfg, channel, accountId
  );
  const choose = (agentId: string, matchedBy: string) => {
    const sessionKey = buildAgentSessionKey({
      agentId, channel, accountId, peer, dmScope,
    }).toLowerCase();
    return { agentId, sessionKey, matchedBy };
  };
  // ... binding evaluation continues
}
Plain English

A function that figures out which AI agent should handle an incoming message

Normalize the channel name (e.g. 'Discord' → 'discord')

Normalize the account ID of who sent the message

Look up the session scope from config — 'main' means all DMs share one conversation

Get the routing rules (bindings) that apply to this channel and account

A helper: given an agent ID and a match reason, build the full route result

Build the session key — this is the unique ID for this conversation thread

Session Keys are like conversation thread IDs — they keep your chat history separate. Your Discord DM has a different session key than your Slack workspace, so each channel has its own memory. You could ask OpenClaw something on Discord, switch to Telegram, and it would start fresh — unless you configure them to share a session.
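A simplified version of that idea can be written out directly. The real buildAgentSessionKey may use different fields and separators; this sketch only shows why the same person on two platforms gets two threads:

```typescript
// Illustrative session-key builder -- not OpenClaw's actual implementation.
type SessionParts = {
  agentId: string;
  channel: string;
  accountId: string;
  peer: string;
  dmScope: string;
};

function buildSessionKey(p: SessionParts): string {
  // A "main" scope collapses all DMs into one shared thread per agent+channel.
  const peerPart = p.dmScope === "main" ? "main" : p.peer;
  return [p.agentId, p.channel, p.accountId, peerPart].join(":").toLowerCase();
}

// Same person, two platforms: two distinct keys, therefore two memories.
const discordKey = buildSessionKey({
  agentId: "Personal", channel: "Discord", accountId: "acct1", peer: "alice", dmScope: "per-peer",
});
const slackKey = buildSessionKey({
  agentId: "Personal", channel: "Slack", accountId: "acct1", peer: "alice", dmScope: "per-peer",
});
// discordKey is "personal:discord:acct1:alice" and differs from slackKey
```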
Your Config File — The Control Panel

The ~/.openclaw/openclaw.json file is like a switchboard operator's instruction card. It tells the Router: "If someone messages from Discord with this username, send it to this agent." Every decision OpenClaw makes about routing starts here.

~/.openclaw/openclaw.json
{
  "agent": { "model": "anthropic/claude-opus-4-6" },
  "channels": {
    "discord": {
      "token": "YOUR_BOT_TOKEN",
      "allowFrom": ["yourname"]
    }
  },
  "gateway": {
    "bind": "loopback",
    "port": 18789
  }
}
Plain English

The top of your config file

Which AI brain to use — Anthropic's Claude Opus model

Channel settings — configure each messaging platform here

Discord-specific config

Your Discord bot's secret password — the API token that lets OpenClaw control your bot (never share this)

Only respond to messages from 'yourname' — everyone else is ignored

Gateway settings — controls the internal WebSocket server

Only accept connections from this same machine — loopback = localhost only, no internet exposure

Listen on port 18789

1
Edit the config file — open ~/.openclaw/openclaw.json in any text editor
2
Add your channel credentials — paste your API token or bot token into the right channel block
3
Restart OpenClaw — the daemon picks up the new config on the next start
Check Your Understanding
You add yourself to allowFrom for Telegram, but messages from your friend still get responses. Why?
Two of your friends message OpenClaw at the same time from different platforms. What happens to their sessions?
You want a specific Discord server channel to always use a different AI agent (your "work assistant"). Where would you configure this?
04
The Plugin Machine
How OpenClaw connects to 25+ platforms without changing its core
Extensions — OpenClaw's Superpower

OpenClaw ships with 25+ channel plugins out of the box — Discord, WhatsApp, Telegram, Slack, Signal, iMessage, Matrix, and more. Each plugin lives in the extensions/ folder as its own self-contained package. Think of them like apps on your phone: OpenClaw is the OS, plugins are the apps.

extensions/
discord/ Discord server + DM integration
whatsapp/ WhatsApp Web (via QR code login)
telegram/ Telegram bot API
slack/ Slack workspace integration
anthropic/ Anthropic Claude LLM provider
openai/ OpenAI GPT LLM provider
google/ Google Gemini LLM provider
elevenlabs/ Text-to-speech (voice output)
deepgram/ Speech-to-text (voice input)
brave/ Brave Search web search tool
memory-lancedb/ Long-term memory storage
Each folder in extensions/ is an independent npm package. You can update, disable, or replace any plugin without touching any other part of OpenClaw.
Anatomy of a Plugin

Every channel plugin follows the same blueprint using createChatChannelPlugin from the Plugin SDK. It's like a standardized shipping container — the shape is always the same, but the contents differ. OpenClaw's core knows how to handle any container; it doesn't need to know what's inside.

Code
import { createChatChannelPlugin } from "openclaw/plugin-sdk/core";

createChatChannelPlugin({
  pairing: {
    idLabel: "whatsappSenderId",
  },
  outbound: {
    resolveTarget: ({ to, allowFrom, mode }) =>
      resolveWhatsAppOutboundTarget({ to, allowFrom, mode }),
  },
});
Plain English

Import the standard blueprint all channel plugins use

Create a new channel plugin following the blueprint

Configuration for message pairing (how to identify senders)

Label the sender ID field 'whatsappSenderId' in the UI

Configuration for sending outgoing messages

Given a recipient, allowed senders list, and mode — figure out the actual WhatsApp number to send to

The Plugin SDK is a contract. OpenClaw's core doesn't need to know anything about WhatsApp specifically — it just calls "send message" and the plugin handles the rest. This is why adding a new channel doesn't require changing the core. The SDK defines the contract; plugins fulfill it.
LLM Provider Plugins

Not just messaging — LLM providers are also plugins! Anthropic, OpenAI, Google, Mistral, Groq... each lives in extensions/. This means you can swap AI brains by changing one line in your config. The Agent doesn't know or care which LLM it's using — it just asks the plugin to "think about this" and waits for an answer.

Anthropic

Claude Opus, Sonnet, Haiku — most capable reasoning, lowest prompt-injection risk. Best for complex tasks requiring careful judgment.

anthropic/claude-opus-4-6
OpenAI

GPT models — widely used, large context windows, huge ecosystem of tools and integrations built around them.

openai/gpt-4o
Google Gemini

Multimodal from the ground up — handles images, audio, and video natively. Great when you need to send screenshots or photos to your assistant.

google/gemini-2.0-flash
Groq

Ultra-fast inference using specialized hardware. Ideal for low-latency voice responses where waiting 3 seconds feels like an eternity.

groq/llama-3.3-70b-versatile
Switch providers by changing ONE line: "agent": { "model": "openai/gpt-4o" }. The rest of OpenClaw — routing, sessions, tools, memory — stays exactly the same. Only the AI brain changes.
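Internally, an ID like "openai/gpt-4o" just needs splitting into a provider half and a model half. A minimal sketch (parseModelId is a hypothetical name, not OpenClaw's real function):

```typescript
// Split a "provider/model" ID from the config into its two halves.
function parseModelId(id: string): { provider: string; model: string } {
  const slash = id.indexOf("/");
  if (slash === -1) throw new Error(`expected "provider/model", got "${id}"`);
  return {
    provider: id.slice(0, slash),   // selects which provider plugin to use
    model: id.slice(slash + 1),     // passed through to that provider's API
  };
}

const { provider, model } = parseModelId("openai/gpt-4o");
// provider is "openai", model is "gpt-4o"
```

The provider half picks the plugin; the model half is opaque to OpenClaw and goes straight to the provider.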
Check Your Understanding
A new messaging platform called "NovaMail" launches. What would you need to build to add it to OpenClaw?
Your OpenClaw responses are great but you want faster replies for voice commands. You know Groq is ultra-fast. What's the minimum change needed?
The Anthropic plugin receives an updated model (claude-opus-5). You update the plugin package. Does the rest of OpenClaw need to change?
05

Security — Your Castle, Your Rules

OpenClaw is designed so that your AI assistant is genuinely yours — not a service you're borrowing from someone else's infrastructure.


Local-First Is a Security Feature

Most cloud AI assistants work like a megaphone in a public square — every message you send travels across the internet to a company's server, gets processed, and comes back. Many hands touch your data in transit. OpenClaw's default approach is a drawbridge: closed to the outside world by default. Only you can cross it.

☁ Cloud AI (typical): Your Phone → Internet → Company's Server → AI → Back to You.
Everything is visible in transit. Your messages touch servers you don't control.

⚿ OpenClaw Default: Your Phone → Local Network → Your Computer → AI → Back to You.
Nothing leaves your machine. The Gateway only listens on 127.0.0.1.

The key config that makes this possible is "bind": "loopback". When the Gateway binds to the loopback interface, it literally cannot receive connections from any other device on your network, let alone the internet.

🏰
Even if someone hacks your WiFi, they can't reach OpenClaw. The Gateway doesn't listen on your network interface — only on the loopback interface (127.0.0.1), which is invisible to other devices. A hacker sitting on your network would find nothing to connect to.
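The mapping from that config value to an actual listen address can be sketched like this. resolveBindHost is a hypothetical helper, and the real Gateway may support more bind modes:

```typescript
// Hypothetical sketch: map the config's "bind" setting to a listen address.
function resolveBindHost(bind: "loopback" | "all"): string {
  // "loopback" keeps the Gateway invisible to every other device;
  // "all" (0.0.0.0) would expose it on every network interface.
  return bind === "loopback" ? "127.0.0.1" : "0.0.0.0";
}

// resolveBindHost("loopback") is "127.0.0.1"
```

Binding to 127.0.0.1 is enforced by the operating system's network stack, not by OpenClaw's own code, which is why the guarantee is so strong.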

Who Can Talk to Your AI?

Even once you're inside the local network, OpenClaw layers three independent access controls. You have to pass all three to get a response from the AI.

1
Gateway Auth — The Front Door
Every client — your iOS app, your CLI, your Discord plugin — must present a secret auth token to connect. No token, no entry. Like needing a physical key to open the front door.
Connection attempt → Gateway checks token → reject if invalid, allow if valid.
2
allowFrom — The Guest List
Channel plugins only forward messages from explicitly listed senders. If your Discord plugin's allowFrom list doesn't include a user's ID, their messages are silently ignored — the AI never even sees them.
Message arrives → channel checks allowFrom → ignore if not listed, forward if listed.
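The guest-list check itself is tiny. A sketch, assuming the plugin sees a sender ID and the configured list (shouldForward is a hypothetical name):

```typescript
// Hypothetical allowFrom check a channel plugin might run before
// forwarding a message inward to the Gateway.
type InboundMessage = { senderId: string; text: string };

function shouldForward(msg: InboundMessage, allowFrom: string[]): boolean {
  // Not on the guest list -> silently dropped; the Agent never sees it.
  return allowFrom.includes(msg.senderId);
}

const guestList = ["yourname"];
// shouldForward({ senderId: "yourname", text: "hi" }, guestList) is true
// shouldForward({ senderId: "stranger", text: "hi" }, guestList) is false
```

Because the drop happens at the plugin layer, an unlisted sender costs you nothing: no routing, no LLM call, no API spend.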
3
DM Pairing — The Doorbell Camera
When an unknown sender DMs your bot for the first time, OpenClaw sends them a one-time pairing code. They must reply with that code before any AI responses flow. This prevents random strangers from accidentally triggering your AI if they stumble across your bot's username.
Unknown sender DMs → pairing code sent → no responses until verified, responses flow after.
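The pairing flow can be modeled as a tiny state machine. Everything below (class name, states, code format) illustrates the idea only and is not OpenClaw's actual implementation:

```typescript
// Illustrative DM-pairing state machine.
type PairingState = "unknown" | "code-sent" | "verified";

class DmPairing {
  private states = new Map<string, PairingState>();
  private codes = new Map<string, string>();

  // First DM from an unknown sender: issue a one-time code, send no AI replies.
  onFirstDm(senderId: string): string {
    const code = Math.random().toString(36).slice(2, 8);
    this.states.set(senderId, "code-sent");
    this.codes.set(senderId, code);
    return code;
  }

  // Subsequent messages: only a correct code moves the sender to "verified".
  onReply(senderId: string, text: string): boolean {
    if (this.states.get(senderId) === "code-sent" && this.codes.get(senderId) === text) {
      this.states.set(senderId, "verified");
    }
    return this.states.get(senderId) === "verified";
  }
}

const pairing = new DmPairing();
const issuedCode = pairing.onFirstDm("stranger-42");
// pairing.onReply("stranger-42", "not-the-code") stays unverified (false);
// replying with issuedCode verifies the sender (true).
```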
Code src/gateway/client.ts
export type GatewayClientOptions = {
  url?: string;     // ws://127.0.0.1:18789
  token?: string;
  clientName?: GatewayClientName;
  mode?: GatewayClientMode;
  scopes?: string[];
};
GatewayClientOptions — the configuration any client needs to connect to the Gateway.

url? — The Gateway's address — defaults to localhost:18789.

token? — A secret token proving you're authorized to connect.

clientName? — Which app is connecting (CLI, iOS app, macOS app, web UI...).

mode? — Full control vs read-only vs limited access mode.

scopes? — What this client is allowed to do (e.g. send messages, change config).

The Multi-Platform Architecture

OpenClaw's "secret weapon" is that all of its user interfaces are thin clients — they contain no intelligence of their own. Every app, every CLI command, every channel plugin is just a thin client pointing at the same Gateway. The Gateway is the source of truth. Whether you're on iOS, Android, macOS, or the web — you're looking at different windows into the same room.

🍎 macOS — SwiftUI menubar
📱 iOS — Native app
🤖 Android — Native app
🌐 Web UI — Browser app
⌨️ CLI — Terminal
🔌 Plugins — Discord, Slack…

↕ WebSocket, all directions

Gateway — ws://127.0.0.1:18789
💡
You can have the macOS app open AND the CLI AND your phone all connected simultaneously. They all see the same conversations in real time because they're all just different views into the same Gateway session. There's no syncing — they're sharing one live connection.
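The fan-out idea in miniature, with illustrative types (the real Gateway pushes events over live WebSocket connections, not arrays):

```typescript
// Illustrative sketch: the Gateway pushes each event to every connected
// client, so all UIs stay in sync without a separate sync step.
type ConnectedClient = { name: string; received: string[] };

function broadcast(clients: ConnectedClient[], event: string): void {
  for (const c of clients) c.received.push(event); // every view gets the same event
}

const macApp: ConnectedClient = { name: "macos-app", received: [] };
const cliClient: ConnectedClient = { name: "cli", received: [] };
broadcast([macApp, cliClient], "message:new");
// both macApp.received and cliClient.received now contain "message:new"
```

Contrast this with sync-based designs: there is no reconciliation step that can lag or conflict, because every client is watching the same live stream.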
Code src/gateway/client.ts — clientName & mode
export type GatewayClientOptions = {
  // ...
  clientName?: GatewayClientName;
  mode?: GatewayClientMode;
  // ...
};
clientName — Identifies which app is connecting — e.g. "macos-app", "ios-app", "cli", "web-ui". The Gateway uses this for logging and to route UI-specific events to the right client.

mode — Controls the level of access granted. The iOS app might connect in a limited mode that doesn't allow config changes, while the CLI uses full control mode.

Quiz — Test Your Security Knowledge

Three scenarios. Think through what you've learned about OpenClaw's security model.

A colleague wants to connect to your OpenClaw remotely over the internet. What would need to change from the default config?
Someone gets your Discord bot token. What can they do?
You open the OpenClaw iOS app while also having the macOS app open. How does the iOS app know about recent conversations?
06

Reading the Whole Map

Zoom out. See every component, how they connect, and how to navigate the system when something goes wrong.


The Big Picture

Every concept from this course in one diagram. This is the complete OpenClaw architecture — all the zones, all the connections, all the data flows.

Your Apps: macOS • iOS • Android • Web • CLI
↓ WebSocket
Gateway — ws://127.0.0.1:18789
Channel Plugins — Discord • WhatsApp • Telegram • Slack • Signal • iMessage • Matrix
Agent Runtime — LLM Provider • Browser • Search • Memory • Tools
↓ External APIs
External Services: Anthropic • OpenAI • Discord API • WhatsApp
🗺️
How to read this diagram: Data flows in from the top (your apps and channel plugins) and in from the bottom (external services like Discord's API sending you messages). Everything passes through the Gateway in the middle — it's the nervous system of the whole setup.

How Processes Work Together

OpenClaw is actually multiple Node.js processes working together. Think of it like a restaurant kitchen: the main process (front of house) takes orders and passes them to worker processes (kitchen stations) that handle the actual work.

🚪 Parent Process (CLI)
Reads your command, builds a "spawn plan" with the right arguments and environment, then launches the child. Stays alive to manage the child's lifecycle — like a front-of-house manager.
⚙️ Child Process (Agent)
Does the actual AI work — connects to LLM providers, executes tools, handles conversation state. If it crashes or uses too much memory, it fails in isolation without affecting the parent.
Code src/entry.ts
const child = spawn(process.execPath, plan.argv, {
  stdio: "inherit",
  env: plan.env,
});
attachChildProcessBridge(child);
spawn(...) — Launch a new child process using Node.js (process.execPath = the Node binary).

plan.argv — Pass in the correct arguments for this spawn plan.

stdio: "inherit" — The child process shares our terminal (same input/output).

env: plan.env — Pass the environment variables the child process needs.

attachChildProcessBridge — Set up a communication bridge between this process and the child.
💡
Why multiple processes? Isolation. If the AI agent runs into a memory-heavy operation or an unexpected error, it fails in its own process — not yours. The parent process can detect the failure and restart cleanly. Your terminal session stays alive regardless of what the agent does.
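One common companion to this pattern is restart backoff: after each crash, the parent waits longer before respawning, so a crash loop cannot spin at full speed. A sketch of such a policy (this specific function is an assumption for illustration, not code from OpenClaw):

```typescript
// Exponential backoff for restarting a crashed child process:
// 500ms, 1s, 2s, 4s ... capped at 30s.
function restartDelayMs(consecutiveFailures: number, baseMs = 500, maxMs = 30_000): number {
  return Math.min(baseMs * 2 ** consecutiveFailures, maxMs);
}

// restartDelayMs(0) is 500; restartDelayMs(1) is 1000; restartDelayMs(10) is 30000
```

A parent would reset the failure counter once the child has stayed healthy for a while, so a one-off crash doesn't permanently slow restarts.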

Where to Look When Things Go Wrong

The openclaw doctor command is your first stop when anything breaks — like a system diagnostic check that inspects every component and reports what it finds.

1
openclaw doctor
Checks Gateway connection, channel plugin status, and model connectivity. Gives you a traffic-light summary of what's healthy and what isn't.
2
tail -f /tmp/openclaw-gateway.log
Live log stream — watch events in real time as messages arrive, routes are resolved, and errors occur. Essential for understanding what OpenClaw is actually doing.
3
openclaw channels status --probe
Actively tests each channel plugin connection — not just "is it configured?" but "can it actually reach Discord/WhatsApp/etc. right now?"
4
openclaw gateway --verbose
Restart the Gateway with verbose logging enabled. Every routing decision, every auth check, every message hop gets written to the log.

Common Problems & Where to Look

AI not responding
Check your LLM provider API key, then run openclaw doctor
Messages not arriving from Discord
Check the Discord bot token in config; check allowFrom list contains your user ID
Gateway won't start
Check if port 18789 is already in use: ss -ltnp | grep 18789 or lsof -i :18789
macOS app can't connect
Verify the Gateway is running (openclaw doctor); check the auth token in the app's settings matches the Gateway's config
⚠️
Never share your Gateway token or channel API tokens. The Gateway token gives full control of your AI assistant. Channel tokens give access to your messaging accounts. All sensitive config lives in ~/.openclaw/ — a directory that is only readable by your own user account.

Final Quiz — Pulling It All Together

Four questions that span the whole course. These are a little harder — they test your understanding of how the pieces interact.

You want to build a new OpenClaw feature that sends a daily briefing to your WhatsApp every morning. Which part of the system would handle the scheduling?
A developer friend forks OpenClaw and changes the Gateway's WebSocket protocol. Your existing iOS app stops working. Why?
You notice your OpenClaw is using a lot of memory. The logs show the AI agent process is the culprit. What does this tell you about the architecture?
Someone asks: "Can I run OpenClaw on a cloud VM and connect to it from my phone?" What's the accurate answer?
🎓
You've reached the end of the OpenClaw course. You now understand the full architecture: Gateway as the central hub, channel plugins as messaging bridges, the router mapping messages to agents, the agent calling LLM providers and tools, and the security model keeping everything private. Welcome to the map.