OpenClaw AI Gateway

Deploy OpenClaw AI messaging gateway on reComputer, with optional local AI model for privacy-first conversations.

Difficulty: Beginner · Time: 15 min · Model: LLM
Tags: docker · LLM · ai-gateway · edge-ai · Jetson

What This Solution Does

Want to use AI chatbots on WeChat, Telegram, or Discord — but don't want to juggle multiple integrations or worry about data privacy? OpenClaw connects 20+ messaging apps to any AI model through one simple gateway, running entirely on your own reComputer device.

Core Value

| Benefit | Details |
| --- | --- |
| One gateway, 20+ platforms | WeChat, Telegram, Discord, Slack, DingTalk, Feishu, and more — manage all from one place |
| Switch AI models freely | Use OpenAI, DeepSeek, or any compatible AI service — switch anytime without reconfiguration |
| Your data stays with you | Everything runs on your own device — no conversations sent to third-party services |
| Optional local AI | Use the device GPU to run AI models locally — fully offline conversations with zero privacy concerns |

Use Cases

| Scenario | How It Works |
| --- | --- |
| Personal AI assistant | Connect your WeChat or Telegram to ChatGPT/DeepSeek — chat with AI right in your messaging app |
| Team chatbot | Set up a shared AI assistant in your Slack or Discord workspace for the whole team |
| Privacy-first AI | Run AI models locally on your device so conversations never leave your network |
| Edge AI on Jetson | Deploy on reComputer Jetson for GPU-accelerated local AI with messaging integration |

What You Need

Inputs and Outputs

  • Input: Chat messages (WeChat, Telegram, Discord, Slack, etc. — 20+ platforms)
  • Output: AI responses sent back to the corresponding chat platform

Configuration

  • Enter the target AI model's API key or local model address
  • Enter bot token / webhook for each messaging platform

Deployment Comparison

| Option | AI Mode | Core Device | Best For |
| --- | --- | --- | --- |
| OpenClaw AI Compute Gateway | Local AI model | reComputer Jetson | Privacy-first, offline environments, high concurrent chats |
| OpenClaw Gateway | Cloud AI providers | reComputer R1100 / R2000 | Lightweight setup, stable network, cost-conscious |

Performance and Cost Reference

  • Local AI model (Jetson): a 2B model generates 20+ tokens/sec; 4B/7B models run slower (limited by device compute); no per-token fees
  • Cloud AI service (R1100/R2000): response speed depends on the cloud provider; you pay per LLM API call

Integration Interfaces

| Interface | Protocol | Path | Port | Method |
| --- | --- | --- | --- | --- |
| AI messaging gateway API (multi-provider) | HTTP | / | 11434 | POST |
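As a quick smoke test of the endpoint above, here is a hedged sketch of a chat request. The `/api/chat` route, the payload shape, and the model name `qwen2.5:3b` are assumptions based on the Ollama convention for port 11434; check your deployment's model list for the actual names.

```shell
# Hypothetical sketch: POST a chat message to the local model endpoint.
# Route, payload shape, and model name are Ollama-style assumptions.
PAYLOAD='{"model": "qwen2.5:3b", "messages": [{"role": "user", "content": "Hello!"}], "stream": false}'

curl -fsS --max-time 10 \
  -X POST http://localhost:11434/api/chat \
  -H 'Content-Type: application/json' \
  -d "$PAYLOAD" || echo "endpoint not reachable -- is the local AI model enabled?"
```

The `|| echo` guard keeps the command from aborting a script when the model is still downloading or disabled.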

Deployment Options

Download & Install

Preset: OpenClaw AI Compute Gateway {#openclaw_basic}

Deploy OpenClaw AI messaging gateway, with optional local AI model powered by your device's GPU.

| Device | Purpose |
| --- | --- |
| reComputer Jetson | Runs OpenClaw gateway and local AI model with GPU acceleration |

What you'll get:

  • AI chatbot gateway supporting 20+ messaging platforms
  • Optional local AI model running on device GPU — no data leaves your network
  • Web management interface for configuration

Requirements: Docker installed · Internet access (for first-time image download)

Step 1: Deploy OpenClaw {#deploy_openclaw type=docker_deploy required=true config=devices/local.yaml}

Deploy the OpenClaw AI gateway. If the local AI model is enabled, it is started and configured automatically.

Target: Local Deployment {#local type=local config=devices/local.yaml default=true}

Deploy on your reComputer Jetson device (running this software locally on the Jetson).

Wiring

  1. Ensure Docker is installed and running
  2. Optionally check Enable Local AI Model and select a model
  3. Click Deploy to start services
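Step 1 of the wiring above assumes a working Docker daemon. A minimal preflight sketch (assumes a POSIX shell on the deployment machine):

```shell
# Preflight: confirm Docker is installed and the daemon is running.
if ! command -v docker >/dev/null 2>&1; then
  docker_status="Docker is not installed"
elif docker info >/dev/null 2>&1; then
  docker_status="Docker daemon is running"
else
  docker_status="Docker installed, but the daemon is not running"
fi
echo "$docker_status"
```

Run this before clicking Deploy; only the middle outcome means the deployment can proceed.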

Deployment Complete

  1. Copy the Gateway Token shown in the deployment log
  2. Open http://localhost:18789 in your browser
  3. Go to the Overview page (left sidebar → Overview under "Control")
  4. In the Gateway Access section, paste the token into the Gateway Token field
  5. Click Connect to authenticate
  6. Connect your first messaging channel (WeChat, Telegram, Discord, etc.)
  7. If local AI model is enabled, it's already configured — select it when creating an agent
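After completing the steps above, you can confirm from the same machine that the web UI is answering on port 18789. A minimal check (assumes `curl` is installed; guarded so it never aborts while the gateway is still starting):

```shell
# Probe the OpenClaw web UI on its default port 18789.
ui_status="not reachable yet"
curl -fsS -o /dev/null --max-time 5 http://localhost:18789 && ui_status="reachable" || true
echo "OpenClaw web UI: $ui_status"
```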

Troubleshooting

| Issue | Solution |
| --- | --- |
| Port 18789 already in use | Stop the service occupying port 18789, or check whether OpenClaw is already running |
| Docker not found | Install Docker (Docker Engine on Linux, Docker Desktop on macOS/Windows) and ensure it is running |
| Model download slow | Large models take time; check your internet connection |
| OpenClaw container keeps restarting | Check logs: `docker logs openclaw-gateway` |

Target: Remote Deployment (Jetson) {#jetson_remote type=remote config=devices/jetson.yaml}

Deploy to a reComputer Jetson device over SSH, with GPU-accelerated local AI model.

Wiring

  1. Connect reComputer Jetson to the same network as your deployment machine
  2. Enter Jetson IP address, SSH username, and password
  3. Optionally check Enable Local AI Model and select a model
  4. Click Deploy to start services

Resource requirements: the OpenClaw gateway alone needs 12GB disk and 4GB RAM; with the local Ollama model it needs 20GB disk and 8GB RAM.
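The disk and RAM thresholds above can be checked on the Jetson before deploying. A preflight sketch (assumes a Linux shell with GNU coreutils; the RAM computation mirrors the `awk` command used in the troubleshooting table below):

```shell
# Preflight: compare free disk and total RAM against OpenClaw's requirements.
NEED_DISK_GB=20   # use 12 if the local AI model is disabled
NEED_RAM_GB=8     # use 4 if the local AI model is disabled

free_gb=$(df -BG --output=avail / | tail -1 | tr -dc '0-9')
ram_gb=$(awk '/^MemTotal:/ {print int(($2 + 1048575) / 1048576)}' /proc/meminfo)

[ "$free_gb" -ge "$NEED_DISK_GB" ] && echo "disk OK (${free_gb}GB free)" \
  || echo "disk LOW (${free_gb}GB free, need ${NEED_DISK_GB}GB)"
[ "$ram_gb" -ge "$NEED_RAM_GB" ] && echo "RAM OK (${ram_gb}GB)" \
  || echo "RAM LOW (${ram_gb}GB, need ${NEED_RAM_GB}GB)"
```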

Deployment Complete

  1. Copy the Gateway Token shown in the deployment log
  2. OpenClaw requires localhost access. Either:
     • Option A: Connect a display to the Jetson, open a browser, and visit http://localhost:18789
     • Option B: On your computer, run `ssh -L 18789:localhost:18789 <username>@<jetson-ip>`, then open http://localhost:18789 in your local browser
  3. Go to the Overview page (left sidebar → Overview under "Control")
  4. In the Gateway Access section, paste the token into the Gateway Token field
  5. Click Connect to authenticate
  6. Connect your first messaging channel
  7. If local AI model is enabled, it's already configured with GPU acceleration — select it when creating an agent

Troubleshooting

| Issue | Solution |
| --- | --- |
| SSH connection failed | Verify the Jetson IP, username, and password, and that the SSH service is running |
| NVIDIA runtime not detected | Ensure the NVIDIA container runtime is installed; `nvidia-smi` should work |
| Docker Compose unavailable | Install it: `sudo apt-get install -y docker-compose-plugin` |
| Model download slow | The first download fetches the full model; subsequent runs use the cache |
| Not enough disk space | OpenClaw alone needs at least 12GB free disk; enabling Ollama raises that to 20GB. Check with `df -h /` |
| Not enough system memory | OpenClaw alone needs at least 4GB RAM; enabling Ollama raises that to 8GB. Check with `awk '/^MemTotal:/ {print int(($2 + 1048575) / 1048576) "GB"}' /proc/meminfo` |
| Port 11434 already in use | The deployer prefers its own Docker-managed Ollama runtime and will try to stop a native Ollama first; if the port is still occupied, stop the other Ollama service/process on the Jetson and retry |
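For the port conflicts above, it helps to see which process actually holds the port before retrying the deploy. A small diagnostic sketch (assumes `ss` from iproute2, present on most Linux images; the `lsof` fallback is a suggestion, not a command this deployer runs):

```shell
# Show whatever is listening on the Ollama port (change PORT to 18789
# to diagnose the gateway port instead).
PORT=11434
if command -v ss >/dev/null 2>&1; then
  ss -ltnp 2>/dev/null | grep ":$PORT " || echo "port $PORT is free"
else
  echo "ss not available; try: sudo lsof -i :$PORT"
fi
```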

Preset: OpenClaw Gateway {#openclaw_recomputer_r}

Deploy OpenClaw AI messaging gateway on reComputer R series. Lightweight deployment — gateway only, no local AI model needed.

| Device | Purpose |
| --- | --- |
| reComputer R1100 / R2000 | Runs OpenClaw gateway services |

What you'll get:

  • Chat with AI across 20+ messaging platforms (WeChat, Telegram, Discord, etc.)
  • Manage all channels from a single web interface
  • Connect cloud AI providers (OpenAI, Claude, etc.) for conversations

Requirements: Docker installed · Internet access (for first-time image download)

Step 1: Deploy OpenClaw {#deploy_openclaw_r type=docker_deploy required=true config=devices/recomputer_r_local.yaml}

Deploy the OpenClaw AI gateway on your reComputer R.

Target: Local Deployment {#r_local type=local config=devices/recomputer_r_local.yaml default=true}

Deploy on the machine you're currently using.

Wiring

  1. Ensure Docker is installed and running
  2. Click Deploy to start services

Deployment Complete

  1. Copy the Gateway Token shown in the deployment log
  2. Open http://localhost:18789 in your browser
  3. Go to the Overview page (left sidebar → Overview under "Control")
  4. In the Gateway Access section, paste the token into the Gateway Token field
  5. Click Connect to authenticate
  6. Connect your first messaging channel (WeChat, Telegram, Discord, etc.)

Troubleshooting

| Issue | Solution |
| --- | --- |
| Port 18789 already in use | Stop the service occupying port 18789, or check whether OpenClaw is already running |
| Docker not found | Install Docker (Docker Engine on Linux, Docker Desktop on macOS/Windows) and ensure it is running |
| OpenClaw container keeps restarting | Check logs: `docker logs openclaw-gateway` |

Target: Remote Deployment {#r_remote type=remote config=devices/recomputer_r_remote.yaml}

Deploy to a reComputer R device over SSH.

Wiring

  1. Connect reComputer R to the same network as your deployment machine
  2. Enter device IP address, SSH username, and password
  3. Click Deploy to start services

Deployment Complete

  1. Copy the Gateway Token shown in the deployment log
  2. OpenClaw requires localhost access. Either:
     • Option A: Connect a display to the device, open a browser, and visit http://localhost:18789
     • Option B: On your computer, run `ssh -L 18789:localhost:18789 <username>@<device-ip>`, then open http://localhost:18789 in your local browser
  3. Go to the Overview page (left sidebar → Overview under "Control")
  4. In the Gateway Access section, paste the token into the Gateway Token field
  5. Click Connect to authenticate
  6. Connect your first messaging channel

Troubleshooting

| Issue | Solution |
| --- | --- |
| SSH connection failed | Verify the device IP, username, and password, and that the SSH service is running |
| Docker Compose unavailable | Install it: `sudo apt-get install -y docker-compose-plugin` |
| Not enough disk space | Need at least 4GB free; check with `df -h /` |
| OpenClaw container keeps restarting | Check logs: `docker logs openclaw-gateway` |

Deployment Complete

OpenClaw AI gateway is deployed. Follow the "Deployment Complete" instructions in the step above to log in, then start using it.

Try a Conversation

  1. Click Chat in the left sidebar and send a message to verify AI is responding
  2. If local AI model is enabled, it's already configured — no extra setup needed

Connect Messaging Platforms

  1. Click Channels in the left sidebar to add your messaging platform (WeChat, Telegram, Discord, etc.)
  2. Follow the on-screen instructions to authorize the platform
  3. Send a test message through the connected platform

Next Steps
