Preset: Deploy Reachy Voice Robot {#default}
Deploy the full voice conversation stack on your Jetson device. The robot will listen, think with a local AI model, speak, and express emotions — all with under one second of latency.
| Device | Purpose |
|---|---|
| NVIDIA Jetson Orin NX 16GB | Runs AI conversation, speech processing, and robot control |
| Reachy Mini | Desktop robot with arms, head, antennas, and camera |
What gets deployed:
- Robot Control — motor, camera, and sensor management
- Conversation Engine — AI dialogue + emotion system + web dashboard
- Vision Analysis — face detection, emotion recognition, and person tracking (GPU-accelerated)
- Local AI Model — powers the robot's thinking ability (auto-installed if not present)
Prerequisites:
- Reachy Mini connected to Jetson via USB
- Jetson with JetPack 6.x, SSH access, and internet
Step 1: Deploy Speech Service {#speech_service type=docker_deploy required=true config=devices/speech.yaml}
Deploy the GPU-accelerated speech recognition (ASR) and voice synthesis (TTS) service. The pre-built image includes all dependencies and models — just pull and run.
Target: Remote Deployment {#speech_remote type=remote config=devices/speech.yaml default=true}
Deploy to your Jetson over SSH with one click.
Wiring
- Connect your Jetson to the network
- Enter the Jetson's IP address and SSH credentials
- Click Deploy — the system will pull the pre-built image and start the service automatically
Deployment Complete
Speech service is running at http://<jetson-ip>:8621. Quick test:
```shell
# Check service health
curl http://<jetson-ip>:8621/health
# Expected: {"asr": true, "tts": true, "streaming_asr": true}
```
Troubleshooting
| Issue | Solution |
|---|---|
| SSH connection failed | Verify the IP address and credentials. Try ssh username@ip from your computer first |
| Image pull slow | The image is ~8GB compressed. Ensure stable internet on the Jetson |
| Service not starting | Check logs: ssh user@ip "cd jetson-voice && docker compose logs" |
| Health check fails | First startup takes ~40 seconds for model warmup. Wait and retry |
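Because the first startup spends roughly 40 seconds warming up models, a short polling loop is more reliable than a single curl. A minimal sketch — `wait_for_speech.sh` is a hypothetical helper, not part of the deployment; pass your Jetson's IP as the first argument:

```shell
#!/bin/sh
# wait_for_speech.sh -- poll the speech service until ASR and TTS report ready.
# Usage: sh wait_for_speech.sh <jetson-ip>

is_healthy() {
  # True when the /health JSON reports both asr and tts as true.
  echo "$1" | grep -q '"asr": true' && echo "$1" | grep -q '"tts": true'
}

if [ -n "$1" ]; then
  i=1
  while [ "$i" -le 12 ]; do
    body=$(curl -s --max-time 3 "http://$1:8621/health" || true)
    if is_healthy "$body"; then
      echo "speech service ready"
      exit 0
    fi
    echo "warmup in progress ($i/12)..."
    sleep 5
    i=$((i + 1))
  done
  echo "service not healthy after ~60s; check docker compose logs" >&2
  exit 1
fi
```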
Target: Local Deployment {#speech_local type=local config=devices/speech_local.yaml}
Deploy directly on the current machine (requires NVIDIA GPU).
Wiring
- Ensure Docker and NVIDIA Container Toolkit are installed
- Click Deploy to start installation
Note: First startup may take 10-15 minutes for Docker image download and model initialization.
Deployment Complete
Speech service is running at http://localhost:8621. Quick test:
```shell
# Check service health
curl http://localhost:8621/health
# Expected: {"asr": true, "tts": true, "streaming_asr": true}
```
Troubleshooting
| Issue | Solution |
|---|---|
| NVIDIA runtime not found | Install NVIDIA Container Toolkit: sudo apt install nvidia-container-toolkit && sudo systemctl restart docker |
| Port 8621 already in use | Stop existing services on port 8621 |
| Container keeps restarting | Check logs: docker logs jetson-voice-speech-1 |
| Health check fails | First startup takes ~40 seconds for model warmup. Wait and retry |
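If you ever need to reproduce the GPU wiring by hand (for example while debugging a failed deploy), the compose stanza the deployer generates looks roughly like this. This is a sketch: the service and image names are assumptions, and the authoritative values live in devices/speech_local.yaml:

```yaml
services:
  speech:                                       # assumed service name
    image: example/jetson-voice-speech:latest   # hypothetical image tag
    runtime: nvidia                             # requires NVIDIA Container Toolkit
    ports:
      - "8621:8621"
    restart: unless-stopped
```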
Step 2: Deploy Reachy Voice Robot {#reachy_deploy type=docker_deploy required=true config=devices/reachy.yaml}
Deploy the robot control, conversation, and vision services to your Jetson. The system will automatically install Ollama and pull the AI model if not present.
Target: Remote Deployment {#reachy_remote type=remote config=devices/reachy.yaml default=true}
Deploy to your Jetson over SSH with one click.
Wiring
- Connect Reachy Mini to Jetson via USB cable
- Ensure the Jetson is on the network and SSH is accessible
- Enter the Jetson's IP address and SSH credentials
- Configure the data directory (default: ~/reachy-data) for captures and the face database
- Choose the Vision Backend:
  - Local (default) — runs vision-trt on this Jetson. Needs the USB camera attached and builds TensorRT engines on first boot (~3-5 min).
  - Cloud — skips the local vision-trt container and points the robot at a remote vision service. Fill in the Vision Service URL (e.g. tcp://192.168.1.50:8631) when selecting this option.
- Optionally enable Kiosk Mode to auto-launch the dashboard fullscreen on boot
- Click Deploy — the system will:
  - Install Ollama and pull the AI model if not present (~1.5 GB)
  - Pull and start robot control, conversation, and (for Local backend) vision services
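For orientation, the Vision Backend choice maps onto config entries along these lines. Every field name here is an assumption for illustration only; the shipped devices/reachy.yaml is authoritative:

```yaml
data_dir: ~/reachy-data    # captures and face database
kiosk_mode: false          # auto-launch dashboard fullscreen on boot
vision:
  backend: local           # "local" runs vision-trt here; "cloud" uses a remote service
  service_url: tcp://192.168.1.50:8631   # only read when backend is cloud
```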
Deployment Complete
The robot should start talking within 30 seconds after deployment. Open the dashboard to monitor activity:
http://<jetson-ip>:8640
Default mode: Monologue — the robot automatically generates "inner thoughts" every 5 seconds. No user interaction needed.
To check all services are running:
```shell
ssh user@<jetson-ip> "docker ps --format 'table {{.Names}}\t{{.Status}}'"
```
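To script that check, compare the running container names against the ones you expect. A minimal sketch; the required names suggested in the usage comment (reachy-daemon, vision-trt) are taken from the troubleshooting table below and may differ in your deployment:

```shell
# check_missing prints any required container names absent from a
# newline-separated list of running container names.
check_missing() {
  running="$1"; required="$2"; missing=""
  for name in $required; do
    echo "$running" | grep -qx "$name" || missing="$missing $name"
  done
  echo "$missing"
}

# Against a live host:
# check_missing "$(docker ps --format '{{.Names}}')" "reachy-daemon vision-trt"
```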
Troubleshooting
| Issue | Solution |
|---|---|
| Model download slow | The AI model is ~1.5 GB. Ensure stable internet. Progress shows in deployment logs |
| Slow reply (>10 s) | Ollama fell back to CPU. GPU is auto-configured by the deployer; if the issue persists: sudo systemctl restart ollama |
| Robot not moving | Check USB connection. Try replugging the USB cable and restart: docker restart reachy-daemon |
| No audio output | Verify Reachy Mini's built-in speaker is working. Check audio.device in config |
| Dashboard not loading | Wait 30 seconds for startup. Check: curl http://<jetson-ip>:8640/health |
| No camera feed | Vision service builds TRT engines on first boot (~5 min). Check: docker logs vision-trt |
| Camera not found on boot | USB camera takes 15-30s to enumerate. Vision service retries automatically (~90s) |
| Camera drops after hours | USB power-management regression. A udev rule disabling autosuspend is installed by the deployer; if it recurs, physically replug the Reachy USB cable |
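The autosuspend fix the deployer installs is an ordinary udev rule. A sketch of what such a rule looks like; the file path is hypothetical, and the idVendor/idProduct placeholders must be replaced with the IDs lsusb reports for your Reachy Mini:

```
# /etc/udev/rules.d/99-reachy-usb.rules (hypothetical path)
# Keep the robot's USB port powered: "on" disables runtime autosuspend.
ACTION=="add", SUBSYSTEM=="usb", ATTR{idVendor}=="xxxx", ATTR{idProduct}=="yyyy", TEST=="power/control", ATTR{power/control}="on"
```

After editing, reload with sudo udevadm control --reload-rules && sudo udevadm trigger, then replug the cable once.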
Target: Local Deployment {#reachy_local type=local config=devices/reachy_local.yaml}
Deploy directly on the current machine (requires NVIDIA Jetson with Reachy Mini connected).
Wiring
- Connect Reachy Mini to the machine via USB cable
- Ensure Docker and NVIDIA Container Toolkit are installed
- Click Deploy to start installation
Note: First startup may take 5-10 minutes for Docker image download and model initialization.
Deployment Complete
The robot should start talking within 30 seconds after deployment. Open the dashboard to monitor activity:
http://localhost:8640
Troubleshooting
| Issue | Solution |
|---|---|
| NVIDIA runtime not found | Install NVIDIA Container Toolkit: sudo apt install nvidia-container-toolkit && sudo systemctl restart docker |
| Robot not moving | Check USB connection. Try replugging the USB cable and restart: docker restart reachy-daemon |
| Dashboard not loading | Wait 30 seconds for startup. Check: curl http://localhost:8640/health |
| No camera feed | Vision service builds TRT engines on first boot (~5 min). Check: docker logs vision-trt |
Deployment Complete
Your Reachy Mini voice robot is now running!
What's Happening
The robot is in Monologue Mode — it automatically generates thoughts and speaks them out loud every few seconds, with matching head movements and antenna expressions. This is ideal for exhibitions and demos.
Service Overview
| Service | Port | Purpose |
|---|---|---|
| Robot Control | 38001 | Motor, camera, and sensor management |
| Conversation Engine | 8640 | AI dialogue + emotion system + dashboard |
| Vision Analysis | 8630 | Face detection, emotion recognition, person tracking |
| Local AI Model | 11434 | Powers the robot's thinking ability |
| Speech Service | 8621 | Listens and speaks (deployed in Step 1) |
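The ports above can be swept in one pass. A minimal sketch assuming a local deployment (swap localhost for your Jetson's IP on remote installs); a "no response" line means that one service is down, not that the whole stack failed:

```shell
# sweep_ports reports whether each given HTTP port answers on localhost.
sweep_ports() {
  for port in "$@"; do
    if curl -s --max-time 2 -o /dev/null "http://localhost:$port"; then
      echo "port $port: reachable"
    else
      echo "port $port: no response"
    fi
  done
}

# sweep_ports 38001 8640 8630 11434 8621
```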
Next Steps