Access your laptop's powerful local LLMs (Gemma 4, Qwen2.5, GLM-4.7 Flash, etc.) directly from OpenClaw's web UI and CLI — no public exposure, no broken Venice models.
OpenClaw is fantastic for managing multiple AI providers in one clean interface. But sometimes you want the best of both worlds: the polished OpenClaw experience on your VPS, and the privacy and speed of models running entirely on your own hardware.
The secret? A simple SSH reverse tunnel + a clean entry in openclaw.json. Your local models appear in the dropdown with nice names like "Gemma 4 26B (Local)" and switch instantly.
Here's exactly how to do it safely.
From your laptop (where Ollama is already running), create the tunnel to your VPS:
ssh -L 18789:localhost:18789 -R 11434:localhost:11434 user@your-vps-ip
-R 11434:localhost:11434 is the key: it lets the VPS reach your laptop's Ollama instance on port 11434. (The -L flag forwards port 18789 in the other direction, so your laptop can reach the VPS's OpenClaw web UI.)
Keep this SSH session open (or use autossh / a systemd service for persistence).
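For persistence, a user-level systemd unit wrapping autossh is one option. A minimal sketch; the unit name, autossh path, and keep-alive options are assumptions, so adjust for your distro:

```ini
# ~/.config/systemd/user/ollama-tunnel.service
[Unit]
Description=Reverse SSH tunnel exposing local Ollama to the VPS
After=network-online.target

[Service]
# -M 0 disables autossh's monitor port; ServerAlive* handles dead-link detection
ExecStart=/usr/bin/autossh -M 0 -N \
    -o ServerAliveInterval=30 -o ServerAliveCountMax=3 \
    -R 11434:localhost:11434 user@your-vps-ip
Restart=always
RestartSec=5

[Install]
WantedBy=default.target
```

Enable it with `systemctl --user enable --now ollama-tunnel.service`. This assumes key-based SSH auth, since a background service can't prompt for a password.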
On the VPS, verify the tunnel is up:
curl http://127.0.0.1:11434/api/tags
You should see your Ollama models listed.
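If you want more than an eyeball check, a short Python sketch can pull the model names out of the /api/tags response. The response shape used here (a "models" list with "name" fields) is standard Ollama; the sample payload is illustrative, and only the commented lines hit the live tunnel:

```python
import json


def model_names(tags: dict) -> list[str]:
    """Extract model names from an Ollama /api/tags response."""
    return [m["name"] for m in tags.get("models", [])]


# Sample shaped like Ollama's /api/tags output (names are examples):
sample = {"models": [{"name": "gemma4:26b"}, {"name": "qwen2.5:7b"}]}
print(model_names(sample))  # → ['gemma4:26b', 'qwen2.5:7b']

# Against the live tunnel (uncomment on the VPS):
# from urllib.request import urlopen
# tags = json.load(urlopen("http://127.0.0.1:11434/api/tags"))
# print(model_names(tags))
```

An empty list here usually means the tunnel is up but Ollama has no models pulled yet.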
OpenClaw makes it easy to add local providers without touching your existing Venice (or other) models.
Here's the exact block you want to add (or merge) under the providers section:
{
  "provider": "ollama",
  "base_url": "http://127.0.0.1:11434",
  "local": true,
  "models": [
    {
      "id": "gemma4:26b",
      "name": "Gemma 4 26B (Local)",
      "context_length": 128000
    },
    {
      "id": "qwen2.5:32b",
      "name": "Qwen2.5 32B (Local)",
      "context_length": 128000
    },
    {
      "id": "qwen2.5:7b",
      "name": "Qwen2.5 7B (Local)",
      "context_length": 128000
    },
    {
      "id": "glm-4.7-flash:latest",
      "name": "GLM-4.7 Flash (Local)",
      "context_length": 128000
    }
  ]
}
Always back up your openclaw.json first and restart OpenClaw after editing.
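If you'd rather script the backup-and-merge instead of hand-editing, here's a minimal Python sketch. The config file path and the top-level "providers" list shape are assumptions about openclaw.json; only one model entry is shown for brevity:

```python
import json
import pathlib
import shutil

OLLAMA_PROVIDER = {
    "provider": "ollama",
    "base_url": "http://127.0.0.1:11434",
    "local": True,
    "models": [
        {"id": "gemma4:26b", "name": "Gemma 4 26B (Local)", "context_length": 128000},
    ],
}


def merge_provider(config: dict, provider: dict) -> dict:
    """Append the provider unless one with the same name is already present."""
    providers = config.setdefault("providers", [])
    if not any(p.get("provider") == provider["provider"] for p in providers):
        providers.append(provider)
    return config


def update_config(path: pathlib.Path) -> None:
    shutil.copy(path, path.with_suffix(".json.bak"))  # back up first
    config = json.loads(path.read_text())
    path.write_text(json.dumps(merge_provider(config, OLLAMA_PROVIDER), indent=2))


# Path is an assumption; point it at wherever your openclaw.json lives:
# update_config(pathlib.Path("~/.openclaw/openclaw.json").expanduser())
```

The idempotency check means you can re-run it safely; it won't duplicate the ollama entry or touch your existing providers.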
Run this on the VPS:
openclaw models list
You should now see something like this (your Venice models stay untouched):
venice/zai-org-glm-4.7-flash ... alias:GLM 4.7 Flash
...
ollama/gemma4:26b ... alias:Gemma 4 26B (Local)
ollama/qwen2.5:32b ... alias:Qwen2.5 32B (Local)
ollama/qwen2.5:7b ... alias:Qwen2.5 7B (Local)
ollama/glm-4.7-flash:latest ... alias:GLM-4.7 Flash (Local)
Refresh your OpenClaw web UI — the local models will now appear in the model dropdown exactly as named.
In the CLI, type /model to access the model switching options. That's it. Your local models now work exactly like any other provider.
Tip: run autossh or a systemd service on your laptop so the tunnel survives reboots, and to add more local models later, just extend the models array.
The bottom line: This setup gives you the privacy and performance of fully local models while keeping the polished OpenClaw experience on your VPS. No more choosing between "cloud convenience" and "local power": you get both.
Have you tried this setup yet? Drop your results or any tweaks in the comments!