How to Run Local Ollama Models in OpenClaw on a VPS Using SSH Reverse Tunneling

Access your laptop's powerful local LLMs (Gemma 4, Qwen2.5, GLM-4.7 Flash, etc.) directly from OpenClaw's web UI and CLI — no public exposure, no broken Venice models.

April 2026 • Tutorial • 5 min read

🎯 Why This Setup Rocks

OpenClaw is fantastic for managing multiple AI providers in one clean interface. But sometimes you want the best of both worlds: the convenience of OpenClaw running on an always-on VPS, plus the privacy and performance of models running locally on your own hardware.

The secret? A simple SSH reverse tunnel + a clean entry in openclaw.json. Your local models appear in the dropdown with nice names like "Gemma 4 26B (Local)" and switch instantly.

Here's exactly how to do it safely.

Step 1: Set Up the SSH Reverse Tunnel

From your laptop (where Ollama is already running), create the tunnel to your VPS:

ssh -L 18789:localhost:18789 -R 11434:localhost:11434 user@your-vps-ip

-R 11434:localhost:11434 is the key part: it lets the VPS reach your laptop's Ollama instance via port 11434 on the VPS's own loopback interface. (The -L 18789:localhost:18789 part forwards your laptop's port 18789 to the VPS, so you can also reach OpenClaw's web UI at http://localhost:18789 on the laptop, assuming it listens on that port.)

Keep this SSH session open (or use autossh / a systemd service for persistence).
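If you go the systemd route, a small service wrapping autossh keeps the tunnel alive across drops and reboots. A minimal sketch (the service name, paths, and user@your-vps-ip are placeholders to adjust; it assumes key-based SSH auth for the user running the service):

```ini
# /etc/systemd/system/ollama-tunnel.service  (on the laptop)
[Unit]
Description=Reverse SSH tunnel exposing local Ollama to the VPS
After=network-online.target
Wants=network-online.target

[Service]
# -M 0 disables autossh's monitor port; the ServerAlive* options
# detect dead connections so autossh can reconnect
ExecStart=/usr/bin/autossh -M 0 -N \
    -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
    -o "ExitOnForwardFailure yes" \
    -R 11434:localhost:11434 user@your-vps-ip
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable --now ollama-tunnel`.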

Test it from the VPS:

curl http://127.0.0.1:11434/api/tags

You should see your Ollama models listed.
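Ollama's /api/tags endpoint returns a JSON object with a models array, so if you just want the model names, you can pipe the response through jq:

```shell
# List model names only (run on the VPS, through the tunnel)
curl -s http://127.0.0.1:11434/api/tags | jq -r '.models[].name'
```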

Step 2: Add the Ollama Provider to openclaw.json

OpenClaw makes it easy to add local providers without touching your existing Venice (or other) models.

Here's the exact block you want to add (or merge) under the providers section:

{
  "provider": "ollama",
  "base_url": "http://127.0.0.1:11434",
  "local": true,
  "models": [
    {
      "id": "gemma4:26b",
      "name": "Gemma 4 26B (Local)",
      "context_length": 128000
    },
    {
      "id": "qwen2.5:32b",
      "name": "Qwen2.5 32B (Local)",
      "context_length": 128000
    },
    {
      "id": "qwen2.5:7b",
      "name": "Qwen2.5 7B (Local)",
      "context_length": 128000
    },
    {
      "id": "glm-4.7-flash:latest",
      "name": "GLM-4.7 Flash (Local)",
      "context_length": 128000
    }
  ]
}

💡 Pro Tip

Always back up your openclaw.json first and restart OpenClaw after editing.
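For example (the config path below is an assumption; use wherever your openclaw.json actually lives):

```shell
# Back up, then confirm the edited file is still valid JSON
CFG="$HOME/.config/openclaw/openclaw.json"   # adjust to your install
cp "$CFG" "$CFG.bak"
python3 -m json.tool "$CFG" > /dev/null && echo "valid JSON"
```

A stray comma or missing brace is the most common reason the new provider doesn't show up after a restart, so the validation step is worth the two seconds.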

Step 3: Verify Everything Works

Run this on the VPS:

openclaw models list

You should now see something like this (your Venice models stay untouched):

venice/zai-org-glm-4.7-flash ... alias:GLM 4.7 Flash
...
ollama/gemma4:26b ... alias:Gemma 4 26B (Local)
ollama/qwen2.5:32b ... alias:Qwen2.5 32B (Local)
ollama/qwen2.5:7b ... alias:Qwen2.5 7B (Local)
ollama/glm-4.7-flash:latest ... alias:GLM-4.7 Flash (Local)

Refresh your OpenClaw web UI — the local models will now appear in the model dropdown exactly as named.

Switching Models

Pick any of the new entries from the model dropdown in the web UI and it switches instantly. That's it. Your local models now work exactly like any other provider.
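As a quick end-to-end sanity check, you can also bypass OpenClaw entirely and ask Ollama for a completion through the tunnel (run on the VPS; the model id must match one you've actually pulled on the laptop):

```shell
# "stream": false makes Ollama return a single JSON object;
# the completion text is in its "response" field
curl -s http://127.0.0.1:11434/api/generate \
  -d '{"model": "qwen2.5:7b", "prompt": "Say hello.", "stream": false}' \
  | jq -r '.response'
```

If this prints a greeting, the tunnel, Ollama, and the model are all working, and any remaining issue is on the OpenClaw side.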

The Bottom Line

This setup gives you the privacy and performance of fully local models while keeping the polished OpenClaw experience on your VPS. No more choosing between "cloud convenience" and "local power": you get both.

Have you tried this setup yet? Drop your results or any tweaks in the comments!
