
Ollama

Run Vibe Browser with 100% private, local models via Ollama. Your browsing data, DOM snapshots, and prompts never leave your machine.

Quick Start

1. Install Ollama

Download and install from ollama.com. Single binary, no Docker or Python needed.

2. Pull a Model

Browser automation requires strong reasoning and tool-use. We recommend Qwen 3.5 for the best experience.

# Recommended for Vibe Browser
ollama pull qwen3.5

# Lighter alternative
ollama pull llama3.1:8b

# Smallest, great for testing
ollama pull smollm2:1.7b

3. Connect Vibe Browser

Open Vibe Browser Settings and select Ollama (Self-Hosted) as your provider. It auto-connects to localhost:11434 and shows all your installed models.

Why use Ollama with Vibe Browser?

Absolute Privacy

Your browsing history, DOM snapshots, and prompts never leave your local machine. No cloud telemetry, no data harvesting.

Offline Automation

Automate intranet sites, local dev servers, and offline documents without needing an active internet connection.

Zero API Costs

Run unlimited automation tasks and process millions of tokens without worrying about usage limits or API billing.

Native Integration

Vibe Browser connects directly to Ollama's native API with full support for tool-calling capabilities out of the box.

Model Discovery

Vibe Browser automatically discovers models from your local Ollama instance. When you open the model dropdown in settings, it lists every installed model, queried live from the /api/tags endpoint.
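You can inspect the same data from the command line. A quick sketch, assuming Ollama is running on the default port (jq is optional, used here only to trim the JSON):

```shell
# List installed models as JSON -- the same endpoint Vibe Browser queries
curl -s http://localhost:11434/api/tags

# With jq installed, print just the model names
curl -s http://localhost:11434/api/tags | jq -r '.models[].name'
```

If a model you expect is missing from the dropdown, this is the fastest way to confirm whether Ollama itself knows about it.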

If you select a model from the recommended list that isn't installed yet, Vibe Browser automatically triggers a background pull via /api/pull to download it.
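The same background pull can be triggered manually against the native API. A sketch, assuming the default local endpoint (progress is streamed back as JSON lines):

```shell
# Download a model through the native API; progress streams as JSON lines
curl http://localhost:11434/api/pull -d '{"model": "qwen3.5"}'
```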

Recommended Models

Model            Description
qwen3.5          Default pick -- strong reasoning and tool use
llama3.1:8b      Meta's general-purpose model
deepseek-r1:8b   Strong reasoning and math
gemma3:4b        Google's efficient model
smollm2:1.7b     Smallest, runs on anything

Model size and memory footprint vary by quantization and runtime settings. Browse all models at ollama.com/library.
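To see what a model actually costs on your machine, the Ollama CLI reports both on-disk size and live memory usage:

```shell
# Show installed models and their on-disk sizes
ollama list

# Show currently loaded models and how much memory each is using
ollama ps
```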

Troubleshooting

Connection Refused

If Vibe Browser cannot connect, ensure the Ollama service is running:

curl http://localhost:11434

You should see: Ollama is running. If not, run ollama serve to start it.
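The check and restart can be combined into one guard. A minimal sketch for a manual install; the systemd line applies only to Linux installs that registered Ollama as a service:

```shell
# Start the server in the background only if the health check fails
if ! curl -sf http://localhost:11434 >/dev/null; then
  ollama serve &
fi

# On Linux installs that run Ollama as a systemd service, check it instead:
# systemctl status ollama
```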

CORS Issues

If running Ollama on a different machine, set the OLLAMA_ORIGINS environment variable:

OLLAMA_ORIGINS="*" ollama serve

Then update the Base URL in Vibe Browser settings to match your machine's IP (e.g. http://192.168.1.100:11434).
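Note that by default Ollama binds only to 127.0.0.1, so for remote access you typically also need OLLAMA_HOST in addition to OLLAMA_ORIGINS. A sketch, reusing the example IP above:

```shell
# Bind to all interfaces and allow cross-origin requests
OLLAMA_HOST="0.0.0.0:11434" OLLAMA_ORIGINS="*" ollama serve

# From the machine running Vibe Browser, verify reachability
curl http://192.168.1.100:11434
```

Exposing Ollama on all interfaces makes it reachable by anyone on the network, so only do this on a trusted LAN.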

Model Runs Slowly

Memory requirements vary by model and quantization (for example Q4 vs. F16). Try a smaller model (qwen3:4b or smollm2:1.7b), close other apps to free RAM, or switch to a quantized (Q4) variant for a lower memory footprint.
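Many library models publish explicit quantization tags alongside the default. A sketch; the exact tag below is illustrative, so check the model's page on ollama.com/library for the tags it actually offers:

```shell
# Inspect a model's parameters and quantization level
ollama show llama3.1:8b

# Pull an explicitly quantized variant (tag name varies by model)
ollama pull llama3.1:8b-instruct-q4_K_M
```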

Ready to run private AI in your browser?

Install Vibe Browser, connect Ollama, and start automating -- with zero data leaving your machine.