Get OllamaHelm up and running in under a minute. All you need is Docker and a running Ollama instance.
Create a new directory and add a `docker-compose.yml` file:

```yaml
services:
  app:
    build: .
    ports:
      - "${APP_PORT:-8000}:8000"
    volumes:
      - ollamahelm-storage:/app/storage
    environment:
      - APP_ENV=${APP_ENV:-production}
      - APP_DEBUG=${APP_DEBUG:-false}
      - APP_URL=${APP_URL:-http://localhost:${APP_PORT:-8000}}
      - OLLAMA_DEFAULT_HOST=${OLLAMA_DEFAULT_HOST:-http://host.docker.internal:11434}
    extra_hosts:
      - "host.docker.internal:host-gateway"
    restart: unless-stopped

volumes:
  ollamahelm-storage:
```
By default, OllamaHelm connects to Ollama at `http://host.docker.internal:11434`. Override this with the `OLLAMA_DEFAULT_HOST` environment variable.
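Docker Compose substitutes these variables from a `.env` file placed next to `docker-compose.yml`, so the override can live there. A sketch with hypothetical values (the IP below is an example, not a default):

```
# .env (hypothetical values; Compose substitutes them into docker-compose.yml)
APP_PORT=8000
OLLAMA_DEFAULT_HOST=http://192.168.1.50:11434
```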
Start the stack:

```shell
docker compose up -d
```

Open http://localhost:8000 in your browser. Register a new account; the first user is created automatically. You'll land on the dashboard.
Navigate to Servers in the sidebar and click Add Server. Enter your Ollama server's URL (e.g., `http://host.docker.internal:11434` for a local instance). Click Test Connection to verify, then save.
Go to Discover in the sidebar to browse the Ollama model library. Find a model you like (try `llama3.2`), click it, and hit Pull. Track progress on the Downloads page.
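The progress tracking maps onto Ollama's `/api/pull` endpoint, which streams newline-delimited JSON chunks carrying `total` and `completed` byte counts. A sketch of turning those chunks into percentages (the chunk shapes below are illustrative samples, not captured output):

```python
import json

def pull_progress(lines):
    """Yield (status, percent) tuples from Ollama-style /api/pull
    streaming chunks. Chunks without byte counts yield percent=None."""
    for line in lines:
        chunk = json.loads(line)
        total, done = chunk.get("total"), chunk.get("completed")
        pct = round(100 * done / total, 1) if total and done is not None else None
        yield chunk.get("status", ""), pct

# Hypothetical sample chunks mimicking a streamed pull
sample = [
    '{"status": "pulling manifest"}',
    '{"status": "pulling sha256:abc", "total": 2000, "completed": 500}',
    '{"status": "pulling sha256:abc", "total": 2000, "completed": 2000}',
    '{"status": "success"}',
]
for status, pct in pull_progress(sample):
    print(status, pct)
# pulling manifest None
# pulling sha256:abc 25.0
# pulling sha256:abc 100.0
# success None
```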
Head to Chat in the sidebar. Select a model from the dropdown and start a conversation. You'll see streaming responses with thinking tokens, generation stats, and time-to-first-token tracking.
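Under the hood, Ollama's `/api/chat` endpoint streams newline-delimited JSON chunks, each carrying an incremental `message.content`, with the final chunk setting `done: true` and attaching generation counters. A sketch of assembling such a stream and deriving a tokens-per-second stat (the sample chunks are illustrative, not captured output):

```python
import json

def assemble_stream(lines):
    """Concatenate incremental message content from Ollama-style
    streaming chat chunks and collect stats from the final chunk."""
    text, stats = [], {}
    for line in lines:
        chunk = json.loads(line)
        text.append(chunk.get("message", {}).get("content", ""))
        if chunk.get("done"):
            stats = {k: chunk[k] for k in ("eval_count", "eval_duration") if k in chunk}
    return "".join(text), stats

# Hypothetical sample chunks mimicking a streamed reply
sample = [
    '{"message": {"role": "assistant", "content": "Hel"}, "done": false}',
    '{"message": {"role": "assistant", "content": "lo!"}, "done": false}',
    '{"message": {"role": "assistant", "content": ""}, "done": true,'
    ' "eval_count": 12, "eval_duration": 480000000}',
]
reply, stats = assemble_stream(sample)
print(reply)  # Hello!

# eval_duration is in nanoseconds, so tokens/s = count / (duration / 1e9)
tokens_per_s = stats["eval_count"] / (stats["eval_duration"] / 1e9)
print(round(tokens_per_s, 1))  # 25.0
```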
To unlock unlimited servers, fleet monitoring, and Model Factory, switch the app service to the Pro image and set your license key. In `docker-compose.yml`, replace `build: .` with the Pro image and add the key to the `environment` block:

```yaml
    image: ollamahelm/ollamahelm-pro:latest
    environment:
      - OLLAMAHELM_LICENSE_KEY=your-key-here
```

Then pull and restart:

```shell
docker compose pull && docker compose up -d
```

You can also enter your license key in the app at Settings → License.