Open Source — AGPL-3.0

Manage your Ollama fleet from any browser

The self-hosted control plane for Ollama AI servers. Monitor GPU resources, deploy models, and chat — across every server, from one dashboard.

OllamaHelm Dashboard showing real-time server metrics, running models, and resource usage

Web-Based

Access from any browser. No desktop app to install. Works on tablets and remote machines.

Multi-Server Fleet

Manage 2 or 200 Ollama servers from one dashboard. No other tool does this.

Self-Hosted

Your servers, your data, your network. Docker Compose up and you're running.

Open Source

AGPL-3.0. Read every line of code. No vendor lock-in. Community-driven.

Features

Everything you need to manage Ollama

From model management to fleet monitoring, OllamaHelm provides a complete toolkit for your local AI infrastructure.

Fleet Monitoring

PRO

See every server at a glance. VRAM usage, disk space, running models, and connection status across your entire fleet — updated in real time.

Fleet monitoring dashboard showing multi-server VRAM and disk usage
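Per-server VRAM numbers like these can be derived from each server's `/api/ps` endpoint, which Ollama uses to report running models along with a `size_vram` byte count. A minimal aggregation sketch (the helper and sample payloads are illustrative, not OllamaHelm's actual code):

```python
def fleet_vram(ps_responses):
    """Sum VRAM in use per server from Ollama /api/ps payloads.

    ps_responses: dict mapping server name -> parsed JSON from
    GET /api/ps, i.e. {"models": [{"name": ..., "size_vram": ...}, ...]}.
    Returns a dict of server name -> bytes of VRAM in use.
    """
    usage = {}
    for server, payload in ps_responses.items():
        usage[server] = sum(m.get("size_vram", 0) for m in payload.get("models", []))
    return usage

# Sample payloads in the shape Ollama's /api/ps returns (server names are made up):
sample = {
    "gpu-1": {"models": [{"name": "llama3:8b", "size_vram": 6_000_000_000}]},
    "gpu-2": {"models": []},
}
```

A dashboard poller would call `/api/ps` on every enabled server on an interval and feed the responses through a helper like this.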

Live Dashboard

Real-time metrics for your active server. Running models, memory allocation, disk usage, and recent activity — all on one screen.

Dashboard with live server metrics and running model information

Streaming Chat

Chat with any model on any server. Thinking tokens, generation stats, time-to-first-token, conversation history, and export — all built in.

Chat interface with streaming responses and code highlighting
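The generation stats come straight from the final chunk of Ollama's streaming `/api/chat` response, which carries nanosecond timings (`eval_duration`, `load_duration`, `prompt_eval_duration`) and token counts (`eval_count`). A sketch of how display values can be derived from that chunk (the approximation of time-to-first-token as load plus prompt-evaluation time is our assumption, not OllamaHelm's exact formula):

```python
def generation_stats(final_chunk):
    """Derive display stats from the last (done=True) chunk of an
    Ollama /api/chat stream. All *_duration fields are nanoseconds."""
    eval_count = final_chunk["eval_count"]
    eval_ns = final_chunk["eval_duration"]
    return {
        "tokens": eval_count,
        "tokens_per_sec": round(eval_count / (eval_ns / 1e9), 1),
        # Approximation: time-to-first-token ~ model load + prompt evaluation
        "ttft_sec": round((final_chunk.get("load_duration", 0)
                           + final_chunk.get("prompt_eval_duration", 0)) / 1e9, 2),
    }

# Example final chunk with the fields Ollama reports (values are made up):
chunk = {"done": True, "eval_count": 120, "eval_duration": 4_000_000_000,
         "load_duration": 500_000_000, "prompt_eval_duration": 250_000_000}
```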

Model Library

Browse the full Ollama model library. Filter by capability — vision, tools, thinking, code. See which models are already installed across your servers.

Model library with capability badges and installed indicators
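Capability filtering of this kind reduces to tag matching over the library index. A sketch using a hypothetical record shape (the `capabilities` and `installed_on` fields are illustrative; OllamaHelm's actual schema may differ):

```python
def filter_models(models, capability, installed_on=None):
    """Filter a model list by capability tag and, optionally, by
    whether the model is installed on a given server.

    models: list of dicts like
      {"name": "llava:13b", "capabilities": ["vision"], "installed_on": ["gpu-1"]}
    (this shape is a sketch, not OllamaHelm's actual schema).
    """
    hits = [m for m in models if capability in m.get("capabilities", [])]
    if installed_on is not None:
        hits = [m for m in hits if installed_on in m.get("installed_on", [])]
    return [m["name"] for m in hits]

# A tiny sample library (entries are made up):
library = [
    {"name": "llava:13b", "capabilities": ["vision"], "installed_on": ["gpu-1"]},
    {"name": "qwen2.5-coder:7b", "capabilities": ["code", "tools"], "installed_on": []},
]
```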

Model Factory

PRO

Build custom models with tailored system prompts and inference parameters. AI-assisted prompt generation. Deploy to multiple servers at once with status tracking.

Create custom models with AI-generated prompts and deploy across your fleet

Download Manager

Pull models with real-time progress bars and speed tracking. Queue multiple downloads across different servers. Cancel, retry, and track history.

Download manager showing model pull progress with speed and ETA
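Ollama's `/api/pull` endpoint streams progress as JSON lines; layer-download events carry `total` and `completed` byte counts alongside a `status` string. A minimal sketch of turning one stream line into a progress reading (this helper is illustrative, not OllamaHelm's actual code):

```python
import json

def pull_progress(stream_line):
    """Convert one JSON line from Ollama's streaming /api/pull response
    into (status, percent). Download events carry "total" and "completed"
    byte counts; other events (e.g. "pulling manifest") have only "status",
    for which percent is None."""
    event = json.loads(stream_line)
    total = event.get("total")
    completed = event.get("completed", 0)
    percent = round(100 * completed / total, 1) if total else None
    return event.get("status", ""), percent
```

Speed and ETA follow from sampling `completed` over time; a UI would feed each line of the HTTP stream through a helper like this.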

Server Management

Add, test, and organize your Ollama server connections. Encrypted credentials, connection testing, enable/disable without deleting. Import and export configurations.

Server management page with connection status and details
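Import/export of server configurations amounts to a versioned JSON round-trip with validation on the way back in. A sketch under an assumed schema (the `version`/`name`/`url`/`enabled` fields are hypothetical, not OllamaHelm's actual export format; real exports would also need to handle credentials, which should stay encrypted at rest):

```python
import json

def export_servers(servers):
    """Serialize server connection entries to a versioned JSON string.
    Schema here is a hypothetical sketch of an export format."""
    return json.dumps({"version": 1, "servers": servers}, indent=2)

def import_servers(blob):
    """Parse a previously exported configuration, keeping only entries
    that carry the required "name" and "url" fields."""
    data = json.loads(blob)
    return [s for s in data.get("servers", []) if {"name", "url"} <= s.keys()]

# A one-server fleet (values are made up):
fleet = [{"name": "gpu-1", "url": "http://10.0.0.5:11434", "enabled": True}]
```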

OllamaHelm vs. the alternatives

See how OllamaHelm compares to other Ollama management tools.

| Feature | OllamaHelm | OllaMan | Open WebUI | LM Studio |
|---|---|---|---|---|
| Web-based (any browser) | Yes | | | |
| Multi-server management | Unlimited (Pro) | | | |
| Fleet monitoring dashboard | Pro | | | |
| Model library & install | Yes | | | |
| Streaming chat | Yes | | | |
| Custom model creation | Pro | | | |
| SSO / OIDC | Enterprise | | Enterprise | |
| RBAC (role-based access) | Enterprise | | Enterprise | |
| Open source | AGPL-3.0 | | Custom | |
| Self-hosted | Docker | | | |
| Pricing | Free / $99 perpetual | $9.90 - $19.90 | Free / Enterprise | Free / Enterprise |

Simple, transparent pricing

Free for hobbyists. Pay once, keep it forever, with lifetime updates. Priority support included for 1 year.

Free

Perfect for solo developers and home labs.

$0 /forever
Get Started
  • Up to 3 servers
  • Dashboard with live metrics
  • Model browsing & install
  • Streaming chat with thinking tokens
  • Download manager
  • Local auth with 2FA
RECOMMENDED

Pro

For teams managing shared Ollama infrastructure.

$99 /perpetual

or $5/mo subscription

$49 perpetual for early adopters
Upgrade to Pro
  • Everything in Free
  • Unlimited servers
  • Fleet monitoring dashboard
  • Model Factory + multi-server deployment
  • Server import/export
  • Deployment audit trail
  • Lifetime updates — all future versions included
  • 1 year of priority support (renew $49/yr)

Enterprise

For organizations with compliance and governance needs.

Custom /contact sales
Contact Sales
  • Everything in Pro
  • SSO/OIDC (Okta, Azure AD, Keycloak)
  • RBAC (admin, operator, viewer)
  • Team workspaces
  • Usage analytics
  • Model governance
  • REST API
  • SLA + dedicated support

Up and running in 30 seconds

One file. One command. Your fleet manager is ready.

1. Create docker-compose.yml
services:
  app:
    build: .
    ports:
      - "${APP_PORT:-8000}:8000"
    volumes:
      - ollamahelm-storage:/app/storage
    environment:
      - APP_ENV=${APP_ENV:-production}
      - APP_DEBUG=${APP_DEBUG:-false}
      - APP_URL=${APP_URL:-http://localhost:${APP_PORT:-8000}}
      - OLLAMA_DEFAULT_HOST=${OLLAMA_DEFAULT_HOST:-http://host.docker.internal:11434}
    extra_hosts:
      - "host.docker.internal:host-gateway"
    restart: unless-stopped

volumes:
  ollamahelm-storage:
2. Start it up
docker compose up -d
3. Open your browser
http://localhost:8000
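Every setting in the compose file above is read from an environment variable with a sensible default (the `${VAR:-default}` substitutions). To override them, drop a `.env` file next to `docker-compose.yml`; the values below are examples, not required settings:

```shell
# .env -- overrides for the ${VAR:-default} substitutions in docker-compose.yml
APP_PORT=8080
APP_ENV=production
APP_DEBUG=false
APP_URL=http://localhost:8080
# Point at an Ollama server on the Docker host (the default) or elsewhere on your network
OLLAMA_DEFAULT_HOST=http://host.docker.internal:11434
```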


Start managing your Ollama fleet in 30 seconds

Free for up to 3 servers. No credit card. No sign-up wall. Just Docker Compose up and go.