Ollama (Local)

Ollama runs LLMs on your own machine, so polish requests never leave it; this suits privacy-first or offline workflows.

Official Documentation

https://ollama.com

Prepare Local Runtime

  1. Install Ollama and confirm the daemon is running.
  2. Pull at least one local model, for example: ollama pull llama3.2.
  3. Verify the local endpoint is reachable before running the OpenTypeless connection test.
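The readiness steps above can be sanity-checked from code before touching OpenTypeless. A minimal sketch (the helper name is illustrative) that probes Ollama's default port via its native /api/tags route, which lists locally pulled models:

```python
import json
from urllib import request, error


def ollama_reachable(base: str = "http://localhost:11434", timeout: float = 2.0) -> bool:
    """Return True if an Ollama daemon answers on `base`.

    /api/tags responds with {"models": [...]} when the daemon is up.
    """
    try:
        with request.urlopen(f"{base}/api/tags", timeout=timeout) as resp:
            data = json.load(resp)
        return isinstance(data.get("models"), list)
    except (error.URLError, OSError, ValueError):
        return False


if __name__ == "__main__":
    print("ollama up:", ollama_reachable())
```

If this prints False, fix the daemon first; no OpenTypeless setting will compensate for an unreachable endpoint.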

Configure in OpenTypeless

  1. Settings > AI Polish > Provider: Ollama.
  2. Base URL: http://localhost:11434/v1.
  3. Model: llama3.2 (or your local model).
  4. If the API key field must be non-empty for the Test button, use a placeholder such as ollama-local; a stock Ollama install does not validate it.
  5. Run Test.
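The Test button issues an OpenAI-style chat request under the hood. A hedged sketch of what such a request looks like (the function name, system prompt, and field layout here are illustrative assumptions, not OpenTypeless internals):

```python
import json


def build_polish_request(base_url: str, model: str, text: str) -> tuple[str, bytes]:
    """Compose the URL and JSON body for an OpenAI-compatible chat call."""
    url = base_url.rstrip("/") + "/chat/completions"
    body = {
        "model": model,
        "messages": [
            {"role": "system", "content": "Polish the user's text. Return only the revised text."},
            {"role": "user", "content": text},
        ],
    }
    return url, json.dumps(body).encode("utf-8")


url, payload = build_polish_request("http://localhost:11434/v1", "llama3.2", "teh quick brwon fox")
```

POSTing that payload to the composed URL (Content-Type: application/json) is equivalent to what a successful Test exercises.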

Recommended Configuration

OpenTypeless calls Ollama through its OpenAI-compatible API, so only the base URL and model differ from a cloud provider setup.

Provider: ollama
Base URL: http://localhost:11434/v1
Endpoint: /chat/completions
Default model: llama3.2

Important

⚠️ Most failures are local runtime issues (the daemon is down or the model has not been pulled), not prompt-quality problems.
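Those two failure modes can be told apart with Ollama's native model listing: if the HTTP call to /api/tags fails outright, the daemon is down; if it succeeds, check whether the configured model appears. A sketch (function name is illustrative) that classifies a parsed /api/tags response:

```python
def diagnose(tags_json: dict, wanted: str) -> str:
    """Classify readiness from a parsed /api/tags response.

    Ollama reports pulled models under "models", each with a "name"
    such as "llama3.2:latest"; a bare model name matches any tag.
    """
    names = [m.get("name", "") for m in tags_json.get("models", [])]
    if any(n == wanted or n.split(":")[0] == wanted for n in names):
        return "ok"
    return "model missing"


# Abbreviated example of the /api/tags response shape:
sample = {"models": [{"name": "llama3.2:latest"}]}
```

"model missing" means the fix is `ollama pull <model>`, not a change to the OpenTypeless prompt or settings.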