Ollama (Local)
Ollama enables local LLM polishing and is suited for privacy-first or offline-friendly workflows.
Prepare Local Runtime
- Install Ollama and confirm the daemon is running.
- Pull at least one local model, for example: ollama pull llama3.2.
- Verify the local endpoint is reachable before running the OpenTypeless test.
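The last step above can be sketched as a quick shell check. This assumes the default Ollama port 11434 (adjust if you have set OLLAMA_HOST) and that curl is available:

```shell
# Check whether the Ollama daemon answers on the default local endpoint.
# /api/tags lists the locally pulled models; it needs no API key.
if curl -sf http://localhost:11434/api/tags >/dev/null; then
  status="reachable"
else
  status="not reachable"
fi
echo "daemon: $status"
```

If the daemon is reachable but the output of /api/tags does not include your model, run ollama pull again before testing in OpenTypeless.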
Configure in OpenTypeless
- Settings > AI Polish > Provider: Ollama.
- Base URL: http://localhost:11434/v1.
- Model: llama3.2 (or your local model).
- If the API key field must be non-empty for the Test button, enter a placeholder such as ollama-local; the local daemon does not require a real key by default.
- Run Test.
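To check the same path outside the app, you can send a minimal request to the OpenAI-compatible endpoint yourself. This is a hedged sketch of what the Test button is assumed to exercise, using the base URL, model, and placeholder key from the settings above:

```shell
# Minimal OpenAI-compatible chat completion against the local Ollama daemon.
# The Authorization header carries the placeholder key; Ollama does not
# validate it by default. Falls back to a message if the request fails.
resp=$(curl -sf http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ollama-local" \
  -d '{"model": "llama3.2", "messages": [{"role": "user", "content": "ping"}]}' \
  || echo "request failed: is the daemon running and the model pulled?")
echo "$resp"
```

A successful response is a JSON object whose choices array contains the model's reply; a failure here usually means the same thing the in-app Test would report.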
Recommended Configuration
OpenTypeless calls Ollama through its OpenAI-compatible endpoint.
Provider: ollama
Base URL: http://localhost:11434/v1
Endpoint: /chat/completions
Default model: llama3.2
Important
Most failures are local runtime issues (daemon down or model missing), not prompt quality problems.
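The two failure modes above can be told apart with a small triage script. This is a sketch under the same assumptions as before (default base URL, model llama3.2, curl and grep available):

```shell
# Distinguish "daemon down" from "model missing" before blaming the prompt.
base="http://localhost:11434"
model="llama3.2"
if ! curl -sf "$base/api/tags" >/dev/null; then
  diagnosis="daemon not running (start Ollama, then retry)"
elif ! curl -sf "$base/api/tags" | grep -q "\"$model\""; then
  diagnosis="model missing (run: ollama pull $model)"
else
  diagnosis="runtime looks healthy"
fi
echo "$diagnosis"
```

If the script reports a healthy runtime but the in-app Test still fails, re-check the Base URL and model name entered in OpenTypeless.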