Configure Ollama with Continue to run powerful open source models locally, including Llama 3.1 for chat, Qwen2.5-Coder for autocomplete, and Nomic Embed Text for embeddings
Ollama is an open-source tool that lets you run large language models (LLMs) locally on your own computer. To use Ollama, install it from the Ollama website and download the model you want to run with the `ollama run` command.
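For example, to fetch the models mentioned above you might run the commands below (the model tags are illustrative; check the Ollama library for the exact names and sizes you want):

```bash
# Download and start an interactive chat session with Llama 3.1
ollama run llama3.1

# Pull models without starting a session, e.g. for autocomplete and embeddings
ollama pull qwen2.5-coder:1.5b
ollama pull nomic-embed-text
```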
Ollama models usually have their capabilities auto-detected correctly. However, if you’re using custom model names or experiencing issues with tools/images not working, you can explicitly set capabilities:
config.yaml
```yaml
models:
  - name: Custom Vision Model
    provider: ollama
    model: my-custom-llava
    capabilities:
      - tool_use # Enable if your model supports function calling
      - image_input # Enable for vision models like llava
```
Most standard Ollama models (like `llama3.1`, `mistral`, etc.) support tool use by default. Vision models (like `llava`) also support image input.
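Putting it together, here is a minimal config.yaml sketch wiring up the three models from the introduction. The model names, tags, and `roles` values below are assumptions based on a recent Continue YAML schema and common Ollama library tags; adjust them for your setup:

```yaml
models:
  - name: Llama 3.1 8B
    provider: ollama
    model: llama3.1:8b
    roles:
      - chat
  - name: Qwen2.5-Coder 1.5B
    provider: ollama
    model: qwen2.5-coder:1.5b
    roles:
      - autocomplete
  - name: Nomic Embed Text
    provider: ollama
    model: nomic-embed-text
    roles:
      - embed
```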