Embeddings model

An "embeddings model" is trained to convert a piece of text into a vector, which can later be rapidly compared to other vectors to determine similarity between the pieces of text. Embeddings models are typically much smaller than LLMs, and will be extremely fast and cheap in comparison.

In Continue, embeddings are generated during indexing and then used by @codebase to perform similarity search over your codebase.

If you have the ability to use any model, we recommend voyage-code-2, which is listed below along with the rest of the embeddings model options.

If you want to generate embeddings locally, we recommend using nomic-embed-text with Ollama.

Voyage AI

After obtaining an API key from here, you can configure Voyage AI like this:

config.json
{
  "embeddingsProvider": {
    "provider": "voyage",
    "model": "voyage-code-2",
    "apiKey": "<VOYAGE_API_KEY>"
  }
}
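
If you want to verify your key outside of Continue, here is a hedged TypeScript sketch of calling the Voyage AI embeddings endpoint directly. The endpoint URL and payload shape are assumptions based on Voyage's public REST API docs, so confirm them there:

// Sketch of a direct call to the Voyage AI embeddings REST API.
// Endpoint and request shape are assumptions; check Voyage's docs.
async function embedWithVoyage(texts: string[]): Promise<number[][]> {
  const response = await fetch("https://api.voyageai.com/v1/embeddings", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // Assumes the key is available as an environment variable.
      Authorization: `Bearer ${process.env.VOYAGE_API_KEY}`,
    },
    body: JSON.stringify({ input: texts, model: "voyage-code-2" }),
  });
  const { data } = await response.json();
  return data.map((d: { embedding: number[] }) => d.embedding);
}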

Ollama

See here for instructions on how to use Ollama for embeddings.
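For quick reference, a typical local-embeddings config looks like the following. This is a minimal sketch that assumes you have already pulled the model with `ollama pull nomic-embed-text`; see the linked instructions for full details:

config.json
{
  "embeddingsProvider": {
    "provider": "ollama",
    "model": "nomic-embed-text"
  }
}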

Transformers.js (currently VS Code only)

Transformers.js is a JavaScript port of the popular Transformers library. It allows embeddings to be calculated entirely locally. The model used is all-MiniLM-L6-v2, which is shipped alongside the Continue extension and used as the default when you have not explicitly configured an embeddings provider.

config.json
{
  "embeddingsProvider": {
    "provider": "transformers.js"
  }
}
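
For a sense of what happens under the hood, here is a standalone TypeScript sketch of computing an embedding with Transformers.js itself, using the @xenova/transformers package. This is illustrative only, not how the extension invokes the bundled model internally:

import { pipeline } from "@xenova/transformers";

// Load the same model the extension ships with. When run standalone like
// this, the weights are downloaded from the Hugging Face Hub on first use.
const extractor = await pipeline("feature-extraction", "Xenova/all-MiniLM-L6-v2");

// Mean-pool the token embeddings and normalize to get one vector per input.
const output = await extractor("function add(a, b) { return a + b; }", {
  pooling: "mean",
  normalize: true,
});
console.log(output.dims); // [1, 384]: a single 384-dimensional embedding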

Text Embeddings Inference

Hugging Face Text Embeddings Inference enables you to host your own embeddings endpoint. You can configure Continue to use your endpoint as follows:

config.json
{
  "embeddingsProvider": {
    "provider": "huggingface-tei",
    "apiBase": "http://localhost:8080"
  }
}
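
Once the server is running, you can sanity-check it directly. TEI exposes an /embed route that accepts an "inputs" field; this TypeScript sketch assumes the apiBase configured above:

// Quick sanity check against a local Text Embeddings Inference server.
const response = await fetch("http://localhost:8080/embed", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ inputs: "function add(a, b) { return a + b; }" }),
});
// TEI returns one embedding vector per input.
const [embedding] = await response.json();
console.log(embedding.length);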

OpenAI

See here for instructions on how to use OpenAI for embeddings.
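For orientation, an OpenAI embeddings config typically looks like this. A minimal sketch: the model name here is an assumption, so check the linked instructions for the supported options:

config.json
{
  "embeddingsProvider": {
    "provider": "openai",
    "model": "text-embedding-3-small",
    "apiKey": "<OPENAI_API_KEY>"
  }
}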

Cohere

See here for instructions on how to use Cohere for embeddings.
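A Cohere config follows the same shape. Again a sketch: the model name is an assumption based on Cohere's embed model family, so confirm it in the linked instructions:

config.json
{
  "embeddingsProvider": {
    "provider": "cohere",
    "model": "embed-english-v3.0",
    "apiKey": "<COHERE_API_KEY>"
  }
}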

Gemini

See here for instructions on how to use Gemini for embeddings.
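And a Gemini config follows the same pattern. This is a sketch: the provider and model names are assumptions, so confirm them against the linked instructions:

config.json
{
  "embeddingsProvider": {
    "provider": "gemini",
    "model": "models/text-embedding-004",
    "apiKey": "<GEMINI_API_KEY>"
  }
}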