config.yaml Reference
Introduction
Continue hub assistants are defined using the config.yaml specification. Assistants can be loaded from the Hub or locally:
- Continue Hub - YAML is stored on the hub and automatically synced to the extension
- Locally
  - in your global .continue folder (~/.continue on Mac, %USERPROFILE%\.continue on Windows) within .continue/assistants. The name of the file will be used as the display name of the assistant, e.g., My Assistant.yaml
  - in your workspace in a /.continue/assistants folder, with the same naming convention

Config YAML replaces config.json, which is deprecated. View the Migration Guide.
An assistant is made up of:
- Top level properties, which specify the name, version, and config.yaml schema for the assistant
- Block lists, which are composable arrays of coding assistant building blocks available to the assistant, such as models, docs, and context providers.

A block is a single standalone building block of a coding assistant, e.g., one model or one documentation source. In config.yaml syntax, a block consists of the same top-level properties as assistants (name, version, and schema), but only has ONE item under whichever block type it is.
Examples of blocks and assistants can be found on the Continue hub.
Assistants can either explicitly define blocks - see Properties below - or import and configure existing hub blocks.
Using Blocks
Hub blocks and assistants are identified with a slug in the format owner-slug/block-or-assistant-slug, where an owner can be a user or organization (for example, if you want to use the OpenAI 4o Model block, your slug would be openai/gpt-4o). These blocks are pulled from https://hub.continue.dev.

Blocks can be imported into an assistant by adding a uses clause under the block type. This can be alongside other uses clauses or explicit blocks of that type.
For example, the following assistant imports an Anthropic model and defines an Ollama DeepSeek one.
Assistant models section
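A minimal sketch of such a models section; the hub slug and Ollama model tag shown here are illustrative, not exact:

```yaml
models:
  # Imported from the hub via a uses clause (slug is illustrative)
  - uses: anthropic/claude-3.5-sonnet
  # Explicitly defined Ollama model (model tag is illustrative)
  - name: DeepSeek Coder
    provider: ollama
    model: deepseek-coder:6.7b
```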
Local Blocks
It is also possible to define blocks locally in a .continue folder. This folder can be located at either the root of your workspace (these will automatically be applied to all assistants when you are in that workspace) or in your home directory at ~/.continue (these will automatically be applied globally).
Place your YAML files in the following folders:
Assistants:
- .continue/assistants - for assistants

Blocks:
- .continue/rules - for rules
- .continue/models - for models
- .continue/prompts - for prompts
- .continue/context - for context providers
- .continue/docs - for docs
- .continue/data - for data
- .continue/mcpServers - for MCP Servers
You can find many examples of each of these block types on the Continue Explore Page.
Local blocks utilizing mustache notation for secrets (${{ secrets.SECRET_NAME }}) can read secret values:
- globally, from a .env file located in the global .continue folder (~/.continue/.env)
- per-workspace, from a .env file located at the root of the current workspace.
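For instance, a local model block saved at ~/.continue/models/my-model.yaml might read an API key from one of those .env files. The file name, block name, and model below are hypothetical:

```yaml
name: My Local Model Block
version: 0.0.1
schema: v1
models:
  - name: GPT-4o
    provider: openai
    model: gpt-4o
    # Resolved from OPENAI_API_KEY in ~/.continue/.env or the workspace .env
    apiKey: ${{ secrets.OPENAI_API_KEY }}
```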
Inputs
Blocks can be passed user inputs, including hub secrets and raw text values. To create a block that has an input, use mustache templating as follows:
Block config.yaml
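A sketch of a model block that declares an input, assuming inputs are referenced via the inputs namespace; the block slug, model, and input name are illustrative:

```yaml
name: myprofile/custom-model
version: 0.0.1
schema: v1
models:
  - name: My Favorite Model
    provider: anthropic
    model: claude-3-5-sonnet-latest
    # Filled in by whoever imports this block
    apiKey: ${{ inputs.ANTHROPIC_API_KEY }}
```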
Which can then be imported like this:
Assistant config.yaml
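A sketch of the importing assistant, assuming inputs are supplied under a with clause; slugs and secret names are illustrative:

```yaml
name: myprofile/custom-assistant
version: 0.0.1
schema: v1
models:
  - uses: myprofile/custom-model
    with:
      ANTHROPIC_API_KEY: ${{ secrets.MY_ANTHROPIC_API_KEY }}
```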
Note that hub secrets can be passed as inputs, using a similar mustache format: ${{ secrets.SECRET_NAME }}.
Overrides
Block properties can also be directly overridden using override. For example:
Assistant config.yaml
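A sketch of an override, assuming overridden properties are nested under an override key on the imported block; the slug and values are illustrative:

```yaml
name: myprofile/custom-assistant
version: 0.0.1
schema: v1
models:
  - uses: myprofile/custom-model
    with:
      ANTHROPIC_API_KEY: ${{ secrets.MY_ANTHROPIC_API_KEY }}
    override:
      roles:
        - chat
```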
Properties
Below are details for each property that can be set in config.yaml.

All properties at all levels are optional unless explicitly marked as required.

The top-level properties in the config.yaml configuration file are:
- name (required)
- version (required)
- schema (required)
- models
- context
- rules
- prompts
- docs
- mcpServers
- data
name
The name property specifies the name of your project or configuration.
config.yaml
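For example (the value is arbitrary):

```yaml
name: MyProject
```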
version
The version property specifies the version of your project or configuration.
schema
The schema property specifies the schema version used for the config.yaml, e.g. v1.
models
The models section defines the language models used in your configuration. Models are used for functionalities such as chat, editing, and summarizing.
Properties:
- name (required): A unique name to identify the model within your configuration.
- provider (required): The provider of the model (e.g., openai, ollama).
- model (required): The specific model name (e.g., gpt-4, starcoder).
- apiBase: Can be used to override the default API base that is specified per model.
- roles: An array specifying the roles this model can fulfill, such as chat, autocomplete, embed, rerank, edit, apply, summarize. The default value is [chat, edit, apply, summarize]. Note that the summarize role is not currently used.
- capabilities: Array of strings denoting model capabilities, which will overwrite Continue's autodetection based on provider and model. Supported capabilities include tool_use and image_input.
- maxStopWords: Maximum number of stop words allowed, to avoid API errors with extensive lists.
- promptTemplates: Can be used to override the default prompt templates for different model roles. Valid values are chat, edit, apply, and autocomplete. The chat property must be a valid template name, such as llama3 or anthropic.
- chatOptions: If the model includes role chat, these settings apply for Chat and Agent mode:
  - baseSystemMessage: Can be used to override the default system prompt for Chat mode.
- embedOptions: If the model includes role embed, these settings apply for embeddings:
  - maxChunkSize: Maximum tokens per document chunk. Minimum is 128 tokens.
  - maxBatchSize: Maximum number of chunks per request. Minimum is 1 chunk.
- defaultCompletionOptions: Default completion options for model settings.
  - contextLength: Maximum context length of the model, typically in tokens.
  - maxTokens: Maximum number of tokens to generate in a completion.
  - temperature: Controls the randomness of the completion. Values range from 0.0 (deterministic) to 1.0 (random).
  - topP: The cumulative probability for nucleus sampling.
  - topK: Maximum number of tokens considered at each step.
  - stop: An array of stop tokens that will terminate the completion.
  - reasoning: Boolean to enable thinking/reasoning for Anthropic Claude 3.7+ models.
  - reasoningBudgetTokens: Budget tokens for thinking/reasoning in Anthropic Claude 3.7+ models.
- requestOptions: HTTP request options specific to the model.
  - timeout: Timeout for each request to the language model.
  - verifySsl: Whether to verify SSL certificates for requests.
  - caBundlePath: Path to a custom CA bundle for HTTP requests.
  - proxy: Proxy URL for HTTP requests.
  - headers: Custom headers for HTTP requests.
  - extraBodyProperties: Additional properties to merge with the HTTP request body.
  - noProxy: List of hostnames that should bypass the specified proxy.
  - clientCertificate: Client certificate for HTTP requests.
    - cert: Path to the client certificate file.
    - key: Path to the client certificate key file.
    - passphrase: Optional passphrase for the client certificate key file.
Example
config.yaml
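A sketch of a models section exercising several of these properties; the model names, secret name, and option values are illustrative:

```yaml
models:
  - name: GPT-4o
    provider: openai
    model: gpt-4o
    apiKey: ${{ secrets.OPENAI_API_KEY }}
    roles:
      - chat
      - edit
      - apply
    defaultCompletionOptions:
      temperature: 0.5
      maxTokens: 2000
  - name: Local Autocomplete
    provider: ollama
    model: qwen2.5-coder:1.5b
    roles:
      - autocomplete
```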
context
The context section defines context providers, which supply additional information or context to the language models. Each context provider can be configured with specific parameters.

More information about usage/params for each context provider can be found here.
Properties:
- provider (required): The identifier or name of the context provider (e.g., code, docs, web)
- name: Optional name for the provider
- params: Optional parameters to configure the context provider's behavior.
Example:
config.yaml
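A sketch of a context section; the providers listed and the params value are illustrative:

```yaml
context:
  - provider: code
  - provider: docs
  - provider: diff
  - provider: web
    params:
      n: 5
  - provider: folder
  - provider: codebase
```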
rules
List of rules that the LLM should follow. These are concatenated into the system message for all Chat, Edit, and Agent requests. See the rules deep dive for details.
Explicit rules can either be simple text or an object with the following properties:
- name (required): A display name/title for the rule
- rule (required): The text content of the rule
- globs (optional): When files are provided as context that match this glob pattern, the rule will be included. This can be either a single pattern (e.g., "**/*.{ts,tsx}") or an array of patterns (e.g., ["src/**/*.ts", "tests/**/*.ts"]).
config.yaml
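A sketch combining a plain-text rule and an object rule; the rule text itself is illustrative:

```yaml
rules:
  - Always give concise responses
  - name: TypeScript conventions
    rule: Prefer interfaces for object shapes and avoid the any type.
    globs: "**/*.{ts,tsx}"
```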
prompts
A list of custom prompts that can be invoked from the chat window. Each prompt has a name, description, and the actual prompt text.
config.yaml
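A sketch of a prompts section; the prompt content is illustrative:

```yaml
prompts:
  - name: check
    description: Check for mistakes in my code
    prompt: |
      Please read the highlighted code and check for any mistakes, including
      syntax errors, typos, and logic problems.
```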
docs
List of documentation sites to index.
Properties:
- name (required): Name of the documentation site, displayed in dropdowns, etc.
- startUrl (required): Start page for crawling - usually root or intro page for docs
- maxDepth: Maximum link depth for crawling. Default 4
- favicon: URL for site favicon (default is /favicon.ico from startUrl).
- useLocalCrawling: Skip the default crawler and only crawl using a local crawler.
Example
config.yaml
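A sketch of a docs entry; the URLs and maxDepth value are illustrative:

```yaml
docs:
  - name: Continue
    startUrl: https://docs.continue.dev/intro
    favicon: https://docs.continue.dev/favicon.ico
    maxDepth: 3
```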
mcpServers
The Model Context Protocol is a standard proposed by Anthropic to unify prompts, context, and tool use. Continue supports any MCP server with the MCP context provider.
Properties:
- name (required): The name of the MCP server.
- command (required): The command used to start the server.
- args: An optional array of arguments for the command.
- env: An optional map of environment variables for the server process.
- cwd: An optional working directory to run the command in. Can be absolute or relative path.
- connectionTimeout: An optional connection timeout number to the server in milliseconds.
Example:
config.yaml
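A sketch of an mcpServers entry that launches a hypothetical SQLite MCP server; the command, arguments, path, and timeout are illustrative:

```yaml
mcpServers:
  - name: SQLite MCP Server
    command: uvx
    args:
      - mcp-server-sqlite
      - --db-path
      - ./data/test.db
    env:
      LOG_LEVEL: info
    connectionTimeout: 10000
```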
data
Destinations to which development data will be sent.
Properties:
- name (required): The display name of the data destination
- destination (required): The destination/endpoint that will receive the data. Can be:
  - an HTTP endpoint that will receive a POST request with a JSON blob
  - a file URL to a directory in which events will be dumped to .jsonl files
- schema (required): the schema version of the JSON blobs to be sent. Options include 0.1.0 and 0.2.0
- events: an array of event names to include. Defaults to all events if not specified.
- level: a pre-defined filter for event fields. Options include all and noCode; the latter excludes data like file contents, prompts, and completions. Defaults to all
- apiKey: api key to be sent with request (Bearer header)
- requestOptions: Options for event POST requests. Same format as model requestOptions.

Example:
config.yaml
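A sketch of a data section with one file destination and one HTTP destination; the URLs, paths, secret name, and event names are illustrative:

```yaml
data:
  - name: Local Data Dump
    destination: file:///Users/me/.continue/dev_data
    schema: 0.2.0
    level: all
  - name: Company Analytics
    destination: https://data.mycompany.example/ingest
    schema: 0.2.0
    level: noCode
    apiKey: ${{ secrets.COMPANY_DATA_API_KEY }}
    events:
      - autocomplete
      - chatInteraction
```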
Complete YAML Config Example
Putting it all together, here's a complete example of a config.yaml configuration file:
config.yaml
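A sketch of a full config.yaml that combines the sections above; all slugs, models, URLs, commands, and secret names are illustrative:

```yaml
name: MyProject
version: 0.0.1
schema: v1

models:
  - uses: anthropic/claude-3.5-sonnet
    with:
      ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
  - name: Local Autocomplete
    provider: ollama
    model: qwen2.5-coder:1.5b
    roles:
      - autocomplete

rules:
  - Give concise responses
  - Always assume TypeScript rather than JavaScript

prompts:
  - name: test
    description: Write unit tests for the highlighted code
    prompt: |
      Write a complete suite of unit tests for the highlighted code, including
      edge cases and any setup/teardown that is needed.

context:
  - provider: code
  - provider: docs
  - provider: diff
  - provider: folder
  - provider: codebase

docs:
  - name: Continue
    startUrl: https://docs.continue.dev/intro

mcpServers:
  - name: SQLite MCP Server
    command: uvx
    args:
      - mcp-server-sqlite
      - --db-path
      - ./data/test.db

data:
  - name: Local Data Dump
    destination: file:///Users/me/.continue/dev_data
    schema: 0.2.0
```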
Using YAML anchors to avoid config duplication
You can also use node anchors to avoid duplicating properties. To do so, you need to add the YAML version header %YAML 1.1. Here's an example of a config.yaml configuration file using anchors:
config.yaml
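A sketch using an anchor plus YAML merge keys to share provider settings between two models; the provider, models, apiBase, and secret name are illustrative:

```yaml
%YAML 1.1
---
name: MyProject
version: 0.0.1
schema: v1

# Shared settings, referenced below via the merge key <<:
model_defaults: &model_defaults
  provider: openai
  apiKey: ${{ secrets.OPENAI_COMPATIBLE_API_KEY }}
  apiBase: https://llm.example.com/v1

models:
  - name: mistral
    <<: *model_defaults
    model: mistral-7b-instruct
    roles:
      - chat
      - edit
  - name: qwen-coder
    <<: *model_defaults
    model: qwen2.5-coder-7b-instruct
    roles:
      - chat
      - edit
```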