Llama.cpp

To use Llama.cpp with Continue, run the llama.cpp server and point the `apiBase` in your configuration at it. The server listens on port 8080 by default.
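
A minimal way to start the server is sketched below, assuming a recent llama.cpp build whose server binary is named `llama-server`; the model path is an illustrative placeholder for whichever GGUF file you want to serve:

```bash
# Serve a local GGUF model over HTTP (sketch; adjust paths and sizes).
# -m      path to the GGUF model file (placeholder below)
# -c      context window size in tokens
# --host  interface to bind; use 0.0.0.0 to accept remote connections
# --port  8080 matches the apiBase in the Continue config below
llama-server -m ./models/<MODEL_ID>.gguf -c 4096 --host 127.0.0.1 --port 8080
```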

Configuration

```yaml
name: My Config
version: 0.0.1
schema: v1

models:
  - name: <MODEL_NAME>
    provider: llama.cpp
    model: <MODEL_ID>
    apiBase: http://localhost:8080
```
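
Once the server is running, it can help to confirm the `apiBase` URL is reachable before reloading Continue. Recent llama.cpp server builds expose a health endpoint for this (treat this as a sketch if your build differs):

```bash
# Expect an "ok" status once the model has finished loading.
curl http://localhost:8080/health
```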