Get started with Llama.cpp

Configuration

config.yaml

```yaml
name: My Config
version: 0.0.1
schema: v1

models:
  - name: <MODEL_NAME>
    provider: llama.cpp
    model: <MODEL_ID>
    apiBase: http://localhost:8080
```
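For illustration, a filled-in configuration might look like the following sketch. The model name and ID here are hypothetical placeholders, not values from this guide; `apiBase` should point at wherever your llama.cpp server is listening (port 8080 in the template above).

```yaml
name: My Config
version: 0.0.1
schema: v1

models:
  - name: Llama 3 8B            # display name shown in the UI (hypothetical)
    provider: llama.cpp
    model: llama-3-8b-instruct  # model identifier (hypothetical)
    apiBase: http://localhost:8080
```

Before this config can be used, the llama.cpp server must already be running and serving a model, e.g. `llama-server -m your-model.gguf --port 8080` (flag names assume a recent llama.cpp build; adjust the GGUF model path to your own file).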