A llamafile is a self-contained binary that bundles an open-source LLM with everything needed to run it. You can configure this provider in your config.yaml as follows:

config.yaml
```yaml
models:
  - name: Llamafile
    provider: llamafile
    model: mistral-7b
```

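Under the hood, this provider sends requests to a llamafile running in server mode, which exposes an OpenAI-compatible API (by default at http://localhost:8080). As a rough sketch of what such a request looks like, the snippet below posts a chat completion to a locally running llamafile; the port, endpoint path, and model name are assumptions for illustration and depend on how you launched the llamafile.

```python
import requests

# Minimal sketch: query a llamafile running in server mode.
# Assumes the default address http://localhost:8080 and an
# OpenAI-compatible chat completions endpoint.
response = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        # Illustrative name; the server runs whatever model the llamafile bundles.
        "model": "mistral-7b",
        "messages": [{"role": "user", "content": "Hello!"}],
    },
    timeout=60,
)
print(response.json()["choices"][0]["message"]["content"])
```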
View the source