A llamafile is a self-contained executable that bundles an open-source LLM together with everything needed to run it. You can configure this provider in your configuration file as follows:

```yaml
models:
  - name: Llamafile
    provider: llamafile
    model: mistral-7b
```
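Before the provider can connect, the llamafile itself must be running locally. A minimal sketch of starting one (the filename below is illustrative; substitute whichever llamafile you downloaded):

```shell
# Mark the downloaded llamafile as executable, then start it.
# By default it serves a local HTTP API that the provider can reach.
chmod +x mistral-7b-instruct.llamafile
./mistral-7b-instruct.llamafile
```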