OpenAI
info
You can get an API key from the OpenAI console.
Chat model
We recommend configuring GPT-4o as your chat model.
- YAML
- JSON
config.yaml
models:
  - name: GPT-4o
    provider: openai
    model: gpt-4o
    apiKey: <YOUR_OPENAI_API_KEY>
config.json
{
  "models": [
    {
      "title": "GPT-4o",
      "provider": "openai",
      "model": "gpt-4o",
      "apiKey": "<YOUR_OPENAI_API_KEY>"
    }
  ]
}
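If you want to sanity-check the key and model outside of Continue first, a minimal request with the official OpenAI Python SDK might look like the sketch below (the placeholder key mirrors the config above; substitute your real one):

```python
# Minimal sketch: confirm the same API key and model from config.yaml / config.json
# work against the OpenAI API, using the official `openai` Python SDK (v1+).
from openai import OpenAI

client = OpenAI(api_key="<YOUR_OPENAI_API_KEY>")  # same key as in the config above

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Reply with one short sentence."}],
)
print(response.choices[0].message.content)
```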
Autocomplete model
OpenAI currently does not offer any autocomplete models.
Click here to see a list of autocomplete model providers.
Embeddings model
We recommend configuring text-embedding-3-large as your embeddings model.
- YAML
- JSON
config.yaml
models:
  - name: OpenAI Embeddings
    provider: openai
    model: text-embedding-3-large
    apiKey: <YOUR_OPENAI_API_KEY>
    roles:
      - embed
config.json
{
  "embeddingsProvider": {
    "provider": "openai",
    "model": "text-embedding-3-large",
    "apiKey": "<YOUR_OPENAI_API_KEY>"
  }
}
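For context, an embeddings model configured this way is called through OpenAI's embeddings endpoint. The sketch below (the sample input text is arbitrary) shows the equivalent call with the official OpenAI Python SDK:

```python
# Minimal sketch: the embeddings request behind the "embed" role, via the official
# `openai` Python SDK. text-embedding-3-large returns 3072-dimensional vectors by default.
from openai import OpenAI

client = OpenAI(api_key="<YOUR_OPENAI_API_KEY>")

result = client.embeddings.create(
    model="text-embedding-3-large",
    input=["How do I configure an embeddings model?"],
)
print(len(result.data[0].embedding))  # 3072
```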
Reranking model
OpenAI currently does not offer any reranking models.
Click here to see a list of reranking model providers.
OpenAI compatible servers / APIs
- OpenAI compatible servers
- OpenAI compatible APIs
If you are using an OpenAI-compatible server / API, you can change the `apiBase` like this:
- YAML
- JSON
config.yaml
models:
  - name: OpenAI-compatible server / API
    provider: openai
    model: MODEL_NAME
    apiBase: http://localhost:8000/v1
    apiKey: <YOUR_CUSTOM_API_KEY>
config.json
{
  "models": [
    {
      "title": "OpenAI-compatible server / API",
      "provider": "openai",
      "model": "MODEL_NAME",
      "apiKey": "<YOUR_CUSTOM_API_KEY>",
      "apiBase": "http://localhost:8000/v1"
    }
  ]
}
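Here, `apiBase` is the server's base URL, which corresponds to `base_url` in the official OpenAI Python SDK, so you can test a compatible server directly with a sketch like the one below (same placeholder URL, key, and model name as above):

```python
# Sketch: send a request straight to an OpenAI-compatible server by overriding
# base_url, mirroring what apiBase does in the Continue config. MODEL_NAME and the
# localhost URL are placeholders; many local servers accept any non-empty API key.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # same value as apiBase
    api_key="<YOUR_CUSTOM_API_KEY>",
)

response = client.chat.completions.create(
    model="MODEL_NAME",
    messages=[{"role": "user", "content": "ping"}],
)
print(response.choices[0].message.content)
```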
To force usage of the `chat/completions` endpoint instead of `completions`, you can set:
- YAML
- JSON
config.yaml
models:
  - name: OpenAI-compatible server / API
    provider: openai
    model: MODEL_NAME
    apiBase: http://localhost:8000/v1
    useLegacyCompletionsEndpoint: false
config.json
{
  "models": [
    {
      "title": "OpenAI-compatible server / API",
      "provider": "openai",
      "model": "MODEL_NAME",
      "apiBase": "http://localhost:8000/v1",
      "useLegacyCompletionsEndpoint": false
    }
  ]
}
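To make the distinction concrete, the sketch below (official OpenAI Python SDK, same placeholder server and model name as above) shows both request shapes: `chat/completions` takes a list of messages, while the legacy `completions` endpoint takes a raw prompt string. Whichever endpoint you force, the server must actually implement it.

```python
# Sketch of the two endpoints the flag chooses between, via the official `openai` SDK.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="<YOUR_CUSTOM_API_KEY>")

# chat/completions: structured messages (useLegacyCompletionsEndpoint: false)
chat = client.chat.completions.create(
    model="MODEL_NAME",
    messages=[{"role": "user", "content": "Write a haiku about autumn."}],
)
print(chat.choices[0].message.content)

# Legacy completions: a raw prompt string (useLegacyCompletionsEndpoint: true)
legacy = client.completions.create(
    model="MODEL_NAME",
    prompt="Write a haiku about autumn.",
)
print(legacy.choices[0].text)
```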