# Vercel AI SDK - OpenAI Provider
The [OpenAI](https://platform.openai.com/) provider for the [Vercel AI SDK](https://sdk.vercel.ai/docs) contains language model support for the OpenAI chat and completion APIs.
It creates language model objects that can be used with the `generateText`, `streamText`, `generateObject`, and `streamObject` AI functions.
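For example (the prompt below is purely illustrative), a model created by this provider can be passed to the `generateText` function from the `ai` package:
```ts
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';

// any of the AI functions listed above accept the model object the same way
const { text } = await generateText({
  model: openai('gpt-3.5-turbo'),
  prompt: 'Write a one-sentence summary of the Vercel AI SDK.',
});
```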
## Setup
The OpenAI provider is available in the `@ai-sdk/openai` module. You can install it with
```bash
npm i @ai-sdk/openai
```
## Provider Instance
You can import the default provider instance `openai` from `@ai-sdk/openai`:
```ts
import { openai } from '@ai-sdk/openai';
```
If you need a customized setup, you can import `createOpenAI` from `@ai-sdk/openai` and create a provider instance with your settings:
```ts
import { createOpenAI } from '@ai-sdk/openai';
const openai = createOpenAI({
// custom settings
});
```
You can use the following optional settings to customize the OpenAI provider instance (a combined example follows the list):
- **baseURL** _string_
Use a different URL prefix for API calls, e.g. to use proxy servers.
The default prefix is `https://api.openai.com/v1`.
- **apiKey** _string_
API key that is sent using the `Authorization` header.
It defaults to the `OPENAI_API_KEY` environment variable.
- **organization** _string_
OpenAI Organization.
- **project** _string_
OpenAI project.
- **headers** _Record<string,string>_
Custom headers to include in the requests.
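As a sketch, a customized provider instance might combine several of these settings (the values below are placeholders, not real endpoints or credentials):
```ts
import { createOpenAI } from '@ai-sdk/openai';

const openai = createOpenAI({
  baseURL: 'https://my-openai-proxy.example.com/v1', // placeholder proxy URL
  apiKey: process.env.MY_OPENAI_API_KEY, // placeholder variable; defaults to OPENAI_API_KEY
  organization: 'org-placeholder',
  project: 'proj-placeholder',
  headers: {
    'X-Example-Header': 'example-value', // placeholder custom header
  },
});
```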
## Models
The OpenAI provider instance is a function that you can invoke to create a model:
```ts
const model = openai('gpt-3.5-turbo');
```
It automatically selects the correct API based on the model id.
You can also pass additional settings in the second argument:
```ts
const model = openai('gpt-3.5-turbo', {
// additional settings
});
```
The available options depend on the API that's automatically chosen for the model (see below).
If you want to explicitly select a specific model API, you can use `.chat` or `.completion`.
### Chat Models
You can create models that call the [OpenAI chat API](https://platform.openai.com/docs/api-reference/chat) using the `.chat()` factory method.
The first argument is the model id, e.g. `gpt-4`.
The OpenAI chat models support tool calls and some have multi-modal capabilities.
```ts
const model = openai.chat('gpt-3.5-turbo');
```
OpenAI chat models also support some model-specific settings that are not part of the [standard call settings](/docs/ai-core/settings).
You can pass them as an options argument:
```ts
const model = openai.chat('gpt-3.5-turbo', {
logitBias: {
// optional likelihood for specific tokens
'50256': -100,
},
user: 'test-user', // optional unique user identifier
});
```
The following optional settings are available for OpenAI chat models (see the example after the list):
- **logitBias** _Record<number, number>_
Modifies the likelihood of specified tokens appearing in the completion.
Accepts a JSON object that maps tokens (specified by their token ID in
the GPT tokenizer) to an associated bias value from -100 to 100. You can
use the [OpenAI tokenizer](https://platform.openai.com/tokenizer) to convert text to token IDs.
Mathematically, the bias is added to the logits generated by the model prior
to sampling. The exact effect varies per model, but values between -1 and 1
should decrease or increase the likelihood of selection, while values like
-100 or 100 should result in a ban or exclusive selection of the relevant
token. For example, you can pass `{ "50256": -100 }` to prevent the
`<|endoftext|>` token from being generated.
- **logProbs** _boolean | number_
Return the log probabilities of the tokens. Including logprobs will increase
the response size and can slow down response times. However, it can
be useful to better understand how the model is behaving.
Setting to true will return the log probabilities of the tokens that
were generated.
Setting to a number will return the log probabilities of the top n
tokens that were generated.
- **user** _string_
A unique identifier representing your end-user, which can help OpenAI to
monitor and detect abuse.
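For instance (the setting values below are illustrative), a chat model configured this way can be passed to `streamText` from the `ai` package like any other model:
```ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

const result = await streamText({
  model: openai.chat('gpt-3.5-turbo', {
    logitBias: { '50256': -100 }, // discourage the <|endoftext|> token
    logProbs: 2, // return log probabilities of the top 2 tokens
    user: 'user-1234', // illustrative end-user identifier
  }),
  prompt: 'Explain log probabilities in one short paragraph.',
});

// stream the generated text to stdout as it arrives
for await (const textPart of result.textStream) {
  process.stdout.write(textPart);
}
```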
### Completion Models
You can create models that call the [OpenAI completions API](https://platform.openai.com/docs/api-reference/completions) using the `.completion()` factory method.
The first argument is the model id.
Currently only `gpt-3.5-turbo-instruct` is supported.
```ts
const model = openai.completion('gpt-3.5-turbo-instruct');
```
OpenAI completion models also support some model-specific settings that are not part of the [standard call settings](/docs/ai-core/settings).
You can pass them as an options argument:
```ts
const model = openai.completion('gpt-3.5-turbo-instruct', {
echo: true, // optional, echo the prompt in addition to the completion
logitBias: {
// optional likelihood for specific tokens
'50256': -100,
},
suffix: 'some text', // optional suffix that comes after a completion of inserted text
user: 'test-user', // optional unique user identifier
});
```
The following optional settings are available for OpenAI completion models (see the example after the list):
- **echo** _boolean_
Echo back the prompt in addition to the completion.
- **logitBias** _Record<number, number>_
Modifies the likelihood of specified tokens appearing in the completion.
Accepts a JSON object that maps tokens (specified by their token ID in
the GPT tokenizer) to an associated bias value from -100 to 100. You can
use the [OpenAI tokenizer](https://platform.openai.com/tokenizer) to convert text to token IDs.
Mathematically, the bias is added to the logits generated by the model prior
to sampling. The exact effect varies per model, but values between -1 and 1
should decrease or increase the likelihood of selection, while values like
-100 or 100 should result in a ban or exclusive selection of the relevant
token. For example, you can pass `{ "50256": -100 }` to prevent the
`<|endoftext|>` token from being generated.
- **logProbs** _boolean | number_
Return the log probabilities of the tokens. Including logprobs will increase
the response size and can slow down response times. However, it can
be useful to better understand how the model is behaving.
Setting to true will return the log probabilities of the tokens that
were generated.
Setting to a number will return the log probabilities of the top n
tokens that were generated.
- **suffix** _string_
The suffix that comes after a completion of inserted text.
- **user** _string_
A unique identifier representing your end-user, which can help OpenAI to
monitor and detect abuse.
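As a final sketch (the setting values are placeholders), a completion model with these settings can be used with `generateText` in the same way:
```ts
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';

const { text } = await generateText({
  model: openai.completion('gpt-3.5-turbo-instruct', {
    echo: false, // do not repeat the prompt in the output
    logProbs: true, // include log probabilities of the generated tokens
    user: 'user-1234', // illustrative end-user identifier
  }),
  prompt: 'The quick brown fox',
});

console.log(text);
```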