# OpenAI Python API library
[![PyPI version](https://img.shields.io/pypi/v/openai.svg)](https://pypi.org/project/openai/)
The OpenAI Python library provides convenient access to the OpenAI REST API from any Python 3.7+
application. The library includes type definitions for all request params and response fields,
and offers both synchronous and asynchronous clients powered by [httpx](https://github.com/encode/httpx).
It is generated from our [OpenAPI specification](https://github.com/openai/openai-openapi) with [Stainless](https://stainlessapi.com/).
## Documentation
The REST API documentation can be found [on platform.openai.com](https://platform.openai.com/docs). The full API of this library can be found in [api.md](api.md).
## Installation
> [!IMPORTANT]
> The SDK was rewritten in v1, which was released November 6th 2023. See the [v1 migration guide](https://github.com/openai/openai-python/discussions/742), which includes scripts to automatically update your code.
```sh
# install from PyPI
pip install openai
```
## Usage
The full API of this library can be found in [api.md](api.md).
```python
import os
from openai import OpenAI
client = OpenAI(
    # This is the default and can be omitted
    api_key=os.environ.get("OPENAI_API_KEY"),
)

chat_completion = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "Say this is a test",
        }
    ],
    model="gpt-3.5-turbo",
)
```
While you can provide an `api_key` keyword argument,
we recommend using [python-dotenv](https://pypi.org/project/python-dotenv/)
to add `OPENAI_API_KEY="My API Key"` to your `.env` file
so that your API Key is not stored in source control.
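If you rely on the environment variable, a small guard can make a missing key fail fast with a clear message. This is an illustrative helper, not part of the SDK:

```python
import os


def require_api_key() -> str:
    """Return OPENAI_API_KEY from the environment, failing fast if unset."""
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError(
            "OPENAI_API_KEY is not set; export it or add it to your .env file"
        )
    return key
```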
### Polling Helpers
When interacting with the API, some actions, such as starting a Run, may take time to complete. The SDK includes
helper functions that poll the status until it reaches a terminal state and then return the resulting object.
If an API method results in an action that could benefit from polling, there will be a corresponding version of the
method ending in `_and_poll`.
For instance, to create a Run and poll until it reaches a terminal state, you can run:
```python
run = client.beta.threads.runs.create_and_poll(
    thread_id=thread.id,
    assistant_id=assistant.id,
)
```
More information on the lifecycle of a Run can be found in the [Run Lifecycle Documentation](https://platform.openai.com/docs/assistants/how-it-works/run-lifecycle).
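Conceptually, a `*_and_poll` helper re-fetches the object until its status is terminal. A minimal sketch of that loop in plain Python (the status names and the `fetch` callable are illustrative, not the SDK's internals):

```python
import time

# Illustrative set of terminal Run statuses.
TERMINAL_STATUSES = {"completed", "failed", "cancelled", "expired"}


def poll_until_terminal(fetch, interval: float = 1.0) -> dict:
    """Call `fetch()` until the returned object's status is terminal."""
    while True:
        obj = fetch()
        if obj["status"] in TERMINAL_STATUSES:
            return obj
        time.sleep(interval)  # wait before polling again
```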
### Streaming Helpers
The SDK also includes helpers to process streams and handle the incoming events.
```python
with client.beta.threads.runs.stream(
    thread_id=thread.id,
    assistant_id=assistant.id,
    instructions="Please address the user as Jane Doe. The user has a premium account.",
) as stream:
    for event in stream:
        # Print the text from text delta events
        if event.type == "thread.message.delta" and event.data.delta.content:
            print(event.data.delta.content[0].text)
```
More information on streaming helpers can be found in the dedicated documentation: [helpers.md](helpers.md).
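For intuition, filtering a stream of typed events down to their text deltas looks like this in plain Python (the `Event` class here is a simplified stand-in for the SDK's event objects):

```python
from dataclasses import dataclass


@dataclass
class Event:
    """Simplified stand-in for an SDK stream event."""
    type: str
    text: str = ""


events = [
    Event("thread.run.created"),
    Event("thread.message.delta", "Hel"),
    Event("thread.message.delta", "lo"),
    Event("thread.run.completed"),
]

# Keep only the text carried by delta events, in order.
text = "".join(e.text for e in events if e.type == "thread.message.delta")
print(text)  # -> Hello
```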
## Async usage
Simply import `AsyncOpenAI` instead of `OpenAI` and use `await` with each API call:
```python
import os
import asyncio
from openai import AsyncOpenAI
client = AsyncOpenAI(
    # This is the default and can be omitted
    api_key=os.environ.get("OPENAI_API_KEY"),
)


async def main() -> None:
    chat_completion = await client.chat.completions.create(
        messages=[
            {
                "role": "user",
                "content": "Say this is a test",
            }
        ],
        model="gpt-3.5-turbo",
    )


asyncio.run(main())
```
Functionality between the synchronous and asynchronous clients is otherwise identical.
## Streaming responses
We provide support for streaming responses using Server-Sent Events (SSE).
```python
from openai import OpenAI
client = OpenAI()
stream = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Say this is a test"}],
    stream=True,
)
for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="")
```
The async client uses the exact same interface.
```python
import asyncio

from openai import AsyncOpenAI

client = AsyncOpenAI()


async def main():
    stream = await client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Say this is a test"}],
        stream=True,
    )
    async for chunk in stream:
        print(chunk.choices[0].delta.content or "", end="")


asyncio.run(main())
```
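When streaming, each chunk carries only an incremental delta; to reconstruct the full message you concatenate them, guarding against chunks that carry no text (which is why the loops above use `or ""`). Sketched here with simulated delta values:

```python
# Simulated values of `chunk.choices[0].delta.content`: some chunks
# (e.g. the final one) carry no text, so the delta content is None.
deltas = ["This", " is", " a", " test", None]

full_text = ""
for content in deltas:
    full_text += content or ""  # treat None as empty
print(full_text)  # -> This is a test
```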
## Module-level client
> [!IMPORTANT]
> We highly recommend instantiating client instances instead of relying on the global client.
We also expose a global client instance that is accessible in a similar fashion to versions prior to v1.
```py
import openai
# optional; defaults to `os.environ['OPENAI_API_KEY']`
openai.api_key = '...'
# all client options can be configured just like the `OpenAI` instantiation counterpart
openai.base_url = "https://..."
openai.default_headers = {"x-foo": "true"}
completion = openai.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "user",
            "content": "How do I output all files in a directory using Python?",
        },
    ],
)
print(completion.choices[0].message.content)
```
The API is exactly the same as the standard client instance-based API.
This is intended to be used within REPLs or notebooks for faster iteration, **not** in application code.
We recommend that you always instantiate a client (e.g., with `client = OpenAI()`) in application code because:
- It can be difficult to reason about where client options are configured
- It's not possible to change certain client options without potentially causing race conditions
- It's harder to mock for testing purposes
- It's not possible to control cleanup of network connections
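The testability point is easiest to see with dependency injection: code that takes a client instance can be handed a `unittest.mock.MagicMock` in tests, with no network access. An illustrative sketch (the `Summarizer` class is hypothetical, not part of the SDK):

```python
from unittest.mock import MagicMock


class Summarizer:
    """Hypothetical application code that receives a client instance."""

    def __init__(self, client):
        self.client = client

    def summarize(self, text: str) -> str:
        response = self.client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": f"Summarize: {text}"}],
        )
        return response.choices[0].message.content


# In a test, stub out the API entirely:
fake = MagicMock()
fake.chat.completions.create.return_value.choices[0].message.content = "a summary"
print(Summarizer(fake).summarize("some long text"))  # -> a summary
```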
## Using types
Nested request parameters are [TypedDicts](https://docs.python.org/3/library/typing.html#typing.TypedDict). Responses are [Pydantic models](https://docs.pydantic.dev) which also provide helper methods for things like:
- Serializing back into JSON, `model.to_json()`
- Converting to a dictionary, `model.to_dict()`
Typed requests and responses provide autocomplete and documentation within your editor. If you would like to see type errors in VS Code to help catch bugs earlier, set `python.analysis.typeCheckingMode` to `basic`.
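As a rough analogy using only the standard library (the SDK's response models are Pydantic models, not dataclasses), the dict/JSON round-trip looks like:

```python
import json
from dataclasses import asdict, dataclass


@dataclass
class ChatMessage:
    """Stand-in for a typed response model."""
    role: str
    content: str


msg = ChatMessage(role="assistant", content="Hi!")
as_dict = asdict(msg)          # analogous to model.to_dict()
as_json = json.dumps(as_dict)  # analogous to model.to_json()
print(as_json)  # -> {"role": "assistant", "content": "Hi!"}
```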
## Pagination
List methods in the OpenAI API are paginated.
This library provides auto-paginating iterators with each list response, so you do not have to request successive pages manually:
```python
from openai import OpenAI

client = OpenAI()

all_jobs = []
# Automatically fetches more pages as needed.
for job in client.fine_tuning.jobs.list(
    limit=20,
):
    # Do something with job here
    all_jobs.append(job)
print(all_jobs)
```
Or, asynchronously:
```python
import asyncio

from openai import AsyncOpenAI

client = AsyncOpenAI()


async def main() -> None:
    all_jobs = []
    # Iterate through items across all pages, issuing requests as needed.
    async for job in client.fine_tuning.jobs.list(
        limit=20,
    ):
        all_jobs.append(job)
    print(all_jobs)


asyncio.run(main())
```
Alternatively, you can use the `.has_next_page()`, `.next_page_info()`, or `.get_next_page()` methods for more granular control working with pages:
```python
first_page = await client.fine_tuning.jobs.list(
    limit=20,
)
if first_page.has_next_page():
    print(f"will fetch next page using these details: {first_page.next_page_info()}")
    next_page = await first_page.get_next_page()
    print(f"number of items we just fetched: {len(next_page.data)}")

# Remove `await` for non-async usage.
```
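For intuition, cursor-based auto-pagination can be sketched in a few lines of plain Python (the fake pages and fetch function are illustrative, not the SDK's internals):

```python
from typing import Iterator, Optional

# Fake cursor-keyed pages: (items, next_cursor); a None key means the first page.
FAKE_PAGES = {
    None: (["job-1", "job-2"], "cursor-a"),
    "cursor-a": (["job-3"], None),  # last page: no next cursor
}


def fetch_page(cursor: Optional[str]):
    """Stand-in for one paginated list request."""
    return FAKE_PAGES[cursor]


def iter_all_jobs() -> Iterator[str]:
    """Yield items from every page, fetching pages lazily."""
    cursor = None
    while True:
        items, cursor = fetch_page(cursor)
        yield from items
        if cursor is None:  # no further pages to fetch
            return


print(list(iter_all_jobs()))  # -> ['job-1', 'job-2', 'job-3']
```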
Or just work directly with the returned data:
```python
first_page = await client.fine_tuning.jobs.list(
    limit=20,
)
print(f"next page cursor: {first_page.after}")  # => "next page cursor: ..."
for job in first_page.data:
    print(job.id)
```