%global _empty_manifest_terminate_build 0
Name: python-openai
Version: 0.27.4
Release: 1
Summary: Python client library for the OpenAI API
License: MIT License
URL: https://github.com/openai/openai-python
Source0: https://mirrors.nju.edu.cn/pypi/web/packages/d6/f0/e80cef4ff77f100ebb70b51a9a43a3dcd989a938a7c6ba299d987fd867e9/openai-0.27.4.tar.gz
BuildArch: noarch

Requires: python3-requests
Requires: python3-tqdm
Requires: python3-aiohttp
Requires: python3-typing-extensions
Requires: python3-numpy
Requires: python3-pandas
Requires: python3-pandas-stubs
Requires: python3-openpyxl
Requires: python3-black
Requires: python3-pytest
Requires: python3-pytest-asyncio
Requires: python3-pytest-mock
Requires: python3-scikit-learn
Requires: python3-tenacity
Requires: python3-matplotlib
Requires: python3-plotly
Requires: python3-scipy
Requires: python3-wandb

%description
# OpenAI Python Library

The OpenAI Python library provides convenient access to the OpenAI API from applications written in the Python language. It includes a pre-defined set of classes for API resources that initialize themselves dynamically from API responses, which makes it compatible with a wide range of versions of the OpenAI API.

You can find usage examples for the OpenAI Python library in our [API reference](https://beta.openai.com/docs/api-reference?lang=python) and the [OpenAI Cookbook](https://github.com/openai/openai-cookbook/).

## Installation

You don't need this source code unless you want to modify the package. If you just want to use the package, run:

```sh
pip install --upgrade openai
```

Install from source with:

```sh
python setup.py install
```

### Optional dependencies

Install dependencies for [`openai.embeddings_utils`](openai/embeddings_utils.py):

```sh
pip install openai[embeddings]
```

Install support for [Weights & Biases](https://wandb.me/openai-docs):

```
pip install openai[wandb]
```

Data libraries like `numpy` and `pandas` are not installed by default due to their size. They’re needed for some functionality of this library, but generally not for talking to the API. If you encounter a `MissingDependencyError`, install them with:

```sh
pip install openai[datalib]
```

## Usage

The library needs to be configured with your account's secret key, which is available on the [website](https://platform.openai.com/account/api-keys). Either set it as the `OPENAI_API_KEY` environment variable before using the library:

```bash
export OPENAI_API_KEY='sk-...'
```

Or set `openai.api_key` to its value:

```python
import openai
openai.api_key = "sk-..."

# list models
models = openai.Model.list()

# print the first model's id
print(models.data[0].id)

# create a completion
completion = openai.Completion.create(model="ada", prompt="Hello world")

# print the completion
print(completion.choices[0].text)
```

### Params

All endpoints have a `.create` method that supports a `request_timeout` param. This param takes a `Union[float, Tuple[float, float]]` and will raise an `openai.error.Timeout` error if the request exceeds that time in seconds (see https://requests.readthedocs.io/en/latest/user/quickstart/#timeouts).
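As a quick illustration (the model name, prompt, and 10-second limit below are placeholders), a per-request timeout might be passed like this, with an overrun surfacing as `openai.error.Timeout`:

```python
import openai
openai.api_key = "sk-..."  # supply your API key however you choose

try:
    # give up if the request takes longer than 10 seconds
    completion = openai.Completion.create(
        model="ada",
        prompt="Hello world",
        request_timeout=10,
    )
    print(completion.choices[0].text)
except openai.error.Timeout:
    print("request timed out")
```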
### Microsoft Azure Endpoints

To use the library with Microsoft Azure endpoints, you need to set the `api_type`, `api_base` and `api_version` in addition to the `api_key`. The `api_type` must be set to 'azure', and the others correspond to the properties of your endpoint. In addition, the deployment name must be passed as the engine parameter.

```python
import openai
openai.api_type = "azure"
openai.api_key = "..."
openai.api_base = "https://example-endpoint.openai.azure.com"
openai.api_version = "2023-03-15-preview"

# create a completion
completion = openai.Completion.create(deployment_id="deployment-name", prompt="Hello world")

# print the completion
print(completion.choices[0].text)
```

Please note that for the moment, the Microsoft Azure endpoints can only be used for completion, embedding, and fine-tuning operations. For a detailed example of how to use fine-tuning and other operations using Azure endpoints, please check out the following Jupyter notebooks:

* [Using Azure completions](https://github.com/openai/openai-cookbook/tree/main/examples/azure/completions.ipynb)
* [Using Azure fine-tuning](https://github.com/openai/openai-cookbook/tree/main/examples/azure/finetuning.ipynb)
* [Using Azure embeddings](https://github.com/openai/openai-cookbook/blob/main/examples/azure/embeddings.ipynb)

### Microsoft Azure Active Directory Authentication

To use Microsoft Azure Active Directory to authenticate to your Azure endpoint, set the `api_type` to "azure_ad" and pass the acquired credential token to `api_key`. The rest of the parameters need to be set as specified in the previous section.

```python
from azure.identity import DefaultAzureCredential
import openai

# Request credential
default_credential = DefaultAzureCredential()
token = default_credential.get_token("https://cognitiveservices.azure.com/.default")

# Setup parameters
openai.api_type = "azure_ad"
openai.api_key = token.token
openai.api_base = "https://example-endpoint.openai.azure.com/"
openai.api_version = "2023-03-15-preview"

# ...
```

### Command-line interface

This library additionally provides an `openai` command-line utility which makes it easy to interact with the API from your terminal. Run `openai api -h` for usage.

```sh
# list models
openai api models.list

# create a completion
openai api completions.create -m ada -p "Hello world"

# create a chat completion
openai api chat_completions.create -m gpt-3.5-turbo -g user "Hello world"

# generate images via DALL·E API
openai api image.create -p "two dogs playing chess, cartoon" -n 1
```

## Example code

Examples of how to use this Python library to accomplish various tasks can be found in the [OpenAI Cookbook](https://github.com/openai/openai-cookbook/). It contains code examples for:

* Classification using fine-tuning
* Clustering
* Code search
* Customizing embeddings
* Question answering from a corpus of documents
* Recommendations
* Visualization of embeddings
* And more

Prior to July 2022, this OpenAI Python library hosted code examples in its examples folder, but since then all examples have been migrated to the [OpenAI Cookbook](https://github.com/openai/openai-cookbook/).

### Chat

Conversational models such as `gpt-3.5-turbo` can be called using the chat completions endpoint.

```python
import openai
openai.api_key = "sk-..."  # supply your API key however you choose

completion = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "Hello world!"}])
print(completion.choices[0].message.content)
```
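To carry a conversation forward, append the assistant's reply to the `messages` list before the next call. A minimal sketch (the follow-up question is only an illustration):

```python
import openai
openai.api_key = "sk-..."  # supply your API key however you choose

messages = [{"role": "user", "content": "Hello world!"}]
completion = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)

# keep the assistant's answer in the history, then ask a follow-up
messages.append({"role": "assistant", "content": completion.choices[0].message.content})
messages.append({"role": "user", "content": "Can you say that in French?"})

follow_up = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
print(follow_up.choices[0].message.content)
```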
### Embeddings

In the OpenAI Python library, an embedding represents a text string as a fixed-length vector of floating point numbers. Embeddings are designed to measure the similarity or relevance between text strings.

To get an embedding for a text string, you can use the embeddings method as follows in Python:

```python
import openai
openai.api_key = "sk-..."  # supply your API key however you choose

# choose text to embed
text_string = "sample text"

# choose an embedding
model_id = "text-similarity-davinci-001"

# compute the embedding of the text
embedding = openai.Embedding.create(input=text_string, model=model_id)['data'][0]['embedding']
```

An example of how to call the embeddings method is shown in this [get embeddings notebook](https://github.com/openai/openai-cookbook/blob/main/examples/Get_embeddings.ipynb).

Examples of how to use embeddings are shared in the following Jupyter notebooks:

- [Classification using embeddings](https://github.com/openai/openai-cookbook/blob/main/examples/Classification_using_embeddings.ipynb)
- [Clustering using embeddings](https://github.com/openai/openai-cookbook/blob/main/examples/Clustering.ipynb)
- [Code search using embeddings](https://github.com/openai/openai-cookbook/blob/main/examples/Code_search.ipynb)
- [Semantic text search using embeddings](https://github.com/openai/openai-cookbook/blob/main/examples/Semantic_text_search_using_embeddings.ipynb)
- [User and product embeddings](https://github.com/openai/openai-cookbook/blob/main/examples/User_and_product_embeddings.ipynb)
- [Zero-shot classification using embeddings](https://github.com/openai/openai-cookbook/blob/main/examples/Zero-shot_classification_with_embeddings.ipynb)
- [Recommendation using embeddings](https://github.com/openai/openai-cookbook/blob/main/examples/Recommendation_using_embeddings.ipynb)

For more information on embeddings and the types of embeddings OpenAI offers, read the [embeddings guide](https://beta.openai.com/docs/guides/embeddings) in the OpenAI documentation.

### Fine-tuning

Fine-tuning a model on training data can both improve the results (by giving the model more examples to learn from) and reduce the cost/latency of API calls (chiefly through reducing the need to include training examples in prompts).

Examples of fine-tuning are shared in the following Jupyter notebooks:

- [Classification with fine-tuning](https://github.com/openai/openai-cookbook/blob/main/examples/Fine-tuned_classification.ipynb) (a simple notebook that shows the steps required for fine-tuning)
- Fine-tuning a model that answers questions about the 2020 Olympics
  - [Step 1: Collecting data](https://github.com/openai/openai-cookbook/blob/main/examples/fine-tuned_qa/olympics-1-collect-data.ipynb)
  - [Step 2: Creating a synthetic Q&A dataset](https://github.com/openai/openai-cookbook/blob/main/examples/fine-tuned_qa/olympics-2-create-qa.ipynb)
  - [Step 3: Train a fine-tuning model specialized for Q&A](https://github.com/openai/openai-cookbook/blob/main/examples/fine-tuned_qa/olympics-3-train-qa.ipynb)

Sync your fine-tunes to [Weights & Biases](https://wandb.me/openai-docs) to track experiments, models, and datasets in your central dashboard with:

```bash
openai wandb sync
```

For more information on fine-tuning, read the [fine-tuning guide](https://beta.openai.com/docs/guides/fine-tuning) in the OpenAI documentation.

### Moderation

OpenAI provides a Moderation endpoint that can be used to check whether content complies with the OpenAI [content policy](https://platform.openai.com/docs/usage-policies).

```python
import openai
openai.api_key = "sk-..."  # supply your API key however you choose

moderation_resp = openai.Moderation.create(input="Here is some perfectly innocuous text that follows all OpenAI content policies.")
```
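As a rough follow-up to the snippet above, the response can be inspected for the overall verdict and the per-category breakdown:

```python
# continuing from moderation_resp above
result = moderation_resp["results"][0]

# overall verdict plus per-category flags and scores
print("flagged:", result["flagged"])
print("categories:", result["categories"])
print("category scores:", result["category_scores"])
```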
See the [moderation guide](https://platform.openai.com/docs/guides/moderation) for more details.

## Image generation (DALL·E)

```python
import openai
openai.api_key = "sk-..."  # supply your API key however you choose

image_resp = openai.Image.create(prompt="two dogs playing chess, oil painting", n=4, size="512x512")
```

## Audio transcription (Whisper)

```python
import openai
openai.api_key = "sk-..."  # supply your API key however you choose

f = open("path/to/file.mp3", "rb")
transcript = openai.Audio.transcribe("whisper-1", f)
```

## Async API

Async support is available in the API by prepending `a` to a network-bound method:

```python
import openai
openai.api_key = "sk-..."  # supply your API key however you choose

async def create_completion():
    completion_resp = await openai.Completion.acreate(prompt="This is a test", model="davinci")
```
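A coroutine like the one above still has to be driven by an event loop; a minimal sketch using the standard library:

```python
import asyncio

import openai

openai.api_key = "sk-..."  # supply your API key however you choose

async def create_completion():
    completion_resp = await openai.Completion.acreate(prompt="This is a test", model="davinci")
    return completion_resp.choices[0].text

# run the coroutine to completion from synchronous code
print(asyncio.run(create_completion()))
```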
To make async requests more efficient, you can pass in your own `aiohttp.ClientSession`, but you must manually close the client session at the end of your program/event loop:

```python
import openai
from aiohttp import ClientSession

openai.aiosession.set(ClientSession())

# At the end of your program, close the http session
await openai.aiosession.get().close()
```

See the [usage guide](https://platform.openai.com/docs/guides/images) for more details.

## Requirements

- Python 3.7.1+

In general, we want to support the versions of Python that our customers are using. If you run into problems with any version issues, please let us know on our [support page](https://help.openai.com/en/).

## Credit

This library is forked from the [Stripe Python Library](https://github.com/stripe/stripe-python).

%package -n python3-openai
Summary: Python client library for the OpenAI API
Provides: python-openai
BuildRequires: python3-devel
BuildRequires: python3-setuptools
BuildRequires: python3-pip

%description -n python3-openai
# OpenAI Python Library

The OpenAI Python library provides convenient access to the OpenAI API from applications written in the Python language. It includes a pre-defined set of classes for API resources that initialize themselves dynamically from API responses, which makes it compatible with a wide range of versions of the OpenAI API.

You can find usage examples for the OpenAI Python library in our [API reference](https://beta.openai.com/docs/api-reference?lang=python) and the [OpenAI Cookbook](https://github.com/openai/openai-cookbook/).

## Installation

You don't need this source code unless you want to modify the package. If you just want to use the package, run:

```sh
pip install --upgrade openai
```

Install from source with:

```sh
python setup.py install
```

### Optional dependencies

Install dependencies for [`openai.embeddings_utils`](openai/embeddings_utils.py):

```sh
pip install openai[embeddings]
```

Install support for [Weights & Biases](https://wandb.me/openai-docs):

```
pip install openai[wandb]
```

Data libraries like `numpy` and `pandas` are not installed by default due to their size. They’re needed for some functionality of this library, but generally not for talking to the API. If you encounter a `MissingDependencyError`, install them with:

```sh
pip install openai[datalib]
```

## Usage

The library needs to be configured with your account's secret key, which is available on the [website](https://platform.openai.com/account/api-keys). Either set it as the `OPENAI_API_KEY` environment variable before using the library:

```bash
export OPENAI_API_KEY='sk-...'
```

Or set `openai.api_key` to its value:

```python
import openai
openai.api_key = "sk-..."

# list models
models = openai.Model.list()

# print the first model's id
print(models.data[0].id)

# create a completion
completion = openai.Completion.create(model="ada", prompt="Hello world")

# print the completion
print(completion.choices[0].text)
```

### Params

All endpoints have a `.create` method that supports a `request_timeout` param. This param takes a `Union[float, Tuple[float, float]]` and will raise an `openai.error.Timeout` error if the request exceeds that time in seconds (see https://requests.readthedocs.io/en/latest/user/quickstart/#timeouts).

### Microsoft Azure Endpoints

To use the library with Microsoft Azure endpoints, you need to set the `api_type`, `api_base` and `api_version` in addition to the `api_key`. The `api_type` must be set to 'azure', and the others correspond to the properties of your endpoint. In addition, the deployment name must be passed as the engine parameter.

```python
import openai
openai.api_type = "azure"
openai.api_key = "..."
openai.api_base = "https://example-endpoint.openai.azure.com"
openai.api_version = "2023-03-15-preview"

# create a completion
completion = openai.Completion.create(deployment_id="deployment-name", prompt="Hello world")

# print the completion
print(completion.choices[0].text)
```

Please note that for the moment, the Microsoft Azure endpoints can only be used for completion, embedding, and fine-tuning operations. For a detailed example of how to use fine-tuning and other operations using Azure endpoints, please check out the following Jupyter notebooks:

* [Using Azure completions](https://github.com/openai/openai-cookbook/tree/main/examples/azure/completions.ipynb)
* [Using Azure fine-tuning](https://github.com/openai/openai-cookbook/tree/main/examples/azure/finetuning.ipynb)
* [Using Azure embeddings](https://github.com/openai/openai-cookbook/blob/main/examples/azure/embeddings.ipynb)

### Microsoft Azure Active Directory Authentication

To use Microsoft Azure Active Directory to authenticate to your Azure endpoint, set the `api_type` to "azure_ad" and pass the acquired credential token to `api_key`. The rest of the parameters need to be set as specified in the previous section.

```python
from azure.identity import DefaultAzureCredential
import openai

# Request credential
default_credential = DefaultAzureCredential()
token = default_credential.get_token("https://cognitiveservices.azure.com/.default")

# Setup parameters
openai.api_type = "azure_ad"
openai.api_key = token.token
openai.api_base = "https://example-endpoint.openai.azure.com/"
openai.api_version = "2023-03-15-preview"

# ...
```

### Command-line interface

This library additionally provides an `openai` command-line utility which makes it easy to interact with the API from your terminal. Run `openai api -h` for usage.

```sh
# list models
openai api models.list

# create a completion
openai api completions.create -m ada -p "Hello world"

# create a chat completion
openai api chat_completions.create -m gpt-3.5-turbo -g user "Hello world"

# generate images via DALL·E API
openai api image.create -p "two dogs playing chess, cartoon" -n 1
```

## Example code

Examples of how to use this Python library to accomplish various tasks can be found in the [OpenAI Cookbook](https://github.com/openai/openai-cookbook/).
It contains code examples for:

* Classification using fine-tuning
* Clustering
* Code search
* Customizing embeddings
* Question answering from a corpus of documents
* Recommendations
* Visualization of embeddings
* And more

Prior to July 2022, this OpenAI Python library hosted code examples in its examples folder, but since then all examples have been migrated to the [OpenAI Cookbook](https://github.com/openai/openai-cookbook/).

### Chat

Conversational models such as `gpt-3.5-turbo` can be called using the chat completions endpoint.

```python
import openai
openai.api_key = "sk-..."  # supply your API key however you choose

completion = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "Hello world!"}])
print(completion.choices[0].message.content)
```

### Embeddings

In the OpenAI Python library, an embedding represents a text string as a fixed-length vector of floating point numbers. Embeddings are designed to measure the similarity or relevance between text strings.

To get an embedding for a text string, you can use the embeddings method as follows in Python:

```python
import openai
openai.api_key = "sk-..."  # supply your API key however you choose

# choose text to embed
text_string = "sample text"

# choose an embedding
model_id = "text-similarity-davinci-001"

# compute the embedding of the text
embedding = openai.Embedding.create(input=text_string, model=model_id)['data'][0]['embedding']
```

An example of how to call the embeddings method is shown in this [get embeddings notebook](https://github.com/openai/openai-cookbook/blob/main/examples/Get_embeddings.ipynb).

Examples of how to use embeddings are shared in the following Jupyter notebooks:

- [Classification using embeddings](https://github.com/openai/openai-cookbook/blob/main/examples/Classification_using_embeddings.ipynb)
- [Clustering using embeddings](https://github.com/openai/openai-cookbook/blob/main/examples/Clustering.ipynb)
- [Code search using embeddings](https://github.com/openai/openai-cookbook/blob/main/examples/Code_search.ipynb)
- [Semantic text search using embeddings](https://github.com/openai/openai-cookbook/blob/main/examples/Semantic_text_search_using_embeddings.ipynb)
- [User and product embeddings](https://github.com/openai/openai-cookbook/blob/main/examples/User_and_product_embeddings.ipynb)
- [Zero-shot classification using embeddings](https://github.com/openai/openai-cookbook/blob/main/examples/Zero-shot_classification_with_embeddings.ipynb)
- [Recommendation using embeddings](https://github.com/openai/openai-cookbook/blob/main/examples/Recommendation_using_embeddings.ipynb)

For more information on embeddings and the types of embeddings OpenAI offers, read the [embeddings guide](https://beta.openai.com/docs/guides/embeddings) in the OpenAI documentation.

### Fine-tuning

Fine-tuning a model on training data can both improve the results (by giving the model more examples to learn from) and reduce the cost/latency of API calls (chiefly through reducing the need to include training examples in prompts).
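As a rough sketch of the workflow (the file name and the `ada` base model are placeholders), a fine-tune can be started from Python by uploading a prepared JSONL file and creating a job:

```python
import openai
openai.api_key = "sk-..."  # supply your API key however you choose

# upload a prepared JSONL file of prompt/completion pairs
# ("mydata.jsonl" is a placeholder for your own training file)
training_file = openai.File.create(file=open("mydata.jsonl", "rb"), purpose="fine-tune")

# start a fine-tune job on a base model with the uploaded file
fine_tune = openai.FineTune.create(training_file=training_file.id, model="ada")
print(fine_tune.id, fine_tune.status)
```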
Examples of fine-tuning are shared in the following Jupyter notebooks:

- [Classification with fine-tuning](https://github.com/openai/openai-cookbook/blob/main/examples/Fine-tuned_classification.ipynb) (a simple notebook that shows the steps required for fine-tuning)
- Fine-tuning a model that answers questions about the 2020 Olympics
  - [Step 1: Collecting data](https://github.com/openai/openai-cookbook/blob/main/examples/fine-tuned_qa/olympics-1-collect-data.ipynb)
  - [Step 2: Creating a synthetic Q&A dataset](https://github.com/openai/openai-cookbook/blob/main/examples/fine-tuned_qa/olympics-2-create-qa.ipynb)
  - [Step 3: Train a fine-tuning model specialized for Q&A](https://github.com/openai/openai-cookbook/blob/main/examples/fine-tuned_qa/olympics-3-train-qa.ipynb)

Sync your fine-tunes to [Weights & Biases](https://wandb.me/openai-docs) to track experiments, models, and datasets in your central dashboard with:

```bash
openai wandb sync
```

For more information on fine-tuning, read the [fine-tuning guide](https://beta.openai.com/docs/guides/fine-tuning) in the OpenAI documentation.

### Moderation

OpenAI provides a Moderation endpoint that can be used to check whether content complies with the OpenAI [content policy](https://platform.openai.com/docs/usage-policies).

```python
import openai
openai.api_key = "sk-..."  # supply your API key however you choose

moderation_resp = openai.Moderation.create(input="Here is some perfectly innocuous text that follows all OpenAI content policies.")
```

See the [moderation guide](https://platform.openai.com/docs/guides/moderation) for more details.

## Image generation (DALL·E)

```python
import openai
openai.api_key = "sk-..."  # supply your API key however you choose

image_resp = openai.Image.create(prompt="two dogs playing chess, oil painting", n=4, size="512x512")
```

## Audio transcription (Whisper)

```python
import openai
openai.api_key = "sk-..."  # supply your API key however you choose

f = open("path/to/file.mp3", "rb")
transcript = openai.Audio.transcribe("whisper-1", f)
```

## Async API

Async support is available in the API by prepending `a` to a network-bound method:

```python
import openai
openai.api_key = "sk-..."  # supply your API key however you choose

async def create_completion():
    completion_resp = await openai.Completion.acreate(prompt="This is a test", model="davinci")
```

To make async requests more efficient, you can pass in your own `aiohttp.ClientSession`, but you must manually close the client session at the end of your program/event loop:

```python
import openai
from aiohttp import ClientSession

openai.aiosession.set(ClientSession())

# At the end of your program, close the http session
await openai.aiosession.get().close()
```

See the [usage guide](https://platform.openai.com/docs/guides/images) for more details.

## Requirements

- Python 3.7.1+

In general, we want to support the versions of Python that our customers are using. If you run into problems with any version issues, please let us know on our [support page](https://help.openai.com/en/).

## Credit

This library is forked from the [Stripe Python Library](https://github.com/stripe/stripe-python).

%package help
Summary: Development documents and examples for openai
Provides: python3-openai-doc

%description help
# OpenAI Python Library

The OpenAI Python library provides convenient access to the OpenAI API from applications written in the Python language.
It includes a pre-defined set of classes for API resources that initialize themselves dynamically from API responses, which makes it compatible with a wide range of versions of the OpenAI API.

You can find usage examples for the OpenAI Python library in our [API reference](https://beta.openai.com/docs/api-reference?lang=python) and the [OpenAI Cookbook](https://github.com/openai/openai-cookbook/).

## Installation

You don't need this source code unless you want to modify the package. If you just want to use the package, run:

```sh
pip install --upgrade openai
```

Install from source with:

```sh
python setup.py install
```

### Optional dependencies

Install dependencies for [`openai.embeddings_utils`](openai/embeddings_utils.py):

```sh
pip install openai[embeddings]
```

Install support for [Weights & Biases](https://wandb.me/openai-docs):

```
pip install openai[wandb]
```

Data libraries like `numpy` and `pandas` are not installed by default due to their size. They’re needed for some functionality of this library, but generally not for talking to the API. If you encounter a `MissingDependencyError`, install them with:

```sh
pip install openai[datalib]
```

## Usage

The library needs to be configured with your account's secret key, which is available on the [website](https://platform.openai.com/account/api-keys). Either set it as the `OPENAI_API_KEY` environment variable before using the library:

```bash
export OPENAI_API_KEY='sk-...'
```

Or set `openai.api_key` to its value:

```python
import openai
openai.api_key = "sk-..."

# list models
models = openai.Model.list()

# print the first model's id
print(models.data[0].id)

# create a completion
completion = openai.Completion.create(model="ada", prompt="Hello world")

# print the completion
print(completion.choices[0].text)
```

### Params

All endpoints have a `.create` method that supports a `request_timeout` param. This param takes a `Union[float, Tuple[float, float]]` and will raise an `openai.error.Timeout` error if the request exceeds that time in seconds (see https://requests.readthedocs.io/en/latest/user/quickstart/#timeouts).

### Microsoft Azure Endpoints

To use the library with Microsoft Azure endpoints, you need to set the `api_type`, `api_base` and `api_version` in addition to the `api_key`. The `api_type` must be set to 'azure', and the others correspond to the properties of your endpoint. In addition, the deployment name must be passed as the engine parameter.

```python
import openai
openai.api_type = "azure"
openai.api_key = "..."
openai.api_base = "https://example-endpoint.openai.azure.com"
openai.api_version = "2023-03-15-preview"

# create a completion
completion = openai.Completion.create(deployment_id="deployment-name", prompt="Hello world")

# print the completion
print(completion.choices[0].text)
```

Please note that for the moment, the Microsoft Azure endpoints can only be used for completion, embedding, and fine-tuning operations.
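For instance (the deployment name below is a hypothetical placeholder), an embedding request against an Azure endpoint follows the same pattern, passing the name of an embedding-model deployment as the engine:

```python
import openai

openai.api_type = "azure"
openai.api_key = "..."
openai.api_base = "https://example-endpoint.openai.azure.com"
openai.api_version = "2023-03-15-preview"

# "my-embedding-deployment" is a placeholder for your own deployment name
embedding = openai.Embedding.create(
    input="sample text",
    engine="my-embedding-deployment",
)["data"][0]["embedding"]
print(len(embedding))
```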
For a detailed example of how to use fine-tuning and other operations using Azure endpoints, please check out the following Jupyter notebooks:

* [Using Azure completions](https://github.com/openai/openai-cookbook/tree/main/examples/azure/completions.ipynb)
* [Using Azure fine-tuning](https://github.com/openai/openai-cookbook/tree/main/examples/azure/finetuning.ipynb)
* [Using Azure embeddings](https://github.com/openai/openai-cookbook/blob/main/examples/azure/embeddings.ipynb)

### Microsoft Azure Active Directory Authentication

To use Microsoft Azure Active Directory to authenticate to your Azure endpoint, set the `api_type` to "azure_ad" and pass the acquired credential token to `api_key`. The rest of the parameters need to be set as specified in the previous section.

```python
from azure.identity import DefaultAzureCredential
import openai

# Request credential
default_credential = DefaultAzureCredential()
token = default_credential.get_token("https://cognitiveservices.azure.com/.default")

# Setup parameters
openai.api_type = "azure_ad"
openai.api_key = token.token
openai.api_base = "https://example-endpoint.openai.azure.com/"
openai.api_version = "2023-03-15-preview"

# ...
```

### Command-line interface

This library additionally provides an `openai` command-line utility which makes it easy to interact with the API from your terminal. Run `openai api -h` for usage.

```sh
# list models
openai api models.list

# create a completion
openai api completions.create -m ada -p "Hello world"

# create a chat completion
openai api chat_completions.create -m gpt-3.5-turbo -g user "Hello world"

# generate images via DALL·E API
openai api image.create -p "two dogs playing chess, cartoon" -n 1
```

## Example code

Examples of how to use this Python library to accomplish various tasks can be found in the [OpenAI Cookbook](https://github.com/openai/openai-cookbook/). It contains code examples for:

* Classification using fine-tuning
* Clustering
* Code search
* Customizing embeddings
* Question answering from a corpus of documents
* Recommendations
* Visualization of embeddings
* And more

Prior to July 2022, this OpenAI Python library hosted code examples in its examples folder, but since then all examples have been migrated to the [OpenAI Cookbook](https://github.com/openai/openai-cookbook/).

### Chat

Conversational models such as `gpt-3.5-turbo` can be called using the chat completions endpoint.

```python
import openai
openai.api_key = "sk-..."  # supply your API key however you choose

completion = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "Hello world!"}])
print(completion.choices[0].message.content)
```

### Embeddings

In the OpenAI Python library, an embedding represents a text string as a fixed-length vector of floating point numbers. Embeddings are designed to measure the similarity or relevance between text strings.

To get an embedding for a text string, you can use the embeddings method as follows in Python:

```python
import openai
openai.api_key = "sk-..."  # supply your API key however you choose

# choose text to embed
text_string = "sample text"

# choose an embedding
model_id = "text-similarity-davinci-001"

# compute the embedding of the text
embedding = openai.Embedding.create(input=text_string, model=model_id)['data'][0]['embedding']
```

An example of how to call the embeddings method is shown in this [get embeddings notebook](https://github.com/openai/openai-cookbook/blob/main/examples/Get_embeddings.ipynb).
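Since embeddings are compared by vector similarity, here is a small sketch of scoring two strings with cosine similarity (it assumes `numpy`, which the `embeddings`/`datalib` extras pull in, and reuses the model name from the example above):

```python
import numpy as np
import openai

openai.api_key = "sk-..."  # supply your API key however you choose

def embed(text, model_id="text-similarity-davinci-001"):
    resp = openai.Embedding.create(input=text, model=model_id)
    return np.array(resp["data"][0]["embedding"])

a = embed("The cat sat on the mat.")
b = embed("A feline rested on the rug.")

# cosine similarity: dot product divided by the product of the vector norms
similarity = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
print(similarity)
```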
Examples of how to use embeddings are shared in the following Jupyter notebooks:

- [Classification using embeddings](https://github.com/openai/openai-cookbook/blob/main/examples/Classification_using_embeddings.ipynb)
- [Clustering using embeddings](https://github.com/openai/openai-cookbook/blob/main/examples/Clustering.ipynb)
- [Code search using embeddings](https://github.com/openai/openai-cookbook/blob/main/examples/Code_search.ipynb)
- [Semantic text search using embeddings](https://github.com/openai/openai-cookbook/blob/main/examples/Semantic_text_search_using_embeddings.ipynb)
- [User and product embeddings](https://github.com/openai/openai-cookbook/blob/main/examples/User_and_product_embeddings.ipynb)
- [Zero-shot classification using embeddings](https://github.com/openai/openai-cookbook/blob/main/examples/Zero-shot_classification_with_embeddings.ipynb)
- [Recommendation using embeddings](https://github.com/openai/openai-cookbook/blob/main/examples/Recommendation_using_embeddings.ipynb)

For more information on embeddings and the types of embeddings OpenAI offers, read the [embeddings guide](https://beta.openai.com/docs/guides/embeddings) in the OpenAI documentation.

### Fine-tuning

Fine-tuning a model on training data can both improve the results (by giving the model more examples to learn from) and reduce the cost/latency of API calls (chiefly through reducing the need to include training examples in prompts).

Examples of fine-tuning are shared in the following Jupyter notebooks:

- [Classification with fine-tuning](https://github.com/openai/openai-cookbook/blob/main/examples/Fine-tuned_classification.ipynb) (a simple notebook that shows the steps required for fine-tuning)
- Fine-tuning a model that answers questions about the 2020 Olympics
  - [Step 1: Collecting data](https://github.com/openai/openai-cookbook/blob/main/examples/fine-tuned_qa/olympics-1-collect-data.ipynb)
  - [Step 2: Creating a synthetic Q&A dataset](https://github.com/openai/openai-cookbook/blob/main/examples/fine-tuned_qa/olympics-2-create-qa.ipynb)
  - [Step 3: Train a fine-tuning model specialized for Q&A](https://github.com/openai/openai-cookbook/blob/main/examples/fine-tuned_qa/olympics-3-train-qa.ipynb)

Sync your fine-tunes to [Weights & Biases](https://wandb.me/openai-docs) to track experiments, models, and datasets in your central dashboard with:

```bash
openai wandb sync
```

For more information on fine-tuning, read the [fine-tuning guide](https://beta.openai.com/docs/guides/fine-tuning) in the OpenAI documentation.

### Moderation

OpenAI provides a Moderation endpoint that can be used to check whether content complies with the OpenAI [content policy](https://platform.openai.com/docs/usage-policies).

```python
import openai
openai.api_key = "sk-..."  # supply your API key however you choose

moderation_resp = openai.Moderation.create(input="Here is some perfectly innocuous text that follows all OpenAI content policies.")
```

See the [moderation guide](https://platform.openai.com/docs/guides/moderation) for more details.

## Image generation (DALL·E)

```python
import openai
openai.api_key = "sk-..."  # supply your API key however you choose

image_resp = openai.Image.create(prompt="two dogs playing chess, oil painting", n=4, size="512x512")
```
## Audio transcription (Whisper)

```python
import openai
openai.api_key = "sk-..."  # supply your API key however you choose

f = open("path/to/file.mp3", "rb")
transcript = openai.Audio.transcribe("whisper-1", f)
```

## Async API

Async support is available in the API by prepending `a` to a network-bound method:

```python
import openai
openai.api_key = "sk-..."  # supply your API key however you choose

async def create_completion():
    completion_resp = await openai.Completion.acreate(prompt="This is a test", model="davinci")
```

To make async requests more efficient, you can pass in your own `aiohttp.ClientSession`, but you must manually close the client session at the end of your program/event loop:

```python
import openai
from aiohttp import ClientSession

openai.aiosession.set(ClientSession())

# At the end of your program, close the http session
await openai.aiosession.get().close()
```

See the [usage guide](https://platform.openai.com/docs/guides/images) for more details.

## Requirements

- Python 3.7.1+

In general, we want to support the versions of Python that our customers are using. If you run into problems with any version issues, please let us know on our [support page](https://help.openai.com/en/).

## Credit

This library is forked from the [Stripe Python Library](https://github.com/stripe/stripe-python).

%prep
%autosetup -n openai-0.27.4

%build
%py3_build

%install
%py3_install
install -d -m755 %{buildroot}/%{_pkgdocdir}
if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
pushd %{buildroot}
if [ -d usr/lib ]; then
    find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/lib64 ]; then
    find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/bin ]; then
    find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/sbin ]; then
    find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
fi
touch doclist.lst
if [ -d usr/share/man ]; then
    find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
fi
popd
mv %{buildroot}/filelist.lst .
mv %{buildroot}/doclist.lst .

%files -n python3-openai -f filelist.lst
%dir %{python3_sitelib}/*

%files help -f doclist.lst
%{_docdir}/*

%changelog
* Mon Apr 10 2023 Python_Bot - 0.27.4-1
- Package Spec generated