%global _empty_manifest_terminate_build 0
Name:           python-batchkit
Version:        0.9.14
Release:        1
Summary:        Generic batch processing framework for managing the orchestration, dispatch, fault tolerance, and monitoring of arbitrary work items against many endpoints. Extensible via dependency injection.
License:        MIT
URL:            https://github.com/microsoft/batch-processing-kit
Source0:        https://mirrors.aliyun.com/pypi/web/packages/69/91/fcc0612ea3be57232510057f0d8cde37d679b98680856cae384e8cb191a7/batchkit-0.9.14.tar.gz
BuildArch:      noarch

Requires:       python3-requests
Requires:       python3-Cerberus
Requires:       python3-deepdiff
Requires:       python3-PyYAML
Requires:       python3-mock
Requires:       python3-Flask
Requires:       python3-jsonpickle
Requires:       python3-psutil
Requires:       python3-pyinotify

%description
# Introduction
Generic batch processing framework for managing the orchestration, dispatch, fault tolerance, and
monitoring of arbitrary work items against many endpoints. Extensible via dependency injection.
Worker endpoints can be local, remote, containers, cloud APIs, different processes, or even just
different listener sockets in the same process. Includes examples against Azure Cognitive Service
containers for ML eval workloads.

# Consuming
The framework can be built on via the template method pattern and dependency injection. One simply
needs to provide concrete implementations of the following types (a minimal illustrative sketch
appears after the Building section below):

- `WorkItemRequest`: Encapsulates all the details needed by the `WorkItemProcessor` to process a work item.
- `WorkItemResult`: Representation of the outcome of an attempt to process a `WorkItemRequest`.
- `WorkItemProcessor`: Provides the implementation for processing a `WorkItemRequest` against an endpoint.
- `BatchRequest`: Represents a batch of work items to do. Produces a collection of `WorkItemRequest`s.
- `BatchConfig`: Details needed for a `BatchRequest` to produce the collection of `WorkItemRequest`s.
- `BatchRunSummarizer`: Implements a near-real-time status updater based on `WorkItemResult`s as the batch progresses.
- `EndpointStatusChecker`: Specifies how to determine whether an endpoint is healthy and ready to take on work from a `WorkItemProcessor`.

The [Speech Batch Kit](https://github.com/microsoft/batch-processing-kit/blob/master/batchkit_examples/speech_sdk/README.md)
is currently our prime example of consuming the framework.

The `batchkit` package is available as an ordinary PyPI package. See versions here:
https://pypi.org/project/batchkit

# Dev Environment
This project is developed for and consumed in Linux environments. Consumers also use WSL2, and other
POSIX platforms may be compatible but are untested. For development and deployment outside of a
container, we recommend using a Python virtual environment to install the `requirements.txt`.
The [Speech Batch Kit](https://github.com/microsoft/batch-processing-kit/blob/master/batchkit_examples/speech_sdk/README.md)
example [builds a container](https://github.com/microsoft/batch-processing-kit/blob/master/batchkit_examples/speech_sdk/build-docker).

## Tests
This project uses both unit tests (`run-tests`) and stress tests (`run-stress-tests`) for functional verification.

## Building
There are currently 3 artifacts:
- The PyPI package of the batchkit framework library.
- The PyPI package of batchkit-examples-speechsdk.
- The Docker container image for speech-batch-kit.
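As a rough illustration of the extension points listed in the Consuming section above, the sketch
below shows the general shape of a consumer's concrete types. It deliberately does not import
`batchkit`: all class and method names here are hypothetical stand-ins, and the real base-class
signatures should be taken from the library itself.

```python
# Illustrative only: plain-Python classes mimicking the shape of the batchkit
# extension points (WorkItemRequest, WorkItemResult, WorkItemProcessor, ...).
# The real base classes live in the batchkit package and may differ.
from dataclasses import dataclass
from typing import List


@dataclass
class UppercaseWorkItemRequest:      # stands in for a WorkItemRequest subclass
    """Everything the processor needs to handle one work item."""
    item_id: str
    text: str


@dataclass
class UppercaseWorkItemResult:       # stands in for a WorkItemResult subclass
    """Outcome of one processing attempt."""
    item_id: str
    succeeded: bool
    output: str = ""


class UppercaseProcessor:            # stands in for a WorkItemProcessor subclass
    """How a single request is executed against one endpoint."""
    def process(self, request: UppercaseWorkItemRequest, endpoint: str) -> UppercaseWorkItemResult:
        # A real processor would call the endpoint; here we just transform locally.
        return UppercaseWorkItemResult(request.item_id, True, request.text.upper())


@dataclass
class UppercaseBatchConfig:          # stands in for a BatchConfig subclass
    texts: List[str]


class UppercaseBatchRequest:         # stands in for a BatchRequest subclass
    """Expands a BatchConfig into the individual WorkItemRequests."""
    def __init__(self, config: UppercaseBatchConfig):
        self.config = config

    def work_items(self) -> List[UppercaseWorkItemRequest]:
        return [UppercaseWorkItemRequest(str(i), t) for i, t in enumerate(self.config.texts)]


if __name__ == "__main__":
    processor = UppercaseProcessor()
    batch = UppercaseBatchRequest(UppercaseBatchConfig(["hello", "world"]))
    for req in batch.work_items():
        print(processor.process(req, endpoint="http://localhost:5000"))
```

With the real framework, concrete types like these are handed to the orchestrator through
dependency injection rather than driven by hand as in the `__main__` block above.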
# Examples
### Speech Batch Kit
The Speech Batch Kit (batchkit_examples/speech_sdk) uses the framework to produce a tool for
transcribing very large numbers of audio files against Azure Cognitive Service Speech containers
or cloud endpoints.

For an introduction, see the [Azure Cognitive Services page](https://docs.microsoft.com/azure/cognitive-services/speech-service/speech-container-batch-processing).
For detailed information, see the [Speech Batch Kit's README](https://github.com/microsoft/batch-processing-kit/blob/master/batchkit_examples/speech_sdk/README.md).

# Contributing
This project welcomes contributions and suggestions. Most contributions require you to agree to a
Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us
the rights to use your contribution. For details, visit https://cla.microsoft.com.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide
a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions
provided by the bot. You will only need to do this once across all repositories using our CLA.

This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/)
or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.

%package -n python3-batchkit
Summary:        Generic batch processing framework for managing the orchestration, dispatch, fault tolerance, and monitoring of arbitrary work items against many endpoints. Extensible via dependency injection.
Provides:       python-batchkit
BuildRequires:  python3-devel
BuildRequires:  python3-setuptools
BuildRequires:  python3-pip

%description -n python3-batchkit
# Introduction
Generic batch processing framework for managing the orchestration, dispatch, fault tolerance, and
monitoring of arbitrary work items against many endpoints. Extensible via dependency injection.
Worker endpoints can be local, remote, containers, cloud APIs, different processes, or even just
different listener sockets in the same process. Includes examples against Azure Cognitive Service
containers for ML eval workloads.

# Consuming
The framework can be built on via the template method pattern and dependency injection. One simply
needs to provide concrete implementations of the following types:

- `WorkItemRequest`: Encapsulates all the details needed by the `WorkItemProcessor` to process a work item.
- `WorkItemResult`: Representation of the outcome of an attempt to process a `WorkItemRequest`.
- `WorkItemProcessor`: Provides the implementation for processing a `WorkItemRequest` against an endpoint.
- `BatchRequest`: Represents a batch of work items to do. Produces a collection of `WorkItemRequest`s.
- `BatchConfig`: Details needed for a `BatchRequest` to produce the collection of `WorkItemRequest`s.
- `BatchRunSummarizer`: Implements a near-real-time status updater based on `WorkItemResult`s as the batch progresses.
- `EndpointStatusChecker`: Specifies how to determine whether an endpoint is healthy and ready to take on work from a `WorkItemProcessor`.

The [Speech Batch Kit](https://github.com/microsoft/batch-processing-kit/blob/master/batchkit_examples/speech_sdk/README.md)
is currently our prime example of consuming the framework.

The `batchkit` package is available as an ordinary PyPI package.
See versions here: https://pypi.org/project/batchkit

# Dev Environment
This project is developed for and consumed in Linux environments. Consumers also use WSL2, and other
POSIX platforms may be compatible but are untested. For development and deployment outside of a
container, we recommend using a Python virtual environment to install the `requirements.txt`.
The [Speech Batch Kit](https://github.com/microsoft/batch-processing-kit/blob/master/batchkit_examples/speech_sdk/README.md)
example [builds a container](https://github.com/microsoft/batch-processing-kit/blob/master/batchkit_examples/speech_sdk/build-docker).

## Tests
This project uses both unit tests (`run-tests`) and stress tests (`run-stress-tests`) for functional verification.

## Building
There are currently 3 artifacts:
- The PyPI package of the batchkit framework library.
- The PyPI package of batchkit-examples-speechsdk.
- The Docker container image for speech-batch-kit.

# Examples
### Speech Batch Kit
The Speech Batch Kit (batchkit_examples/speech_sdk) uses the framework to produce a tool for
transcribing very large numbers of audio files against Azure Cognitive Service Speech containers
or cloud endpoints.

For an introduction, see the [Azure Cognitive Services page](https://docs.microsoft.com/azure/cognitive-services/speech-service/speech-container-batch-processing).
For detailed information, see the [Speech Batch Kit's README](https://github.com/microsoft/batch-processing-kit/blob/master/batchkit_examples/speech_sdk/README.md).

# Contributing
This project welcomes contributions and suggestions. Most contributions require you to agree to a
Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us
the rights to use your contribution. For details, visit https://cla.microsoft.com.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide
a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions
provided by the bot. You will only need to do this once across all repositories using our CLA.

This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/)
or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.

%package help
Summary:        Development documents and examples for batchkit
Provides:       python3-batchkit-doc

%description help
# Introduction
Generic batch processing framework for managing the orchestration, dispatch, fault tolerance, and
monitoring of arbitrary work items against many endpoints. Extensible via dependency injection.
Worker endpoints can be local, remote, containers, cloud APIs, different processes, or even just
different listener sockets in the same process. Includes examples against Azure Cognitive Service
containers for ML eval workloads.

# Consuming
The framework can be built on via the template method pattern and dependency injection. One simply
needs to provide concrete implementations of the following types:

- `WorkItemRequest`: Encapsulates all the details needed by the `WorkItemProcessor` to process a work item.
- `WorkItemResult`: Representation of the outcome of an attempt to process a `WorkItemRequest`.
- `WorkItemProcessor`: Provides the implementation for processing a `WorkItemRequest` against an endpoint.
- `BatchRequest`: Represents a batch of work items to do.
  Produces a collection of `WorkItemRequest`s.
- `BatchConfig`: Details needed for a `BatchRequest` to produce the collection of `WorkItemRequest`s.
- `BatchRunSummarizer`: Implements a near-real-time status updater based on `WorkItemResult`s as the batch progresses.
- `EndpointStatusChecker`: Specifies how to determine whether an endpoint is healthy and ready to take on work from a `WorkItemProcessor`.

The [Speech Batch Kit](https://github.com/microsoft/batch-processing-kit/blob/master/batchkit_examples/speech_sdk/README.md)
is currently our prime example of consuming the framework.

The `batchkit` package is available as an ordinary PyPI package. See versions here:
https://pypi.org/project/batchkit

# Dev Environment
This project is developed for and consumed in Linux environments. Consumers also use WSL2, and other
POSIX platforms may be compatible but are untested. For development and deployment outside of a
container, we recommend using a Python virtual environment to install the `requirements.txt`.
The [Speech Batch Kit](https://github.com/microsoft/batch-processing-kit/blob/master/batchkit_examples/speech_sdk/README.md)
example [builds a container](https://github.com/microsoft/batch-processing-kit/blob/master/batchkit_examples/speech_sdk/build-docker).

## Tests
This project uses both unit tests (`run-tests`) and stress tests (`run-stress-tests`) for functional verification.

## Building
There are currently 3 artifacts:
- The PyPI package of the batchkit framework library.
- The PyPI package of batchkit-examples-speechsdk.
- The Docker container image for speech-batch-kit.

# Examples
### Speech Batch Kit
The Speech Batch Kit (batchkit_examples/speech_sdk) uses the framework to produce a tool for
transcribing very large numbers of audio files against Azure Cognitive Service Speech containers
or cloud endpoints.

For an introduction, see the [Azure Cognitive Services page](https://docs.microsoft.com/azure/cognitive-services/speech-service/speech-container-batch-processing).
For detailed information, see the [Speech Batch Kit's README](https://github.com/microsoft/batch-processing-kit/blob/master/batchkit_examples/speech_sdk/README.md).

# Contributing
This project welcomes contributions and suggestions. Most contributions require you to agree to a
Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us
the rights to use your contribution. For details, visit https://cla.microsoft.com.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide
a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions
provided by the bot. You will only need to do this once across all repositories using our CLA.

This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/)
or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.
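As with the sketch in the main package description, the snippet below only illustrates the
`EndpointStatusChecker` role described above: a small readiness probe that decides whether an
endpoint should receive work. It does not use the real batchkit base class, and the `/ready`
health route is an assumption; adapt both to the actual library and endpoint.

```python
# Hypothetical stand-in for an EndpointStatusChecker: decides whether an endpoint
# is healthy enough to be handed work. The real batchkit interface may differ.
import requests


class HttpReadinessChecker:
    def __init__(self, timeout_seconds: float = 2.0):
        self.timeout_seconds = timeout_seconds

    def is_ready(self, endpoint_url: str) -> bool:
        """Return True only if the endpoint answers its health route with HTTP 200."""
        try:
            # '/ready' is an assumed health route; use whatever the endpoint actually exposes.
            response = requests.get(f"{endpoint_url}/ready", timeout=self.timeout_seconds)
            return response.status_code == 200
        except requests.RequestException:
            return False


if __name__ == "__main__":
    checker = HttpReadinessChecker()
    print(checker.is_ready("http://localhost:5000"))
```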
%prep
%autosetup -n batchkit-0.9.14

%build
%py3_build

%install
%py3_install
install -d -m755 %{buildroot}/%{_pkgdocdir}
if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
pushd %{buildroot}
if [ -d usr/lib ]; then
	find usr/lib -type f -printf "\"/%h/%f\"\n" >> filelist.lst
fi
if [ -d usr/lib64 ]; then
	find usr/lib64 -type f -printf "\"/%h/%f\"\n" >> filelist.lst
fi
if [ -d usr/bin ]; then
	find usr/bin -type f -printf "\"/%h/%f\"\n" >> filelist.lst
fi
if [ -d usr/sbin ]; then
	find usr/sbin -type f -printf "\"/%h/%f\"\n" >> filelist.lst
fi
touch doclist.lst
if [ -d usr/share/man ]; then
	find usr/share/man -type f -printf "\"/%h/%f.gz\"\n" >> doclist.lst
fi
popd
mv %{buildroot}/filelist.lst .
mv %{buildroot}/doclist.lst .

%files -n python3-batchkit -f filelist.lst
%dir %{python3_sitelib}/*

%files help -f doclist.lst
%{_docdir}/*

%changelog
* Tue Jun 20 2023 Python_Bot - 0.9.14-1
- Package Spec generated