authorCoprDistGit <infra@openeuler.org>2023-05-05 09:15:42 +0000
committerCoprDistGit <infra@openeuler.org>2023-05-05 09:15:42 +0000
commitd41c5836fe4a27dec8802d6a4c281569bb04a466 (patch)
tree8b7d27a9b28c8fd967bccc295f98b5b9e2010112
parent24c40e1170736af9dbbe948477411d9f46293f30 (diff)
automatic import of python-pyotritonclientopeneuler20.03
-rw-r--r--.gitignore1
-rw-r--r--python-pyotritonclient.spec430
-rw-r--r--sources1
3 files changed, 432 insertions, 0 deletions
diff --git a/.gitignore b/.gitignore
index e69de29..15bd82c 100644
--- a/.gitignore
+++ b/.gitignore
@@ -0,0 +1 @@
+/pyotritonclient-0.2.5.tar.gz
diff --git a/python-pyotritonclient.spec b/python-pyotritonclient.spec
new file mode 100644
index 0000000..565af12
--- /dev/null
+++ b/python-pyotritonclient.spec
@@ -0,0 +1,430 @@
+%global _empty_manifest_terminate_build 0
+Name: python-pyotritonclient
+Version: 0.2.5
+Release: 1
+Summary: A lightweight http client library for communicating with Nvidia Triton Inference Server (with Pyodide support in the browser)
+License: BSD
+URL: https://github.com/oeway/pyotritonclient
+Source0: https://mirrors.nju.edu.cn/pypi/web/packages/d7/08/e36fa0510c40f278bb068b3348f4d1ff0b3e0a2eedf081d1f33bd9e05220/pyotritonclient-0.2.5.tar.gz
+BuildArch: noarch
+
+Requires: python3-six
+Requires: python3-numpy
+Requires: python3-imjoy-rpc
+Requires: python3-msgpack
+Requires: python3-requests
+Requires: python3-rapidjson
+
+%description
+# Triton HTTP Client for Pyodide
+
+A Pyodide python http client library and utilities for communicating with Triton Inference Server (based on tritonclient from NVIDIA).
+
+
+This is a simplified implementation of the Triton client from NVIDIA. It works both in the browser with Pyodide Python and in native Python.
+It implements only the HTTP client; most of the API remains similar, but the methods are async and additional utility functions are provided.
+
+## Installation
+
+To use it in native CPython, you can install the package by running:
+```
+pip install pyotritonclient
+```
+
+For a Pyodide-based Python environment, for example [JupyterLite](https://jupyterlite.readthedocs.io/en/latest/_static/lab/index.html) or the [Pyodide console](https://pyodide-cdn2.iodide.io/dev/full/console.html), you can install the client by running the following Python code:
+```python
+import micropip
+micropip.install("pyotritonclient")
+```
+## Usage
+
+### Basic example
+To execute the model, we provide utility functions to make it much easier:
+```python
+import numpy as np
+from pyotritonclient import execute
+
+# create fake input tensors
+input0 = np.zeros([2, 349, 467], dtype='float32')
+# run inference
+results = await execute(inputs=[input0, {"diameter": 30}], server_url='https://ai.imjoy.io/triton', model_name='cellpose-python')
+```
+
+The above example assumes you are running the code in a Jupyter notebook or another environment that supports top-level await. If you are running the example code in a normal Python script, wrap the code in an async function and run it with asyncio as follows:
+```python
+import asyncio
+import numpy as np
+from pyotritonclient import execute
+
+async def run():
+ results = await execute(inputs=[np.zeros([2, 349, 467], dtype='float32'), {"diameter": 30}], server_url='https://ai.imjoy.io/triton', model_name='cellpose-python')
+ print(results)
+
+asyncio.run(run())
+```
+
+You can also access the lower-level API; see the [test example](./tests/test_client.py).
+
+The official [client examples](https://github.com/triton-inference-server/client/tree/main/src/python/examples) also demonstrate how to use the
+package to issue requests to the [triton inference server](https://github.com/triton-inference-server/server). Note, however, that you will need to
+change the HTTP client code to async style: for example, instead of `client.infer(...)`, use `await client.infer(...)`.
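The conversion pattern can be illustrated with a self-contained sketch. The stub class below is hypothetical and only stands in for the real HTTP client, whose request methods are coroutines in this package:

```python
import asyncio

class StubAsyncClient:
    """Hypothetical stand-in for the package's async HTTP client.
    The real client issues HTTP requests; this stub just echoes its
    arguments so the await pattern can be demonstrated offline."""

    async def infer(self, model_name, inputs):
        # A real client would POST the inputs to the server here.
        return {"model_name": model_name, "outputs": inputs}

async def main():
    client = StubAsyncClient()
    # Unlike the synchronous official client (`client.infer(...)`),
    # every request method here must be awaited.
    return await client.infer("cellpose-python", [1, 2, 3])

result = asyncio.run(main())
print(result["model_name"])
```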
+
+The http client code is forked from [triton client git repo](https://github.com/triton-inference-server/client) since commit [b3005f9db154247a4c792633e54f25f35ccadff0](https://github.com/triton-inference-server/client/tree/b3005f9db154247a4c792633e54f25f35ccadff0).
+
+
+### Using the sequence executor with stateful models
+To simplify working with stateful models that use sequences, we also provide the `SequenceExcutor` class to make it easier to run models in a sequence.
+```python
+from pyotritonclient import SequenceExcutor
+
+
+seq = SequenceExcutor(
+ server_url='https://ai.imjoy.io/triton',
+ model_name='cellpose-train',
+ sequence_id=100
+)
+for (image, labels, info) in train_samples:
+    inputs = [
+        image.astype('float32'),
+        labels.astype('float32'),
+        {"steps": 1, "resume": True}
+    ]
+    result = await seq.step(inputs)
+
+result = await seq.end(inputs)
+```
+
+Note that the above example ends the sequence by calling `seq.end()` with the last inputs sent again. If you want to specify different inputs for the final step, run `result = await seq.end(inputs)` with those inputs.
+
+For a small batch of data, you can also run it like this:
+```python
+from pyotritonclient import SequenceExcutor
+
+seq = SequenceExcutor(
+ server_url='https://ai.imjoy.io/triton',
+ model_name='cellpose-train',
+ sequence_id=100
+)
+
+# a list of inputs
+inputs_batch = [[
+ image.astype('float32'),
+ labels.astype('float32'),
+ {"steps": 1, "resume": True}
+] for (image, labels, _) in train_samples]
+
+def on_step(i, result):
+ """Function called on every step"""
+ print(i)
+
+results = await seq(inputs_batch, on_step=on_step)
+```
+
+
+
+## Server setup
+Since the server is accessed from the browser environment, which typically has more security restrictions, it is important that the server is configured to allow browser access.
+
+Please make sure the server is configured as follows:
+ * The server must provide HTTPS endpoints instead of HTTP
+ * The server should send the following headers:
+ - `Access-Control-Allow-Headers: Inference-Header-Content-Length,Accept-Encoding,Content-Encoding,Access-Control-Allow-Headers`
+ - `Access-Control-Expose-Headers: Inference-Header-Content-Length,Range,Origin,Content-Type`
+ - `Access-Control-Allow-Methods: GET,HEAD,OPTIONS,PUT,POST`
+ - `Access-Control-Allow-Origin: *` (optional; set this only if you want to allow cross-origin requests from any site)
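A quick way to sanity-check a deployment is to inspect the response headers it sends. The helper below is a minimal sketch, not part of this package, that reports which of the required CORS headers are missing from a captured header dict:

```python
# Required response headers from the checklist above.
REQUIRED_CORS_HEADERS = [
    "Access-Control-Allow-Headers",
    "Access-Control-Expose-Headers",
    "Access-Control-Allow-Methods",
]

def missing_cors_headers(response_headers):
    """Return the required CORS headers absent from a response.
    Header names are compared case-insensitively, as HTTP requires."""
    present = {name.lower() for name in response_headers}
    return [h for h in REQUIRED_CORS_HEADERS if h.lower() not in present]

# Example: a server that only sets the Allow-Methods header.
print(missing_cors_headers({"access-control-allow-methods": "GET,POST"}))
```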
+
+
+%package -n python3-pyotritonclient
+Summary: A lightweight http client library for communicating with Nvidia Triton Inference Server (with Pyodide support in the browser)
+Provides: python-pyotritonclient
+BuildRequires: python3-devel
+BuildRequires: python3-setuptools
+BuildRequires: python3-pip
+%description -n python3-pyotritonclient
+# Triton HTTP Client for Pyodide
+
+A Pyodide python http client library and utilities for communicating with Triton Inference Server (based on tritonclient from NVIDIA).
+
+
+This is a simplified implementation of the Triton client from NVIDIA. It works both in the browser with Pyodide Python and in native Python.
+It implements only the HTTP client; most of the API remains similar, but the methods are async and additional utility functions are provided.
+
+## Installation
+
+To use it in native CPython, you can install the package by running:
+```
+pip install pyotritonclient
+```
+
+For a Pyodide-based Python environment, for example [JupyterLite](https://jupyterlite.readthedocs.io/en/latest/_static/lab/index.html) or the [Pyodide console](https://pyodide-cdn2.iodide.io/dev/full/console.html), you can install the client by running the following Python code:
+```python
+import micropip
+micropip.install("pyotritonclient")
+```
+## Usage
+
+### Basic example
+To execute the model, we provide utility functions to make it much easier:
+```python
+import numpy as np
+from pyotritonclient import execute
+
+# create fake input tensors
+input0 = np.zeros([2, 349, 467], dtype='float32')
+# run inference
+results = await execute(inputs=[input0, {"diameter": 30}], server_url='https://ai.imjoy.io/triton', model_name='cellpose-python')
+```
+
+The above example assumes you are running the code in a Jupyter notebook or another environment that supports top-level await. If you are running the example code in a normal Python script, wrap the code in an async function and run it with asyncio as follows:
+```python
+import asyncio
+import numpy as np
+from pyotritonclient import execute
+
+async def run():
+ results = await execute(inputs=[np.zeros([2, 349, 467], dtype='float32'), {"diameter": 30}], server_url='https://ai.imjoy.io/triton', model_name='cellpose-python')
+ print(results)
+
+asyncio.run(run())
+```
+
+You can also access the lower-level API; see the [test example](./tests/test_client.py).
+
+The official [client examples](https://github.com/triton-inference-server/client/tree/main/src/python/examples) also demonstrate how to use the
+package to issue requests to the [triton inference server](https://github.com/triton-inference-server/server). Note, however, that you will need to
+change the HTTP client code to async style: for example, instead of `client.infer(...)`, use `await client.infer(...)`.
+
+The http client code is forked from [triton client git repo](https://github.com/triton-inference-server/client) since commit [b3005f9db154247a4c792633e54f25f35ccadff0](https://github.com/triton-inference-server/client/tree/b3005f9db154247a4c792633e54f25f35ccadff0).
+
+
+### Using the sequence executor with stateful models
+To simplify working with stateful models that use sequences, we also provide the `SequenceExcutor` class to make it easier to run models in a sequence.
+```python
+from pyotritonclient import SequenceExcutor
+
+
+seq = SequenceExcutor(
+ server_url='https://ai.imjoy.io/triton',
+ model_name='cellpose-train',
+ sequence_id=100
+)
+for (image, labels, info) in train_samples:
+    inputs = [
+        image.astype('float32'),
+        labels.astype('float32'),
+        {"steps": 1, "resume": True}
+    ]
+    result = await seq.step(inputs)
+
+result = await seq.end(inputs)
+```
+
+Note that the above example ends the sequence by calling `seq.end()` with the last inputs sent again. If you want to specify different inputs for the final step, run `result = await seq.end(inputs)` with those inputs.
+
+For a small batch of data, you can also run it like this:
+```python
+from pyotritonclient import SequenceExcutor
+
+seq = SequenceExcutor(
+ server_url='https://ai.imjoy.io/triton',
+ model_name='cellpose-train',
+ sequence_id=100
+)
+
+# a list of inputs
+inputs_batch = [[
+ image.astype('float32'),
+ labels.astype('float32'),
+ {"steps": 1, "resume": True}
+] for (image, labels, _) in train_samples]
+
+def on_step(i, result):
+ """Function called on every step"""
+ print(i)
+
+results = await seq(inputs_batch, on_step=on_step)
+```
+
+
+
+## Server setup
+Since the server is accessed from the browser environment, which typically has more security restrictions, it is important that the server is configured to allow browser access.
+
+Please make sure the server is configured as follows:
+ * The server must provide HTTPS endpoints instead of HTTP
+ * The server should send the following headers:
+ - `Access-Control-Allow-Headers: Inference-Header-Content-Length,Accept-Encoding,Content-Encoding,Access-Control-Allow-Headers`
+ - `Access-Control-Expose-Headers: Inference-Header-Content-Length,Range,Origin,Content-Type`
+ - `Access-Control-Allow-Methods: GET,HEAD,OPTIONS,PUT,POST`
+ - `Access-Control-Allow-Origin: *` (optional; set this only if you want to allow cross-origin requests from any site)
+
+
+%package help
+Summary: Development documents and examples for pyotritonclient
+Provides: python3-pyotritonclient-doc
+%description help
+# Triton HTTP Client for Pyodide
+
+A Pyodide python http client library and utilities for communicating with Triton Inference Server (based on tritonclient from NVIDIA).
+
+
+This is a simplified implementation of the Triton client from NVIDIA. It works both in the browser with Pyodide Python and in native Python.
+It implements only the HTTP client; most of the API remains similar, but the methods are async and additional utility functions are provided.
+
+## Installation
+
+To use it in native CPython, you can install the package by running:
+```
+pip install pyotritonclient
+```
+
+For a Pyodide-based Python environment, for example [JupyterLite](https://jupyterlite.readthedocs.io/en/latest/_static/lab/index.html) or the [Pyodide console](https://pyodide-cdn2.iodide.io/dev/full/console.html), you can install the client by running the following Python code:
+```python
+import micropip
+micropip.install("pyotritonclient")
+```
+## Usage
+
+### Basic example
+To execute the model, we provide utility functions to make it much easier:
+```python
+import numpy as np
+from pyotritonclient import execute
+
+# create fake input tensors
+input0 = np.zeros([2, 349, 467], dtype='float32')
+# run inference
+results = await execute(inputs=[input0, {"diameter": 30}], server_url='https://ai.imjoy.io/triton', model_name='cellpose-python')
+```
+
+The above example assumes you are running the code in a Jupyter notebook or another environment that supports top-level await. If you are running the example code in a normal Python script, wrap the code in an async function and run it with asyncio as follows:
+```python
+import asyncio
+import numpy as np
+from pyotritonclient import execute
+
+async def run():
+ results = await execute(inputs=[np.zeros([2, 349, 467], dtype='float32'), {"diameter": 30}], server_url='https://ai.imjoy.io/triton', model_name='cellpose-python')
+ print(results)
+
+asyncio.run(run())
+```
+
+You can also access the lower-level API; see the [test example](./tests/test_client.py).
+
+The official [client examples](https://github.com/triton-inference-server/client/tree/main/src/python/examples) also demonstrate how to use the
+package to issue requests to the [triton inference server](https://github.com/triton-inference-server/server). Note, however, that you will need to
+change the HTTP client code to async style: for example, instead of `client.infer(...)`, use `await client.infer(...)`.
+
+The http client code is forked from [triton client git repo](https://github.com/triton-inference-server/client) since commit [b3005f9db154247a4c792633e54f25f35ccadff0](https://github.com/triton-inference-server/client/tree/b3005f9db154247a4c792633e54f25f35ccadff0).
+
+
+### Using the sequence executor with stateful models
+To simplify working with stateful models that use sequences, we also provide the `SequenceExcutor` class to make it easier to run models in a sequence.
+```python
+from pyotritonclient import SequenceExcutor
+
+
+seq = SequenceExcutor(
+ server_url='https://ai.imjoy.io/triton',
+ model_name='cellpose-train',
+ sequence_id=100
+)
+for (image, labels, info) in train_samples:
+    inputs = [
+        image.astype('float32'),
+        labels.astype('float32'),
+        {"steps": 1, "resume": True}
+    ]
+    result = await seq.step(inputs)
+
+result = await seq.end(inputs)
+```
+
+Note that the above example ends the sequence by calling `seq.end()` with the last inputs sent again. If you want to specify different inputs for the final step, run `result = await seq.end(inputs)` with those inputs.
+
+For a small batch of data, you can also run it like this:
+```python
+from pyotritonclient import SequenceExcutor
+
+seq = SequenceExcutor(
+ server_url='https://ai.imjoy.io/triton',
+ model_name='cellpose-train',
+ sequence_id=100
+)
+
+# a list of inputs
+inputs_batch = [[
+ image.astype('float32'),
+ labels.astype('float32'),
+ {"steps": 1, "resume": True}
+] for (image, labels, _) in train_samples]
+
+def on_step(i, result):
+ """Function called on every step"""
+ print(i)
+
+results = await seq(inputs_batch, on_step=on_step)
+```
+
+
+
+## Server setup
+Since the server is accessed from the browser environment, which typically has more security restrictions, it is important that the server is configured to allow browser access.
+
+Please make sure the server is configured as follows:
+ * The server must provide HTTPS endpoints instead of HTTP
+ * The server should send the following headers:
+ - `Access-Control-Allow-Headers: Inference-Header-Content-Length,Accept-Encoding,Content-Encoding,Access-Control-Allow-Headers`
+ - `Access-Control-Expose-Headers: Inference-Header-Content-Length,Range,Origin,Content-Type`
+ - `Access-Control-Allow-Methods: GET,HEAD,OPTIONS,PUT,POST`
+ - `Access-Control-Allow-Origin: *` (optional; set this only if you want to allow cross-origin requests from any site)
+
+
+%prep
+%autosetup -n pyotritonclient-0.2.5
+
+%build
+%py3_build
+
+%install
+%py3_install
+install -d -m755 %{buildroot}/%{_pkgdocdir}
+if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
+if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
+if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
+if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
+pushd %{buildroot}
+if [ -d usr/lib ]; then
+ find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/lib64 ]; then
+ find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/bin ]; then
+ find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/sbin ]; then
+ find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+touch doclist.lst
+if [ -d usr/share/man ]; then
+ find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
+fi
+popd
+mv %{buildroot}/filelist.lst .
+mv %{buildroot}/doclist.lst .
+
+%files -n python3-pyotritonclient -f filelist.lst
+%dir %{python3_sitelib}/*
+
+%files help -f doclist.lst
+%{_docdir}/*
+
+%changelog
+* Fri May 05 2023 Python_Bot <Python_Bot@openeuler.org> - 0.2.5-1
+- Package Spec generated
diff --git a/sources b/sources
new file mode 100644
index 0000000..c0846ea
--- /dev/null
+++ b/sources
@@ -0,0 +1 @@
+e9acebee05f382322203115d664d5172 pyotritonclient-0.2.5.tar.gz