author    CoprDistGit <infra@openeuler.org>  2023-05-15 04:36:51 +0000
committer CoprDistGit <infra@openeuler.org>  2023-05-15 04:36:51 +0000
commit    7955c1b43f71c05abfde9991fee1037114d6dc10 (patch)
tree      ddb1e4b806d68cc0005a162c0f4edcc7bb6840dd
parent    04769da359fb7dd8dfe81697a7a77f7128856ee5 (diff)
automatic import of python-cortex-serving-client
-rw-r--r--  .gitignore                           1
-rw-r--r--  python-cortex-serving-client.spec  533
-rw-r--r--  sources                              1
3 files changed, 535 insertions, 0 deletions
diff --git a/.gitignore b/.gitignore
index e69de29..c08a048 100644
--- a/.gitignore
+++ b/.gitignore
@@ -0,0 +1 @@
+/cortex-serving-client-0.42.1.tar.gz
diff --git a/python-cortex-serving-client.spec b/python-cortex-serving-client.spec
new file mode 100644
index 0000000..5f83f56
--- /dev/null
+++ b/python-cortex-serving-client.spec
@@ -0,0 +1,533 @@
+%global _empty_manifest_terminate_build 0
+Name: python-cortex-serving-client
+Version: 0.42.1
+Release: 1
+Summary:	Cortex.dev ML Serving Client for Python with garbage API collection
+License:	MIT
+URL: https://github.com/glami/cortex-serving-client
+Source0: https://mirrors.nju.edu.cn/pypi/web/packages/01/71/ac8dbc4b935a49b550a3c6fe2c044ac8eea2e8422537bc42830c2c5c47c6/cortex-serving-client-0.42.1.tar.gz
+BuildArch: noarch
+
+Requires: python3-cortex
+Requires: python3-PyYAML
+Requires: python3-psycopg2-binary
+Requires: python3-boto3
+Requires: python3-psutil
+
+%description
+# Cortex Serving Client
+
+<img src="https://raw.githubusercontent.com/glami/cortex-serving-client/master/cortex-serving-client-logo-2.svg" alt="Cortex Serving Client" style="max-width: 200px">
+
+
+## Warning: Cortex Labs joined Databricks and maintenance may end. Consider migrating to other tools.
+
+Cortex Serving Client makes Python serving automation simple.
+It is a Python wrapper around [Cortex's command-line client](https://cortex.dev) that adds garbage API collection.
+Cortex now has an [official Python client](https://pypi.org/project/cortex/) ([source](https://github.com/cortexlabs/cortex/blob/e22985f8516fe8db930aaecd05269da99d5e7a93/pkg/cortex/client/cortex/client.py)), but this project offers advanced features (garbage collection, temporary deployment, timeouts) not present in the vanilla client.
+
+The main feature of this package is that you can use it on top of a codebase created for Cortex version <= 0.34, meaning that:
+ - the deployment directory is automatically zipped and uploaded to an S3 bucket
+ - we prepared a base Docker image that downloads this zipped code, unzips it, and runs it in a Uvicorn worker
+ - as a result, you can deploy your `PythonPredictor` using Cortex 0.42 without having to wrap it inside your own Docker image
+
+Additional features:
+ - Automate your Cortex AWS cluster from Python.
+ - Prevent accidental charges by auto-removing deployments that exceed their timeout.
+ - Execute operations: deploy, delete, get, get all.
+ - Stream remote logs into the local log with the thread name set to the API name.
+ - Supported Cortex version: 0.40.0 (see `requirements.txt`).
+
+Here is [a video about the package (the Cortex 0.33 version, before the big changes)](https://youtu.be/aU95dBAspr0?t=510).
+
+## How Does It Work?
+
+After implementing your predictor module in a folder (see `example/dummy_dir`),
+you can deploy it to your Cortex cluster,
+and execute a prediction via a POST request.
+
+Here is [a video of the demo below](https://youtu.be/aU95dBAspr0?t=1261).
+
+### Working Example
+Below is a snippet from [example.py](/example/example.py).
+
+The deployment dict has these additional fields compared to the Cortex docs:
+ - `"project_name": <string>` in the deployment root
+   - the name of the project; the zipped source code is uploaded to the S3 path `<project_name>/<api_name>.zip`
+ - `predictor_path`: the module containing your predictor (`cls.__module__`), e.g. `predictors.my_predictor`
+ - optional `predictor_class_name`: the `cls.__name__` of your predictor class; the default is `PythonPredictor`
+ - `"config": <dict>` in the `container` specification
+   - a config dict that will be saved to `predictor_config.json` in the root of the deployment dir
+   - this file can then be loaded in `main.py` and passed to the `PythonPredictor` constructor, as can be seen in `resources/main.py`
+
+
+```python
+from requests import post
+
+# `cortex` is the client instance from cortex_serving_client (see example.py).
+deployment = {
+    "name": "dummy-a",
+    "project_name": "test",
+    "kind": "RealtimeAPI",
+    "predictor_path": "dummy_predictor",
+    "pod": {
+        "containers": [
+            {
+                "config": {"geo": "cz", "model_name": "CoolModel", "version": "000000-000000"},
+                "env": {
+                    "SECRET_ENV_VAR": "secret",
+                },
+                "compute": {"cpu": "200m", "mem": "0.1Gi"},
+            }
+        ],
+    },
+}
+
+# Deploy temporarily: the API is deleted when the block exits or the timeout passes.
+with cortex.deploy_temporarily(
+    deployment,
+    deploy_dir="dummy_dir",
+    api_timeout_sec=30 * 60,
+    verbose=True,
+) as get_result:
+    # Predict via a POST request to the deployed endpoint.
+    response = post(get_result.endpoint, json={}).json()
+```
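+
+For reference, a minimal predictor module matching `predictor_path` above might look like the sketch below. The `__init__(self, config)` / `predict(self, payload)` shape is assumed from the Cortex <= 0.34 `PythonPredictor` convention this README refers to; the file and key names are illustrative.
+
+```python
+# dummy_predictor.py -- a hypothetical minimal predictor; the signatures follow
+# the Cortex <= 0.34 PythonPredictor convention assumed by this README.
+class PythonPredictor:
+    def __init__(self, config):
+        # `config` is the dict from the container's "config" key, loaded from
+        # predictor_config.json by main.py.
+        self.model_name = config["model_name"]
+
+    def predict(self, payload):
+        # A real predictor would run the model here; we just echo the input.
+        return {"model": self.model_name, "input": payload}
+```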
+
+### Required changes for projects using Cortex version <= 0.34
+ - optionally add `main.py` to the root of your Cortex deployment folder
+ - if there is no `main.py` in the root of the deployment folder, the default one from `resources/main.py` is used
+ - restructure your deployment dict to look like the one in `example.py`
+
+### Garbage API Collection
+Garbage API collection auto-removes forgotten APIs to reduce costs.
+
+Each deployed API has a timeout period, configured during deployment, after which it definitely should no longer exist in the cluster.
+This timeout is stored in a Postgres database table.
+The Cortex client periodically checks the currently deployed APIs and removes expired ones from the cluster.
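+
+A minimal sketch of that collection pass, assuming a hypothetical `api_timeout` table with `api_name` and `timeout_at` columns and a `delete_api` callback (the package's real schema and internals may differ):
+
+```python
+# Hypothetical GC pass; table, column, and helper names are illustrative.
+import psycopg2
+
+def collect_expired_apis(dsn, deployed_api_names, delete_api):
+    """Remove every deployed API whose stored timeout has passed."""
+    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
+        cur.execute("SELECT api_name FROM api_timeout WHERE timeout_at < now()")
+        expired = {row[0] for row in cur.fetchall()}
+    for api_name in expired & set(deployed_api_names):
+        delete_api(api_name)  # e.g. a thin wrapper over `cortex delete`
+```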
+
+### Can You Rollback?
+How do you deal with a new model failing in production?
+Can you return to your model's previous working version?
+There is no generic solution that fits everybody,
+but you can implement the one best suiting your needs using this Python API for Cortex.
+Having a plan B is a good idea.
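+
+As one illustrative plan B (not this package's built-in API): keep the last known-good deployment dict and redeploy it when the new API fails a smoke test.
+
+```python
+# Illustrative rollback helper; `deploy` and `healthcheck` stand for whatever
+# deployment call and smoke test your application uses.
+def deploy_with_rollback(deploy, healthcheck, new_deployment, last_good_deployment):
+    deploy(new_deployment)
+    if not healthcheck(new_deployment["name"]):
+        # The new model failed: return to the previous working version.
+        deploy(last_good_deployment)
+```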
+
+## Our Use Case
+We use this project to automate deployment to auto-scalable AWS instances.
+The deployment management is part of application-specific Flask applications,
+which call Python-Cortex-Serving-Client to command an environment-dedicated Cortex cluster.
+
+In cases where multiple environments share a single cluster, a shared Postgres instance for the Cortex database is required.
+
+Read more about our use case in [Cortex Client release blog post](https://medium.com/@aiteamglami/serve-your-ml-models-in-aws-using-python-9908a4127a13).
+Or you can [watch a video about our use case](https://youtu.be/aU95dBAspr0?t=1164).
+
+## Get Started
+This tutorial will help you get [the basic example](/example/example.py) running in under 15 minutes.
+
+### Prerequisites
+- Linux OS
+- Docker
+- Postgres
+
+
+### Setup Database
+Follow the instructions below to configure a local database,
+or configure a cluster database
+and re-configure the DB connection in [the example script](/example/example.py).
+
+```bash
+# Become the postgres superuser and open a psql session.
+sudo su postgres;
+psql postgres postgres;
+
+# Inside psql: create the database and role used by the example.
+create database cortex_test;
+create role cortex_test login password 'cortex_test';
+grant all privileges on database cortex_test to cortex_test;
+```
+
+You may also need to configure local network access:
+```bash
+vi /etc/postgresql/11/main/pg_hba.conf
+# Change the matching line into the following to allow localhost network access:
+# host all all 127.0.0.1/32 trust
+
+sudo systemctl restart postgresql;
+```
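+
+To verify the setup, a quick connectivity check with plain `psycopg2`, using the credentials created above:
+
+```python
+# Sanity-check the cortex_test database created above.
+import psycopg2
+
+conn = psycopg2.connect(
+    host="127.0.0.1",
+    dbname="cortex_test",
+    user="cortex_test",
+    password="cortex_test",
+)
+with conn, conn.cursor() as cur:
+    cur.execute("SELECT 1")
+    assert cur.fetchone() == (1,)
+conn.close()
+```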
+
+### Install Cortex
+The supported [Cortex.dev](https://cortex.dev) version is installed as a Python dependency through `requirements.txt`.
+
+Cortex requires having [Docker](https://docs.docker.com/get-docker/) installed on your machine.
+
+### Deploy Your First Model
+
+The deployment and prediction example resides in [the example script](/example/example.py).
+Make sure you have created a virtual environment and installed the requirements from `requirements.txt` and `requirements-dev.txt`
+before executing it.
+
+## Contact Us
+[Submit an issue](https://github.com/glami/cortex-serving-client/issues) or a pull request if you have any problems or need an extra feature.
+
+
+
+
+%package -n python3-cortex-serving-client
+Summary:	Cortex.dev ML Serving Client for Python with garbage API collection
+Provides: python-cortex-serving-client
+BuildRequires: python3-devel
+BuildRequires: python3-setuptools
+BuildRequires: python3-pip
+%description -n python3-cortex-serving-client
+# Cortex Serving Client
+
+<img src="https://raw.githubusercontent.com/glami/cortex-serving-client/master/cortex-serving-client-logo-2.svg" alt="Cortex Serving Client" style="max-width: 200px">
+
+
+## Warning: Cortex Labs joined Databricks and maintenance may end. Consider migrating to other tools.
+
+Cortex Serving Client makes Python serving automation simple.
+It is a Python wrapper around [Cortex's command-line client](https://cortex.dev) that adds garbage API collection.
+Cortex now has an [official Python client](https://pypi.org/project/cortex/) ([source](https://github.com/cortexlabs/cortex/blob/e22985f8516fe8db930aaecd05269da99d5e7a93/pkg/cortex/client/cortex/client.py)), but this project offers advanced features (garbage collection, temporary deployment, timeouts) not present in the vanilla client.
+
+The main feature of this package is that you can use it on top of a codebase created for Cortex version <= 0.34, meaning that:
+ - the deployment directory is automatically zipped and uploaded to an S3 bucket
+ - we prepared a base Docker image that downloads this zipped code, unzips it, and runs it in a Uvicorn worker
+ - as a result, you can deploy your `PythonPredictor` using Cortex 0.42 without having to wrap it inside your own Docker image
+
+Additional features:
+ - Automate your Cortex AWS cluster from Python.
+ - Prevent accidental charges by auto-removing deployments that exceed their timeout.
+ - Execute operations: deploy, delete, get, get all.
+ - Stream remote logs into the local log with the thread name set to the API name.
+ - Supported Cortex version: 0.40.0 (see `requirements.txt`).
+
+Here is [a video about the package (the Cortex 0.33 version, before the big changes)](https://youtu.be/aU95dBAspr0?t=510).
+
+## How Does It Work?
+
+After implementing your predictor module in a folder (see `example/dummy_dir`),
+you can deploy it to your Cortex cluster,
+and execute a prediction via a POST request.
+
+Here is [a video of the demo below](https://youtu.be/aU95dBAspr0?t=1261).
+
+### Working Example
+Below is a snippet from [example.py](/example/example.py).
+
+The deployment dict has these additional fields compared to the Cortex docs:
+ - `"project_name": <string>` in the deployment root
+   - the name of the project; the zipped source code is uploaded to the S3 path `<project_name>/<api_name>.zip`
+ - `predictor_path`: the module containing your predictor (`cls.__module__`), e.g. `predictors.my_predictor`
+ - optional `predictor_class_name`: the `cls.__name__` of your predictor class; the default is `PythonPredictor`
+ - `"config": <dict>` in the `container` specification
+   - a config dict that will be saved to `predictor_config.json` in the root of the deployment dir
+   - this file can then be loaded in `main.py` and passed to the `PythonPredictor` constructor, as can be seen in `resources/main.py`
+
+
+```python
+from requests import post
+
+# `cortex` is the client instance from cortex_serving_client (see example.py).
+deployment = {
+    "name": "dummy-a",
+    "project_name": "test",
+    "kind": "RealtimeAPI",
+    "predictor_path": "dummy_predictor",
+    "pod": {
+        "containers": [
+            {
+                "config": {"geo": "cz", "model_name": "CoolModel", "version": "000000-000000"},
+                "env": {
+                    "SECRET_ENV_VAR": "secret",
+                },
+                "compute": {"cpu": "200m", "mem": "0.1Gi"},
+            }
+        ],
+    },
+}
+
+# Deploy temporarily: the API is deleted when the block exits or the timeout passes.
+with cortex.deploy_temporarily(
+    deployment,
+    deploy_dir="dummy_dir",
+    api_timeout_sec=30 * 60,
+    verbose=True,
+) as get_result:
+    # Predict via a POST request to the deployed endpoint.
+    response = post(get_result.endpoint, json={}).json()
+```
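+
+For reference, a minimal predictor module matching `predictor_path` above might look like the sketch below. The `__init__(self, config)` / `predict(self, payload)` shape is assumed from the Cortex <= 0.34 `PythonPredictor` convention this README refers to; the file and key names are illustrative.
+
+```python
+# dummy_predictor.py -- a hypothetical minimal predictor; the signatures follow
+# the Cortex <= 0.34 PythonPredictor convention assumed by this README.
+class PythonPredictor:
+    def __init__(self, config):
+        # `config` is the dict from the container's "config" key, loaded from
+        # predictor_config.json by main.py.
+        self.model_name = config["model_name"]
+
+    def predict(self, payload):
+        # A real predictor would run the model here; we just echo the input.
+        return {"model": self.model_name, "input": payload}
+```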
+
+### Required changes for projects using Cortex version <= 0.34
+ - optionally add `main.py` to the root of your Cortex deployment folder
+ - if there is no `main.py` in the root of the deployment folder, the default one from `resources/main.py` is used
+ - restructure your deployment dict to look like the one in `example.py`
+
+### Garbage API Collection
+Garbage API collection auto-removes forgotten APIs to reduce costs.
+
+Each deployed API has a timeout period, configured during deployment, after which it definitely should no longer exist in the cluster.
+This timeout is stored in a Postgres database table.
+The Cortex client periodically checks the currently deployed APIs and removes expired ones from the cluster.
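+
+A minimal sketch of that collection pass, assuming a hypothetical `api_timeout` table with `api_name` and `timeout_at` columns and a `delete_api` callback (the package's real schema and internals may differ):
+
+```python
+# Hypothetical GC pass; table, column, and helper names are illustrative.
+import psycopg2
+
+def collect_expired_apis(dsn, deployed_api_names, delete_api):
+    """Remove every deployed API whose stored timeout has passed."""
+    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
+        cur.execute("SELECT api_name FROM api_timeout WHERE timeout_at < now()")
+        expired = {row[0] for row in cur.fetchall()}
+    for api_name in expired & set(deployed_api_names):
+        delete_api(api_name)  # e.g. a thin wrapper over `cortex delete`
+```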
+
+### Can You Rollback?
+How do you deal with a new model failing in production?
+Can you return to your model's previous working version?
+There is no generic solution that fits everybody,
+but you can implement the one best suiting your needs using this Python API for Cortex.
+Having a plan B is a good idea.
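+
+As one illustrative plan B (not this package's built-in API): keep the last known-good deployment dict and redeploy it when the new API fails a smoke test.
+
+```python
+# Illustrative rollback helper; `deploy` and `healthcheck` stand for whatever
+# deployment call and smoke test your application uses.
+def deploy_with_rollback(deploy, healthcheck, new_deployment, last_good_deployment):
+    deploy(new_deployment)
+    if not healthcheck(new_deployment["name"]):
+        # The new model failed: return to the previous working version.
+        deploy(last_good_deployment)
+```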
+
+## Our Use Case
+We use this project to automate deployment to auto-scalable AWS instances.
+The deployment management is part of application-specific Flask applications,
+which call Python-Cortex-Serving-Client to command an environment-dedicated Cortex cluster.
+
+In cases where multiple environments share a single cluster, a shared Postgres instance for the Cortex database is required.
+
+Read more about our use case in [Cortex Client release blog post](https://medium.com/@aiteamglami/serve-your-ml-models-in-aws-using-python-9908a4127a13).
+Or you can [watch a video about our use case](https://youtu.be/aU95dBAspr0?t=1164).
+
+## Get Started
+This tutorial will help you get [the basic example](/example/example.py) running in under 15 minutes.
+
+### Prerequisites
+- Linux OS
+- Docker
+- Postgres
+
+
+### Setup Database
+Follow the instructions below to configure a local database,
+or configure a cluster database
+and re-configure the DB connection in [the example script](/example/example.py).
+
+```bash
+# Become the postgres superuser and open a psql session.
+sudo su postgres;
+psql postgres postgres;
+
+# Inside psql: create the database and role used by the example.
+create database cortex_test;
+create role cortex_test login password 'cortex_test';
+grant all privileges on database cortex_test to cortex_test;
+```
+
+You may also need to configure local network access:
+```bash
+vi /etc/postgresql/11/main/pg_hba.conf
+# Change the matching line into the following to allow localhost network access:
+# host all all 127.0.0.1/32 trust
+
+sudo systemctl restart postgresql;
+```
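+
+To verify the setup, a quick connectivity check with plain `psycopg2`, using the credentials created above:
+
+```python
+# Sanity-check the cortex_test database created above.
+import psycopg2
+
+conn = psycopg2.connect(
+    host="127.0.0.1",
+    dbname="cortex_test",
+    user="cortex_test",
+    password="cortex_test",
+)
+with conn, conn.cursor() as cur:
+    cur.execute("SELECT 1")
+    assert cur.fetchone() == (1,)
+conn.close()
+```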
+
+### Install Cortex
+The supported [Cortex.dev](https://cortex.dev) version is installed as a Python dependency through `requirements.txt`.
+
+Cortex requires having [Docker](https://docs.docker.com/get-docker/) installed on your machine.
+
+### Deploy Your First Model
+
+The deployment and prediction example resides in [the example script](/example/example.py).
+Make sure you have created a virtual environment and installed the requirements from `requirements.txt` and `requirements-dev.txt`
+before executing it.
+
+## Contact Us
+[Submit an issue](https://github.com/glami/cortex-serving-client/issues) or a pull request if you have any problems or need an extra feature.
+
+
+
+
+%package help
+Summary: Development documents and examples for cortex-serving-client
+Provides: python3-cortex-serving-client-doc
+%description help
+# Cortex Serving Client
+
+<img src="https://raw.githubusercontent.com/glami/cortex-serving-client/master/cortex-serving-client-logo-2.svg" alt="Cortex Serving Client" style="max-width: 200px">
+
+
+## Warning: Cortex Labs joined Databricks and maintenance may end. Consider migrating to other tools.
+
+Cortex Serving Client makes Python serving automation simple.
+It is a Python wrapper around [Cortex's command-line client](https://cortex.dev) that adds garbage API collection.
+Cortex now has an [official Python client](https://pypi.org/project/cortex/) ([source](https://github.com/cortexlabs/cortex/blob/e22985f8516fe8db930aaecd05269da99d5e7a93/pkg/cortex/client/cortex/client.py)), but this project offers advanced features (garbage collection, temporary deployment, timeouts) not present in the vanilla client.
+
+The main feature of this package is that you can use it on top of a codebase created for Cortex version <= 0.34, meaning that:
+ - the deployment directory is automatically zipped and uploaded to an S3 bucket
+ - we prepared a base Docker image that downloads this zipped code, unzips it, and runs it in a Uvicorn worker
+ - as a result, you can deploy your `PythonPredictor` using Cortex 0.42 without having to wrap it inside your own Docker image
+
+Additional features:
+ - Automate your Cortex AWS cluster from Python.
+ - Prevent accidental charges by auto-removing deployments that exceed their timeout.
+ - Execute operations: deploy, delete, get, get all.
+ - Stream remote logs into the local log with the thread name set to the API name.
+ - Supported Cortex version: 0.40.0 (see `requirements.txt`).
+
+Here is [a video about the package (the Cortex 0.33 version, before the big changes)](https://youtu.be/aU95dBAspr0?t=510).
+
+## How Does It Work?
+
+After implementing your predictor module in a folder (see `example/dummy_dir`),
+you can deploy it to your Cortex cluster,
+and execute a prediction via a POST request.
+
+Here is [a video of the demo below](https://youtu.be/aU95dBAspr0?t=1261).
+
+### Working Example
+Below is a snippet from [example.py](/example/example.py).
+
+The deployment dict has these additional fields compared to the Cortex docs:
+ - `"project_name": <string>` in the deployment root
+   - the name of the project; the zipped source code is uploaded to the S3 path `<project_name>/<api_name>.zip`
+ - `predictor_path`: the module containing your predictor (`cls.__module__`), e.g. `predictors.my_predictor`
+ - optional `predictor_class_name`: the `cls.__name__` of your predictor class; the default is `PythonPredictor`
+ - `"config": <dict>` in the `container` specification
+   - a config dict that will be saved to `predictor_config.json` in the root of the deployment dir
+   - this file can then be loaded in `main.py` and passed to the `PythonPredictor` constructor, as can be seen in `resources/main.py`
+
+
+```python
+from requests import post
+
+# `cortex` is the client instance from cortex_serving_client (see example.py).
+deployment = {
+    "name": "dummy-a",
+    "project_name": "test",
+    "kind": "RealtimeAPI",
+    "predictor_path": "dummy_predictor",
+    "pod": {
+        "containers": [
+            {
+                "config": {"geo": "cz", "model_name": "CoolModel", "version": "000000-000000"},
+                "env": {
+                    "SECRET_ENV_VAR": "secret",
+                },
+                "compute": {"cpu": "200m", "mem": "0.1Gi"},
+            }
+        ],
+    },
+}
+
+# Deploy temporarily: the API is deleted when the block exits or the timeout passes.
+with cortex.deploy_temporarily(
+    deployment,
+    deploy_dir="dummy_dir",
+    api_timeout_sec=30 * 60,
+    verbose=True,
+) as get_result:
+    # Predict via a POST request to the deployed endpoint.
+    response = post(get_result.endpoint, json={}).json()
+```
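+
+For reference, a minimal predictor module matching `predictor_path` above might look like the sketch below. The `__init__(self, config)` / `predict(self, payload)` shape is assumed from the Cortex <= 0.34 `PythonPredictor` convention this README refers to; the file and key names are illustrative.
+
+```python
+# dummy_predictor.py -- a hypothetical minimal predictor; the signatures follow
+# the Cortex <= 0.34 PythonPredictor convention assumed by this README.
+class PythonPredictor:
+    def __init__(self, config):
+        # `config` is the dict from the container's "config" key, loaded from
+        # predictor_config.json by main.py.
+        self.model_name = config["model_name"]
+
+    def predict(self, payload):
+        # A real predictor would run the model here; we just echo the input.
+        return {"model": self.model_name, "input": payload}
+```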
+
+### Required changes for projects using Cortex version <= 0.34
+ - optionally add `main.py` to the root of your Cortex deployment folder
+ - if there is no `main.py` in the root of the deployment folder, the default one from `resources/main.py` is used
+ - restructure your deployment dict to look like the one in `example.py`
+
+### Garbage API Collection
+Garbage API collection auto-removes forgotten APIs to reduce costs.
+
+Each deployed API has a timeout period, configured during deployment, after which it definitely should no longer exist in the cluster.
+This timeout is stored in a Postgres database table.
+The Cortex client periodically checks the currently deployed APIs and removes expired ones from the cluster.
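+
+A minimal sketch of that collection pass, assuming a hypothetical `api_timeout` table with `api_name` and `timeout_at` columns and a `delete_api` callback (the package's real schema and internals may differ):
+
+```python
+# Hypothetical GC pass; table, column, and helper names are illustrative.
+import psycopg2
+
+def collect_expired_apis(dsn, deployed_api_names, delete_api):
+    """Remove every deployed API whose stored timeout has passed."""
+    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
+        cur.execute("SELECT api_name FROM api_timeout WHERE timeout_at < now()")
+        expired = {row[0] for row in cur.fetchall()}
+    for api_name in expired & set(deployed_api_names):
+        delete_api(api_name)  # e.g. a thin wrapper over `cortex delete`
+```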
+
+### Can You Rollback?
+How do you deal with a new model failing in production?
+Can you return to your model's previous working version?
+There is no generic solution that fits everybody,
+but you can implement the one best suiting your needs using this Python API for Cortex.
+Having a plan B is a good idea.
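+
+As one illustrative plan B (not this package's built-in API): keep the last known-good deployment dict and redeploy it when the new API fails a smoke test.
+
+```python
+# Illustrative rollback helper; `deploy` and `healthcheck` stand for whatever
+# deployment call and smoke test your application uses.
+def deploy_with_rollback(deploy, healthcheck, new_deployment, last_good_deployment):
+    deploy(new_deployment)
+    if not healthcheck(new_deployment["name"]):
+        # The new model failed: return to the previous working version.
+        deploy(last_good_deployment)
+```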
+
+## Our Use Case
+We use this project to automate deployment to auto-scalable AWS instances.
+The deployment management is part of application-specific Flask applications,
+which call Python-Cortex-Serving-Client to command an environment-dedicated Cortex cluster.
+
+In cases where multiple environments share a single cluster, a shared Postgres instance for the Cortex database is required.
+
+Read more about our use case in [Cortex Client release blog post](https://medium.com/@aiteamglami/serve-your-ml-models-in-aws-using-python-9908a4127a13).
+Or you can [watch a video about our use case](https://youtu.be/aU95dBAspr0?t=1164).
+
+## Get Started
+This tutorial will help you get [the basic example](/example/example.py) running in under 15 minutes.
+
+### Prerequisites
+- Linux OS
+- Docker
+- Postgres
+
+
+### Setup Database
+Follow the instructions below to configure a local database,
+or configure a cluster database
+and re-configure the DB connection in [the example script](/example/example.py).
+
+```bash
+# Become the postgres superuser and open a psql session.
+sudo su postgres;
+psql postgres postgres;
+
+# Inside psql: create the database and role used by the example.
+create database cortex_test;
+create role cortex_test login password 'cortex_test';
+grant all privileges on database cortex_test to cortex_test;
+```
+
+You may also need to configure local network access:
+```bash
+vi /etc/postgresql/11/main/pg_hba.conf
+# Change the matching line into the following to allow localhost network access:
+# host all all 127.0.0.1/32 trust
+
+sudo systemctl restart postgresql;
+```
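+
+To verify the setup, a quick connectivity check with plain `psycopg2`, using the credentials created above:
+
+```python
+# Sanity-check the cortex_test database created above.
+import psycopg2
+
+conn = psycopg2.connect(
+    host="127.0.0.1",
+    dbname="cortex_test",
+    user="cortex_test",
+    password="cortex_test",
+)
+with conn, conn.cursor() as cur:
+    cur.execute("SELECT 1")
+    assert cur.fetchone() == (1,)
+conn.close()
+```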
+
+### Install Cortex
+The supported [Cortex.dev](https://cortex.dev) version is installed as a Python dependency through `requirements.txt`.
+
+Cortex requires having [Docker](https://docs.docker.com/get-docker/) installed on your machine.
+
+### Deploy Your First Model
+
+The deployment and prediction example resides in [the example script](/example/example.py).
+Make sure you have created a virtual environment and installed the requirements from `requirements.txt` and `requirements-dev.txt`
+before executing it.
+
+## Contact Us
+[Submit an issue](https://github.com/glami/cortex-serving-client/issues) or a pull request if you have any problems or need an extra feature.
+
+
+
+
+%prep
+%autosetup -n cortex-serving-client-0.42.1
+
+%build
+%py3_build
+
+%install
+%py3_install
+install -d -m755 %{buildroot}/%{_pkgdocdir}
+if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
+if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
+if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
+if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
+pushd %{buildroot}
+if [ -d usr/lib ]; then
+ find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/lib64 ]; then
+ find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/bin ]; then
+ find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/sbin ]; then
+ find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+touch doclist.lst
+if [ -d usr/share/man ]; then
+ find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
+fi
+popd
+mv %{buildroot}/filelist.lst .
+mv %{buildroot}/doclist.lst .
+
+%files -n python3-cortex-serving-client -f filelist.lst
+%dir %{python3_sitelib}/*
+
+%files help -f doclist.lst
+%{_docdir}/*
+
+%changelog
+* Mon May 15 2023 Python_Bot <Python_Bot@openeuler.org> - 0.42.1-1
+- Package Spec generated
diff --git a/sources b/sources
new file mode 100644
index 0000000..3c2d1bf
--- /dev/null
+++ b/sources
@@ -0,0 +1 @@
+fd3c18f35ffe81ce27a4317e82355025 cortex-serving-client-0.42.1.tar.gz