author     CoprDistGit <infra@openeuler.org>  2023-05-15 03:34:23 +0000
committer  CoprDistGit <infra@openeuler.org>  2023-05-15 03:34:23 +0000
commit     34b142232bc881deb48503a6d509c6890e06d102 (patch)
tree       7467046053e141f501ca80bf2648d962993bbe59
parent     9e4cb1fa225edf17e1c8e0432f9d4f2be82a6ab9 (diff)
automatic import of python-mldock
-rw-r--r--  .gitignore          |   1
-rw-r--r--  python-mldock.spec  | 542
-rw-r--r--  sources             |   1
3 files changed, 544 insertions, 0 deletions
diff --git a/.gitignore b/.gitignore
index e69de29..00624d0 100644
--- a/.gitignore
+++ b/.gitignore
@@ -0,0 +1 @@
+/mldock-0.9.2.tar.gz
diff --git a/python-mldock.spec b/python-mldock.spec
new file mode 100644
index 0000000..685b2af
--- /dev/null
+++ b/python-mldock.spec
@@ -0,0 +1,542 @@
+%global _empty_manifest_terminate_build 0
+Name: python-mldock
+Version: 0.9.2
+Release: 1
+Summary: A docker tool that helps put machine learning in places that empower ml developers
+License: MIT License
+URL: https://github.com/mldock/mldock
+Source0: https://mirrors.nju.edu.cn/pypi/web/packages/04/a7/290fd243f44dc171733c89685bb79b6da0497f303350c30068be6156ae64/mldock-0.9.2.tar.gz
+BuildArch: noarch
+
+Requires: python3-appdirs
+Requires: python3-future
+Requires: python3-environs
+Requires: python3-PyGithub
+Requires: python3-boto3
+Requires: python3-s3fs
+Requires: python3-pyarrow
+Requires: python3-click
+Requires: python3-clickclick
+Requires: python3-docker
+Requires: python3-requests
+Requires: python3-google-auth
+Requires: python3-halo
+Requires: python3-PyYAML
+Requires: python3-pygrok
+Requires: python3-gcsfs
+Requires: python3-wheel
+Requires: python3-google-cloud-storage
+Requires: python3-google-api-python-client
+Requires: python3-pandas
+Requires: python3-numpy
+Requires: python3-protobuf
+Requires: python3-Pillow
+Requires: python3-sagemaker-training
+Requires: python3-responses
+Requires: python3-dataclasses
+
+%description
+# MLDOCK
+A docker tool that helps put machine learning in places that empower ml developers
+
+![PyPI](https://img.shields.io/pypi/v/mldock)
+[![CI](https://github.com/mldock/mldock/actions/workflows/ci.yml/badge.svg)](https://github.com/mldock/mldock/actions/workflows/ci.yml)
+[![Upload Python Package](https://github.com/mldock/mldock/actions/workflows/python-publish.yml/badge.svg)](https://github.com/mldock/mldock/actions/workflows/python-publish.yml)
+
+![mldock header](https://raw.githubusercontent.com/mldock/mldock/main/images/mldock-twitter-header.png)
+
+## What is MLDOCK?
+MLDOCK combines the power of docker with built-in conveniences, framed around the core machine learning tasks involved in putting models into production.
+
+As a tool, MLDOCK's goals are to:
+- Provide tooling to improve the ML development workflow. ✅
+- Enable portability of ML code between platforms and vendors (Sagemaker, AI Platform, Kubernetes, other container services). ✅
+- Lower the barrier to entry by developing containers from templates. ✅
+- Be ready out of the box, using templates to get you started quickly. Bring only your code. ✅
+- Work with any ML framework, in any orchestrator and on any cloud (as long as it integrates with docker). ✅
+
+What it is not:
+- Service orchestrator ❌
+- Training Scheduler ❌
+- Hyperparameter tuner ❌
+- Experiment Tracking ❌
+
+Inspired by [Sagify](https://github.com/Kenza-AI/sagify), [Sagemaker Training Toolkit](https://github.com/aws/sagemaker-training-toolkit) and [Amazon Sagemaker](https://aws.amazon.com/sagemaker/).
+
+## Getting Started
+
+## Set up your environment
+
+1. (Optional) Use a virtual environment to manage dependencies.
+2. Install `dotenv` to easily configure your environment.
+
+```bash
+pip install --user python-dotenv[cli]
+```
+note: dotenv lets you configure your environment through a `.env` file. MLDOCK reads ENVIRONMENT VARIABLES to find your `DOCKER_HOST`, `DOCKERHUB` credentials and even `AWS/GCP` credentials.
+
+3. Create a `.env` file with the following:
+
+``` .env
+
+# for windows and if you are using WSL1
+DOCKER_HOST=tcp://127.0.0.1
+
+# for WSL2 and linux (this is the default and should work out of the box,
+# but set DOCKER_HOST explicitly for consistency)
+
+DOCKER_HOST=unix://var/run/docker.sock
+```
+
+note: To switch environments, just run your commands through dotenv as follows:
+
+```
+dotenv -f "/path/to/.env" run mldock local build --dir <my-project-path>
+```
+
+## Overview of MLDOCK command line
+
+The MLDOCK command line utility provides a set of commands to streamline the machine learning container image development process.
+The commands are grouped into 3 functionality sets, namely:
+
+| Command Group | Description |
+| ------------- |:-------------:|
+| container | A set of commands that support creating, initializing and updating containers. Also provides commands for creating new MLDOCK-supported templates from previously built container images. |
+| local | A set of commands to use during the development phase: creating your trainer and prediction scripts, and debugging the execution of scripts. |
+| registry | A set of tools to help you push, pull and interact with image registries. |
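+
+A quick way to explore the subcommands inside each group is the `--help` flag. This is a minimal sketch assuming the CLI exposes the usual click-style help output; the exact subcommands may differ between versions:
+
+```bash
+# Print the subcommands and options available in each command group
+# (assumes standard click-generated help; adjust to your installed version)
+mldock container --help
+mldock local --help
+mldock registry --help
+```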
+
+
+
+## Create your first container image project
+1. Install MLDOCK
+
+pip is the only supported package manager at present. It is recommended that you use an environment manager; either virtualenv or conda will work.
+
+```bash
+pip install mldock[cli]
+```
+
+2. Set up local config for the mldock cli
+
+This command sets up the mldock cli with some nice-to-have defaults. It may prompt you for some setup.
+
+```bash
+mldock configure init
+```
+
+3. Initialize or create your first container
+
+You will see a few prompts to set up your container.
+
+```bash
+mldock project init --dir my_ml_container
+```
+note:
+- Just hit Return/Enter to accept all the defaults.
+
+4. Build your container image locally
+
+```bash
+mldock local build --dir my_ml_container
+```
+
+5. Run your training locally
+
+```bash
+mldock local train --dir my_ml_container
+```
+
+6. Deploy your model locally
+
+```bash
+mldock local deploy --dir my_ml_container
+```
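+
+Once the container is up, you can smoke-test the local endpoint. The sketch below assumes a Sagemaker-style serving convention (port 8080 with `/ping` and `/invocations` routes); the port, routes and payload are assumptions and may differ in your template:
+
+```bash
+# Health check (hypothetical route, adjust to your template)
+curl http://localhost:8080/ping
+
+# Send a prediction request (hypothetical route and payload)
+curl -X POST http://localhost:8080/invocations \
+  -H "Content-Type: application/json" \
+  -d '{"instances": [[1.0, 2.0, 3.0]]}'
+```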
+
+## Putting your model in the cloud
+
+#### Push to Dockerhub
+
+1. Add the following to `.env`
+
+```
+DOCKERHUB_USERNAME=<your/user/name>
+DOCKERHUB_PASSWORD=<your/dockerhub/password>
+DOCKERHUB_REGISTRY=https://index.docker.io/v1/
+DOCKERHUB_REPO=<your/user/repo/name>
+```
+
+2. Push your container to dockerhub
+
+```bash
+mldock registry push --dir my_ml_container --provider dockerhub --build
+```
+
+note: The flags let you adjust configuration directly in the command.
+`--build` builds the image before pushing; this is required initially, since the dockerhub registry will prefix your container name. `--provider` tells MLDOCK to authenticate against dockerhub and push the container there.
+
+**hint** In addition to `DockerHub`, both `AWS ECR` & `GCP GCR` are also supported.
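+
+For the other providers, credentials are likewise picked up from the environment. Here is a minimal sketch of the additional `.env` entries, assuming the standard AWS (boto3) and GCP (google-auth) variable names; the exact variables your setup needs may differ:
+
+```
+# AWS ECR (standard boto3 environment variables)
+AWS_ACCESS_KEY_ID=<your/aws/access/key/id>
+AWS_SECRET_ACCESS_KEY=<your/aws/secret/access/key>
+AWS_DEFAULT_REGION=<your/aws/region>
+
+# GCP GCR (standard google-auth service-account variable)
+GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-key.json
+```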
+
+## Helpful tips
+
+- docker compose sees my files as directories in a mounted volume - *use the "./path/to/file" format* | https://stackoverflow.com/questions/42248198/how-to-mount-a-single-file-in-a-volume
+- symlinks from my container have broken permissions in WSL2 | https://github.com/microsoft/WSL/issues/1475
+
+
+
+
+%package -n python3-mldock
+Summary: A docker tool that helps put machine learning in places that empower ml developers
+Provides: python-mldock
+BuildRequires: python3-devel
+BuildRequires: python3-setuptools
+BuildRequires: python3-pip
+%description -n python3-mldock
+# MLDOCK
+A docker tool that helps put machine learning in places that empower ml developers
+
+![PyPI](https://img.shields.io/pypi/v/mldock)
+[![CI](https://github.com/mldock/mldock/actions/workflows/ci.yml/badge.svg)](https://github.com/mldock/mldock/actions/workflows/ci.yml)
+[![Upload Python Package](https://github.com/mldock/mldock/actions/workflows/python-publish.yml/badge.svg)](https://github.com/mldock/mldock/actions/workflows/python-publish.yml)
+
+![mldock header](https://raw.githubusercontent.com/mldock/mldock/main/images/mldock-twitter-header.png)
+
+## What is MLDOCK?
+MLDOCK combines the power of docker with built-in conveniences, framed around the core machine learning tasks involved in putting models into production.
+
+As a tool, MLDOCK's goals are to:
+- Provide tooling to improve the ML development workflow. ✅
+- Enable portability of ML code between platforms and vendors (Sagemaker, AI Platform, Kubernetes, other container services). ✅
+- Lower the barrier to entry by developing containers from templates. ✅
+- Be ready out of the box, using templates to get you started quickly. Bring only your code. ✅
+- Work with any ML framework, in any orchestrator and on any cloud (as long as it integrates with docker). ✅
+
+What it is not:
+- Service orchestrator ❌
+- Training Scheduler ❌
+- Hyperparameter tuner ❌
+- Experiment Tracking ❌
+
+Inspired by [Sagify](https://github.com/Kenza-AI/sagify), [Sagemaker Training Toolkit](https://github.com/aws/sagemaker-training-toolkit) and [Amazon Sagemaker](https://aws.amazon.com/sagemaker/).
+
+## Getting Started
+
+## Set up your environment
+
+1. (Optional) Use a virtual environment to manage dependencies.
+2. Install `dotenv` to easily configure your environment.
+
+```bash
+pip install --user python-dotenv[cli]
+```
+note: dotenv lets you configure your environment through a `.env` file. MLDOCK reads ENVIRONMENT VARIABLES to find your `DOCKER_HOST`, `DOCKERHUB` credentials and even `AWS/GCP` credentials.
+
+3. Create a `.env` file with the following:
+
+``` .env
+
+# for windows and if you are using WSL1
+DOCKER_HOST=tcp://127.0.0.1
+
+# for WSL2 and linux (this is the default and should work out of the box,
+# but set DOCKER_HOST explicitly for consistency)
+
+DOCKER_HOST=unix://var/run/docker.sock
+```
+
+note: To switch environments, just run your commands through dotenv as follows:
+
+```
+dotenv -f "/path/to/.env" run mldock local build --dir <my-project-path>
+```
+
+## Overview of MLDOCK command line
+
+The MLDOCK command line utility provides a set of commands to streamline the machine learning container image development process.
+The commands are grouped into 3 functionality sets, namely:
+
+| Command Group | Description |
+| ------------- |:-------------:|
+| container | A set of commands that support creating, initializing and updating containers. Also provides commands for creating new MLDOCK-supported templates from previously built container images. |
+| local | A set of commands to use during the development phase: creating your trainer and prediction scripts, and debugging the execution of scripts. |
+| registry | A set of tools to help you push, pull and interact with image registries. |
+
+
+
+## Create your first container image project
+1. Install MLDOCK
+
+pip is the only supported package manager at present. It is recommended that you use an environment manager; either virtualenv or conda will work.
+
+```bash
+pip install mldock[cli]
+```
+
+2. Set up local config for the mldock cli
+
+This command sets up the mldock cli with some nice-to-have defaults. It may prompt you for some setup.
+
+```bash
+mldock configure init
+```
+
+3. Initialize or create your first container
+
+You will see a few prompts to set up your container.
+
+```bash
+mldock project init --dir my_ml_container
+```
+note:
+- Just hit Return/Enter to accept all the defaults.
+
+4. Build your container image locally
+
+```bash
+mldock local build --dir my_ml_container
+```
+
+5. Run your training locally
+
+```bash
+mldock local train --dir my_ml_container
+```
+
+6. Deploy your model locally
+
+```bash
+mldock local deploy --dir my_ml_container
+```
+
+## Putting your model in the cloud
+
+#### Push to Dockerhub
+
+1. Add the following to `.env`
+
+```
+DOCKERHUB_USERNAME=<your/user/name>
+DOCKERHUB_PASSWORD=<your/dockerhub/password>
+DOCKERHUB_REGISTRY=https://index.docker.io/v1/
+DOCKERHUB_REPO=<your/user/repo/name>
+```
+
+2. Push your container to dockerhub
+
+```bash
+mldock registry push --dir my_ml_container --provider dockerhub --build
+```
+
+note: The flags let you adjust configuration directly in the command.
+`--build` builds the image before pushing; this is required initially, since the dockerhub registry will prefix your container name. `--provider` tells MLDOCK to authenticate against dockerhub and push the container there.
+
+**hint** In addition to `DockerHub`, both `AWS ECR` & `GCP GCR` are also supported.
+
+## Helpful tips
+
+- docker compose sees my files as directories in a mounted volume - *use the "./path/to/file" format* | https://stackoverflow.com/questions/42248198/how-to-mount-a-single-file-in-a-volume
+- symlinks from my container have broken permissions in WSL2 | https://github.com/microsoft/WSL/issues/1475
+
+
+
+
+%package help
+Summary: Development documents and examples for mldock
+Provides: python3-mldock-doc
+%description help
+# MLDOCK
+A docker tool that helps put machine learning in places that empower ml developers
+
+![PyPI](https://img.shields.io/pypi/v/mldock)
+[![CI](https://github.com/mldock/mldock/actions/workflows/ci.yml/badge.svg)](https://github.com/mldock/mldock/actions/workflows/ci.yml)
+[![Upload Python Package](https://github.com/mldock/mldock/actions/workflows/python-publish.yml/badge.svg)](https://github.com/mldock/mldock/actions/workflows/python-publish.yml)
+
+![mldock header](https://raw.githubusercontent.com/mldock/mldock/main/images/mldock-twitter-header.png)
+
+## What is MLDOCK?
+MLDOCK combines the power of docker with built-in conveniences, framed around the core machine learning tasks involved in putting models into production.
+
+As a tool, MLDOCK's goals are to:
+- Provide tooling to improve the ML development workflow. ✅
+- Enable portability of ML code between platforms and vendors (Sagemaker, AI Platform, Kubernetes, other container services). ✅
+- Lower the barrier to entry by developing containers from templates. ✅
+- Be ready out of the box, using templates to get you started quickly. Bring only your code. ✅
+- Work with any ML framework, in any orchestrator and on any cloud (as long as it integrates with docker). ✅
+
+What it is not:
+- Service orchestrator ❌
+- Training Scheduler ❌
+- Hyperparameter tuner ❌
+- Experiment Tracking ❌
+
+Inspired by [Sagify](https://github.com/Kenza-AI/sagify), [Sagemaker Training Toolkit](https://github.com/aws/sagemaker-training-toolkit) and [Amazon Sagemaker](https://aws.amazon.com/sagemaker/).
+
+## Getting Started
+
+## Set up your environment
+
+1. (Optional) Use a virtual environment to manage dependencies.
+2. Install `dotenv` to easily configure your environment.
+
+```bash
+pip install --user python-dotenv[cli]
+```
+note: dotenv lets you configure your environment through a `.env` file. MLDOCK reads ENVIRONMENT VARIABLES to find your `DOCKER_HOST`, `DOCKERHUB` credentials and even `AWS/GCP` credentials.
+
+3. Create a `.env` file with the following:
+
+``` .env
+
+# for windows and if you are using WSL1
+DOCKER_HOST=tcp://127.0.0.1
+
+# for WSL2 and linux (this is the default and should work out of the box,
+# but set DOCKER_HOST explicitly for consistency)
+
+DOCKER_HOST=unix://var/run/docker.sock
+```
+
+note: To switch environments, just run your commands through dotenv as follows:
+
+```
+dotenv -f "/path/to/.env" run mldock local build --dir <my-project-path>
+```
+
+## Overview of MLDOCK command line
+
+The MLDOCK command line utility provides a set of commands to streamline the machine learning container image development process.
+The commands are grouped into 3 functionality sets, namely:
+
+| Command Group | Description |
+| ------------- |:-------------:|
+| container | A set of commands that support creating, initializing and updating containers. Also provides commands for creating new MLDOCK-supported templates from previously built container images. |
+| local | A set of commands to use during the development phase: creating your trainer and prediction scripts, and debugging the execution of scripts. |
+| registry | A set of tools to help you push, pull and interact with image registries. |
+
+
+
+## Create your first container image project
+1. Install MLDOCK
+
+pip is the only supported package manager at present. It is recommended that you use an environment manager; either virtualenv or conda will work.
+
+```bash
+pip install mldock[cli]
+```
+
+2. Set up local config for the mldock cli
+
+This command sets up the mldock cli with some nice-to-have defaults. It may prompt you for some setup.
+
+```bash
+mldock configure init
+```
+
+3. Initialize or create your first container
+
+You will see a few prompts to set up your container.
+
+```bash
+mldock project init --dir my_ml_container
+```
+note:
+- Just hit Return/Enter to accept all the defaults.
+
+4. Build your container image locally
+
+```bash
+mldock local build --dir my_ml_container
+```
+
+5. Run your training locally
+
+```bash
+mldock local train --dir my_ml_container
+```
+
+6. Deploy your model locally
+
+```bash
+mldock local deploy --dir my_ml_container
+```
+
+## Putting your model in the cloud
+
+#### Push to Dockerhub
+
+1. Add the following to `.env`
+
+```
+DOCKERHUB_USERNAME=<your/user/name>
+DOCKERHUB_PASSWORD=<your/dockerhub/password>
+DOCKERHUB_REGISTRY=https://index.docker.io/v1/
+DOCKERHUB_REPO=<your/user/repo/name>
+```
+
+2. Push your container to dockerhub
+
+```bash
+mldock registry push --dir my_ml_container --provider dockerhub --build
+```
+
+note: The flags let you adjust configuration directly in the command.
+`--build` builds the image before pushing; this is required initially, since the dockerhub registry will prefix your container name. `--provider` tells MLDOCK to authenticate against dockerhub and push the container there.
+
+**hint** In addition to `DockerHub`, both `AWS ECR` & `GCP GCR` are also supported.
+
+## Helpful tips
+
+- docker compose sees my files as directories in a mounted volume - *use the "./path/to/file" format* | https://stackoverflow.com/questions/42248198/how-to-mount-a-single-file-in-a-volume
+- symlinks from my container have broken permissions in WSL2 | https://github.com/microsoft/WSL/issues/1475
+
+
+
+
+%prep
+%autosetup -n mldock-0.9.2
+
+%build
+%py3_build
+
+%install
+%py3_install
+install -d -m755 %{buildroot}/%{_pkgdocdir}
+if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
+if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
+if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
+if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
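+# Collect every installed file into filelist.lst (and man pages into doclist.lst)
+# so the %files sections below can consume the generated lists.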
+pushd %{buildroot}
+if [ -d usr/lib ]; then
+ find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/lib64 ]; then
+ find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/bin ]; then
+ find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/sbin ]; then
+ find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+touch doclist.lst
+if [ -d usr/share/man ]; then
+ find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
+fi
+popd
+mv %{buildroot}/filelist.lst .
+mv %{buildroot}/doclist.lst .
+
+%files -n python3-mldock -f filelist.lst
+%dir %{python3_sitelib}/*
+
+%files help -f doclist.lst
+%{_docdir}/*
+
+%changelog
+* Mon May 15 2023 Python_Bot <Python_Bot@openeuler.org> - 0.9.2-1
+- Package Spec generated
diff --git a/sources b/sources
new file mode 100644
index 0000000..113f24f
--- /dev/null
+++ b/sources
@@ -0,0 +1 @@
+ae8d71fe0344e120314a360000e225d6 mldock-0.9.2.tar.gz