path: root/python-pypmml-spark.spec
author    CoprDistGit <infra@openeuler.org>  2023-05-05 13:05:11 +0000
committer CoprDistGit <infra@openeuler.org>  2023-05-05 13:05:11 +0000
commit0b4a618319dcc2c1ae5ebc8cb8a426cca73092e3 (patch)
treec4871530291e5f0f3ea4fe04319a4dd0c8286a86 /python-pypmml-spark.spec
parent7e4c6d1472df3113855e6de30e483a9ad4a6abf0 (diff)
automatic import of python-pypmml-spark (openeuler20.03)
Diffstat (limited to 'python-pypmml-spark.spec')
-rw-r--r--  python-pypmml-spark.spec  201
1 file changed, 201 insertions, 0 deletions
diff --git a/python-pypmml-spark.spec b/python-pypmml-spark.spec
new file mode 100644
index 0000000..4621a4b
--- /dev/null
+++ b/python-pypmml-spark.spec
@@ -0,0 +1,201 @@
+%global _empty_manifest_terminate_build 0
+Name: python-pypmml-spark
+Version: 0.9.16
+Release: 1
+Summary: Python PMML scoring library for PySpark as SparkML Transformer
+License: Apache License 2.0
+URL: https://github.com/autodeployai/pypmml-spark
+Source0: https://mirrors.nju.edu.cn/pypi/web/packages/93/18/a32277826a25ed1f8acc429d80e034d00c763066919118179433c738ea68/pypmml-spark-0.9.16.tar.gz
+BuildArch: noarch
+
+
+%description
+| Module | Supported PySpark version |
+| --- | --- |
+| [pypmml-spark](https://github.com/autodeployai/pypmml-spark/tree/master) | PySpark >= 3.0.0 |
+| [pypmml-spark2](https://github.com/autodeployai/pypmml-spark/tree/spark-2.x) | PySpark >= 2.4.0, < 3.0.0 |
+## Installation
+```bash
+pip install pypmml-spark
+```
+Or install the latest version from github:
+```bash
+pip install --upgrade git+https://github.com/autodeployai/pypmml-spark.git
+```
+After installation, Spark needs to be able to find the jars shipped in the `pypmml_spark.jars` package. There are several ways to do that:
+1. The easiest way is to run the script `link_pmml4s_jars_into_spark.py` that is shipped with `pypmml-spark`:
+ ```bash
+ link_pmml4s_jars_into_spark.py
+ ```
+2. Alternatively, use Spark config options to specify the dependent jars, e.g. `--jars`, or `spark.driver.extraClassPath` and `spark.executor.extraClassPath`. See the [Spark configuration](http://spark.apache.org/docs/latest/configuration.html) docs for details about those parameters.
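+ A minimal sketch of option 2 (the entry-point name `your_script.py` and the jar directory layout are assumptions for illustration; substitute your own script and the actual install location):
+ ```bash
+ # Resolve the directory of jars bundled with the pypmml_spark package
+ # (assumed to live next to the package's __init__.py).
+ PYPMML_JARS=$(python3 -c "import os, pypmml_spark; print(os.path.join(os.path.dirname(pypmml_spark.__file__), 'jars'))")
+ # --jars takes a comma-separated list, so join the glob with commas:
+ spark-submit --jars "$(ls "$PYPMML_JARS"/*.jar | paste -sd, -)" your_script.py
+ # Equivalent configuration-property form:
+ spark-submit \
+   --conf spark.driver.extraClassPath="$PYPMML_JARS/*" \
+   --conf spark.executor.extraClassPath="$PYPMML_JARS/*" \
+   your_script.py
+ ```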
+## Usage
+1. Load a model from various sources, e.g. a filename, a string, or an array of bytes.
+ ```python
+ from pypmml_spark import ScoreModel
+ # The model is from http://dmg.org/pmml/pmml_examples/KNIME_PMML_4.1_Examples/single_iris_dectree.xml
+ model = ScoreModel.fromFile('single_iris_dectree.xml')
+ ```
+2. Call `transform(dataset)` to run a batch score against an input dataset.
+ ```python
+ # The data is from http://dmg.org/pmml/pmml_examples/Iris.csv
+ df = spark.read.csv('Iris.csv', header='true')
+ score_df = model.transform(df)
+ ```
+## Use PMML in Scala or Java
+See the [PMML4S](https://github.com/autodeployai/pmml4s) project. _PMML4S_ is a PMML scoring library for Scala. It provides both Scala and Java Evaluator APIs for PMML.
+## Use PMML in Python
+See the [PyPMML](https://github.com/autodeployai/pypmml) project. _PyPMML_ is a Python PMML scoring library; it is effectively the Python API of PMML4S.
+## Use PMML in Spark
+See the [PMML4S-Spark](https://github.com/autodeployai/pmml4s-spark) project. _PMML4S-Spark_ is a PMML scoring library for Spark as SparkML Transformer.
+## Deploy PMML as REST API
+See the [AI-Serving](https://github.com/autodeployai/ai-serving) project. _AI-Serving_ serves AI/ML models in the open standard formats PMML and ONNX with both HTTP (REST API) and gRPC endpoints.
+## Deploy and Manage AI/ML models at scale
+See the [DaaS](https://www.autodeploy.ai/) system that deploys AI/ML models in production at scale on Kubernetes.
+## Support
+If you have any questions about the _PyPMML-Spark_ library, please open an issue on this repository.
+Feedback and contributions to the project, of any kind, are always welcome.
+## License
+_PyPMML-Spark_ is licensed under [Apache 2.0](http://www.apache.org/licenses/LICENSE-2.0).
+
+%package -n python3-pypmml-spark
+Summary: Python PMML scoring library for PySpark as SparkML Transformer
+Provides: python-pypmml-spark
+BuildRequires: python3-devel
+BuildRequires: python3-setuptools
+BuildRequires: python3-pip
+%description -n python3-pypmml-spark
+| Module | Supported PySpark version |
+| --- | --- |
+| [pypmml-spark](https://github.com/autodeployai/pypmml-spark/tree/master) | PySpark >= 3.0.0 |
+| [pypmml-spark2](https://github.com/autodeployai/pypmml-spark/tree/spark-2.x) | PySpark >= 2.4.0, < 3.0.0 |
+## Installation
+```bash
+pip install pypmml-spark
+```
+Or install the latest version from github:
+```bash
+pip install --upgrade git+https://github.com/autodeployai/pypmml-spark.git
+```
+After installation, Spark needs to be able to find the jars shipped in the `pypmml_spark.jars` package. There are several ways to do that:
+1. The easiest way is to run the script `link_pmml4s_jars_into_spark.py` that is shipped with `pypmml-spark`:
+ ```bash
+ link_pmml4s_jars_into_spark.py
+ ```
+2. Alternatively, use Spark config options to specify the dependent jars, e.g. `--jars`, or `spark.driver.extraClassPath` and `spark.executor.extraClassPath`. See the [Spark configuration](http://spark.apache.org/docs/latest/configuration.html) docs for details about those parameters.
+## Usage
+1. Load a model from various sources, e.g. a filename, a string, or an array of bytes.
+ ```python
+ from pypmml_spark import ScoreModel
+ # The model is from http://dmg.org/pmml/pmml_examples/KNIME_PMML_4.1_Examples/single_iris_dectree.xml
+ model = ScoreModel.fromFile('single_iris_dectree.xml')
+ ```
+2. Call `transform(dataset)` to run a batch score against an input dataset.
+ ```python
+ # The data is from http://dmg.org/pmml/pmml_examples/Iris.csv
+ df = spark.read.csv('Iris.csv', header='true')
+ score_df = model.transform(df)
+ ```
+## Use PMML in Scala or Java
+See the [PMML4S](https://github.com/autodeployai/pmml4s) project. _PMML4S_ is a PMML scoring library for Scala. It provides both Scala and Java Evaluator APIs for PMML.
+## Use PMML in Python
+See the [PyPMML](https://github.com/autodeployai/pypmml) project. _PyPMML_ is a Python PMML scoring library; it is effectively the Python API of PMML4S.
+## Use PMML in Spark
+See the [PMML4S-Spark](https://github.com/autodeployai/pmml4s-spark) project. _PMML4S-Spark_ is a PMML scoring library for Spark as SparkML Transformer.
+## Deploy PMML as REST API
+See the [AI-Serving](https://github.com/autodeployai/ai-serving) project. _AI-Serving_ serves AI/ML models in the open standard formats PMML and ONNX with both HTTP (REST API) and gRPC endpoints.
+## Deploy and Manage AI/ML models at scale
+See the [DaaS](https://www.autodeploy.ai/) system that deploys AI/ML models in production at scale on Kubernetes.
+## Support
+If you have any questions about the _PyPMML-Spark_ library, please open an issue on this repository.
+Feedback and contributions to the project, of any kind, are always welcome.
+## License
+_PyPMML-Spark_ is licensed under [Apache 2.0](http://www.apache.org/licenses/LICENSE-2.0).
+
+%package help
+Summary: Development documents and examples for pypmml-spark
+Provides: python3-pypmml-spark-doc
+%description help
+| Module | Supported PySpark version |
+| --- | --- |
+| [pypmml-spark](https://github.com/autodeployai/pypmml-spark/tree/master) | PySpark >= 3.0.0 |
+| [pypmml-spark2](https://github.com/autodeployai/pypmml-spark/tree/spark-2.x) | PySpark >= 2.4.0, < 3.0.0 |
+## Installation
+```bash
+pip install pypmml-spark
+```
+Or install the latest version from github:
+```bash
+pip install --upgrade git+https://github.com/autodeployai/pypmml-spark.git
+```
+After installation, Spark needs to be able to find the jars shipped in the `pypmml_spark.jars` package. There are several ways to do that:
+1. The easiest way is to run the script `link_pmml4s_jars_into_spark.py` that is shipped with `pypmml-spark`:
+ ```bash
+ link_pmml4s_jars_into_spark.py
+ ```
+2. Alternatively, use Spark config options to specify the dependent jars, e.g. `--jars`, or `spark.driver.extraClassPath` and `spark.executor.extraClassPath`. See the [Spark configuration](http://spark.apache.org/docs/latest/configuration.html) docs for details about those parameters.
+## Usage
+1. Load a model from various sources, e.g. a filename, a string, or an array of bytes.
+ ```python
+ from pypmml_spark import ScoreModel
+ # The model is from http://dmg.org/pmml/pmml_examples/KNIME_PMML_4.1_Examples/single_iris_dectree.xml
+ model = ScoreModel.fromFile('single_iris_dectree.xml')
+ ```
+2. Call `transform(dataset)` to run a batch score against an input dataset.
+ ```python
+ # The data is from http://dmg.org/pmml/pmml_examples/Iris.csv
+ df = spark.read.csv('Iris.csv', header='true')
+ score_df = model.transform(df)
+ ```
+## Use PMML in Scala or Java
+See the [PMML4S](https://github.com/autodeployai/pmml4s) project. _PMML4S_ is a PMML scoring library for Scala. It provides both Scala and Java Evaluator APIs for PMML.
+## Use PMML in Python
+See the [PyPMML](https://github.com/autodeployai/pypmml) project. _PyPMML_ is a Python PMML scoring library; it is effectively the Python API of PMML4S.
+## Use PMML in Spark
+See the [PMML4S-Spark](https://github.com/autodeployai/pmml4s-spark) project. _PMML4S-Spark_ is a PMML scoring library for Spark as SparkML Transformer.
+## Deploy PMML as REST API
+See the [AI-Serving](https://github.com/autodeployai/ai-serving) project. _AI-Serving_ serves AI/ML models in the open standard formats PMML and ONNX with both HTTP (REST API) and gRPC endpoints.
+## Deploy and Manage AI/ML models at scale
+See the [DaaS](https://www.autodeploy.ai/) system that deploys AI/ML models in production at scale on Kubernetes.
+## Support
+If you have any questions about the _PyPMML-Spark_ library, please open an issue on this repository.
+Feedback and contributions to the project, of any kind, are always welcome.
+## License
+_PyPMML-Spark_ is licensed under [Apache 2.0](http://www.apache.org/licenses/LICENSE-2.0).
+
+%prep
+%autosetup -n pypmml-spark-0.9.16
+
+%build
+%py3_build
+
+%install
+%py3_install
+install -d -m755 %{buildroot}/%{_pkgdocdir}
+if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
+if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
+if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
+if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
+pushd %{buildroot}
+if [ -d usr/lib ]; then
+ find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/lib64 ]; then
+ find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/bin ]; then
+ find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/sbin ]; then
+ find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+touch doclist.lst
+if [ -d usr/share/man ]; then
+ find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
+fi
+popd
+mv %{buildroot}/filelist.lst .
+mv %{buildroot}/doclist.lst .
+
+%files -n python3-pypmml-spark -f filelist.lst
+%dir %{python3_sitelib}/*
+
+%files help -f doclist.lst
+%{_docdir}/*
+
+%changelog
+* Fri May 05 2023 Python_Bot <Python_Bot@openeuler.org> - 0.9.16-1
+- Package Spec generated