%global _empty_manifest_terminate_build 0
Name:		python-ServeIt
Version:	0.0.9
Release:	1
Summary:	Machine learning prediction serving
License:	MIT License
URL:		https://github.com/rtlee9/serveit
Source0:	https://mirrors.nju.edu.cn/pypi/web/packages/f6/83/bda15c52f95b802f7da9165d076e6f4d9601a385b278f81fb8a2706072cb/ServeIt-0.0.9.tar.gz
BuildArch:	noarch

Requires:	python3-flask
Requires:	python3-flask-restful
Requires:	python3-meinheld
Requires:	python3-check-manifest
Requires:	python3-coverage

%description
ServeIt lets you serve model predictions and supplementary information
from a RESTful API using your favorite Python ML library in as little as
one line of code:
    from serveit.server import ModelServer
    from sklearn.linear_model import LogisticRegression
    from sklearn.datasets import load_iris
    # fit logistic regression on Iris data
    clf = LogisticRegression()
    data = load_iris()
    clf.fit(data.data, data.target)
    # initialize server with a model and start serving predictions
    ModelServer(clf, clf.predict).serve()
Your new API is now accepting ``POST`` requests at
``localhost:5000/predictions``! Please see the examples directory of
the upstream repository (https://github.com/rtlee9/serveit) for detailed
examples across domains (e.g., regression, image classification),
including live examples.
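Once the server is running, predictions can be requested with any HTTP
client. Below is a minimal sketch using the ``requests`` package; the
exact JSON payload shape is an assumption and depends on the model's
predict function:
    import requests
    # one iris sample: sepal length/width, petal length/width
    payload = [[5.1, 3.5, 1.4, 0.2]]
    response = requests.post('http://localhost:5000/predictions', json=payload)
    print(response.json())  # e.g. [0], the predicted class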
Features
^^^^^^^^
Current ServeIt features include:
1. Model inference serving via RESTful API endpoint
2. Extensible library for inference-time data loading, preprocessing,
   input validation, and postprocessing
3. Supplementary information endpoint creation (see the sketch after
   this list)
4. Automatic JSON serialization of responses
5. Configurable request and response logging (work in progress)
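As a sketch of feature 3, supplementary metadata can be attached before
serving. The ``create_info_endpoint`` call below follows upstream
examples but is an assumption; verify it against the installed API:
    server = ModelServer(clf, clf.predict)
    # assumed API: expose the training feature names via an info endpoint
    server.create_info_endpoint('features', data.feature_names)
    server.serve()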
Supported libraries
^^^^^^^^^^^^^^^^^^^
The following libraries are currently supported:
- scikit-learn
- Keras
- PyTorch
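The ModelServer interface shown above is intended to be
library-agnostic; for example, a trained Keras model could presumably be
served the same way (a sketch, assuming ``model`` is an already-fitted
Keras model with a predict method):
    ModelServer(model, model.predict).serve()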

%package -n python3-ServeIt
Summary:	Machine learning prediction serving
Provides:	python-ServeIt
BuildRequires:	python3-devel
BuildRequires:	python3-setuptools
BuildRequires:	python3-pip
%description -n python3-ServeIt
ServeIt lets you serve model predictions and supplementary information
from a RESTful API using your favorite Python ML library in as little as
one line of code:
    from serveit.server import ModelServer
    from sklearn.linear_model import LogisticRegression
    from sklearn.datasets import load_iris
    # fit logistic regression on Iris data
    clf = LogisticRegression()
    data = load_iris()
    clf.fit(data.data, data.target)
    # initialize server with a model and start serving predictions
    ModelServer(clf, clf.predict).serve()
Your new API is now accepting ``POST`` requests at
``localhost:5000/predictions``! Please see the examples directory of
the upstream repository (https://github.com/rtlee9/serveit) for detailed
examples across domains (e.g., regression, image classification),
including live examples.
Features
^^^^^^^^
Current ServeIt features include:
1. Model inference serving via RESTful API endpoint
2. Extensible library for inference-time data loading, preprocessing,
   input validation, and postprocessing
3. Supplementary information endpoint creation
4. Automatic JSON serialization of responses
5. Configurable request and response logging (work in progress)
Supported libraries
^^^^^^^^^^^^^^^^^^^
The following libraries are currently supported:
- scikit-learn
- Keras
- PyTorch

%package help
Summary:	Development documents and examples for ServeIt
Provides:	python3-ServeIt-doc
%description help
ServeIt lets you serve model predictions and supplementary information
from a RESTful API using your favorite Python ML library in as little as
one line of code:
    from serveit.server import ModelServer
    from sklearn.linear_model import LogisticRegression
    from sklearn.datasets import load_iris
    # fit logistic regression on Iris data
    clf = LogisticRegression()
    data = load_iris()
    clf.fit(data.data, data.target)
    # initialize server with a model and start serving predictions
    ModelServer(clf, clf.predict).serve()
Your new API is now accepting ``POST`` requests at
``localhost:5000/predictions``! Please see the examples directory of
the upstream repository (https://github.com/rtlee9/serveit) for detailed
examples across domains (e.g., regression, image classification),
including live examples.
Features
^^^^^^^^
Current ServeIt features include:
1. Model inference serving via RESTful API endpoint
2. Extensible library for inference-time data loading, preprocessing,
   input validation, and postprocessing
3. Supplementary information endpoint creation
4. Automatic JSON serialization of responses
5. Configurable request and response logging (work in progress)
Supported libraries
^^^^^^^^^^^^^^^^^^^
The following libraries are currently supported:
- scikit-learn
- Keras
- PyTorch

%prep
%autosetup -n ServeIt-%{version}

%build
%py3_build

%install
%py3_install
install -d -m755 %{buildroot}/%{_pkgdocdir}
if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
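# Collect every installed file path into filelist.lst for the -f file list below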
pushd %{buildroot}
if [ -d usr/lib ]; then
	find usr/lib -type f -printf "\"/%h/%f\"\n" >> filelist.lst
fi
if [ -d usr/lib64 ]; then
	find usr/lib64 -type f -printf "\"/%h/%f\"\n" >> filelist.lst
fi
if [ -d usr/bin ]; then
	find usr/bin -type f -printf "\"/%h/%f\"\n" >> filelist.lst
fi
if [ -d usr/sbin ]; then
	find usr/sbin -type f -printf "\"/%h/%f\"\n" >> filelist.lst
fi
touch doclist.lst
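# man pages are compressed to .gz by rpm's brp-compress after install,
# so record the compressed file names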
if [ -d usr/share/man ]; then
	find usr/share/man -type f -printf "\"/%h/%f.gz\"\n" >> doclist.lst
fi
popd
mv %{buildroot}/filelist.lst .
mv %{buildroot}/doclist.lst .

%files -n python3-ServeIt -f filelist.lst
%dir %{python3_sitelib}/*

%files help -f doclist.lst
%{_docdir}/*

%changelog
* Thu Jun 08 2023 Python_Bot <Python_Bot@openeuler.org> - 0.0.9-1
- Package Spec generated