authorCoprDistGit <infra@openeuler.org>2023-05-15 03:18:11 +0000
committerCoprDistGit <infra@openeuler.org>2023-05-15 03:18:11 +0000
commitf411d22e272c5233620e30ad9061c5141308c242 (patch)
treeca525f1b74d044640658a294f412842be9f4c294
parent5c285fcdd0b44a482c74415e4e0e52bbc285ecdf (diff)
automatic import of python-mydatapreprocessing
-rw-r--r--.gitignore1
-rw-r--r--python-mydatapreprocessing.spec532
-rw-r--r--sources1
3 files changed, 534 insertions, 0 deletions
diff --git a/.gitignore b/.gitignore
index e69de29..612e23c 100644
--- a/.gitignore
+++ b/.gitignore
@@ -0,0 +1 @@
+/mydatapreprocessing-3.0.3.tar.gz
diff --git a/python-mydatapreprocessing.spec b/python-mydatapreprocessing.spec
new file mode 100644
index 0000000..86c8c70
--- /dev/null
+++ b/python-mydatapreprocessing.spec
@@ -0,0 +1,532 @@
+%global _empty_manifest_terminate_build 0
+Name: python-mydatapreprocessing
+Version: 3.0.3
+Release: 1
+Summary: Library/framework for making predictions.
+License: MIT
+URL: https://github.com/Malachov/mydatapreprocessing
+Source0: https://mirrors.nju.edu.cn/pypi/web/packages/63/b5/e4b0d97599501bed7d4b2a8340cff59de3caf288326fa39e9df8f1172ace/mydatapreprocessing-3.0.3.tar.gz
+BuildArch: noarch
+
+Requires: python3-mylogging
+Requires: python3-mypythontools
+Requires: python3-numpy
+Requires: python3-pandas
+Requires: python3-requests
+Requires: python3-scipy
+Requires: python3-sklearn
+Requires: python3-typing-extensions
+Requires: python3-wfdb
+Requires: python3-openpyxl
+Requires: python3-pyarrow
+Requires: python3-pyodbc
+Requires: python3-sqlalchemy
+Requires: python3-tables
+Requires: python3-xlrd
+
+%description
+# mydatapreprocessing
+
+[![Python versions](https://img.shields.io/pypi/pyversions/mydatapreprocessing.svg)](https://pypi.python.org/pypi/mydatapreprocessing/) [![PyPI version](https://badge.fury.io/py/mydatapreprocessing.svg)](https://badge.fury.io/py/mydatapreprocessing) [![Downloads](https://pepy.tech/badge/mydatapreprocessing)](https://pepy.tech/project/mydatapreprocessing) [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/Malachov/mydatapreprocessing/HEAD?filepath=demo.ipynb) [![Language grade: Python](https://img.shields.io/lgtm/grade/python/g/Malachov/mydatapreprocessing.svg?logo=lgtm&logoWidth=18)](https://lgtm.com/projects/g/Malachov/mydatapreprocessing/context:python) [![Documentation Status](https://readthedocs.org/projects/mydatapreprocessing/badge/?version=latest)](https://mydatapreprocessing.readthedocs.io/?badge=latest) [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) [![codecov](https://codecov.io/gh/Malachov/mydatapreprocessing/branch/master/graph/badge.svg)](https://codecov.io/gh/Malachov/mydatapreprocessing)
+
+Load data from a web link or local file (json, csv, Excel, parquet, h5...), consolidate it (resample data, clean NaN values, do string embedding), derive new features via column derivations, and do preprocessing like
+standardization or smoothing. If you want to see how the functions work, check their docstrings - working examples with printed results are also in tests - visual.py.
+
+## Links
+
+[Repo on GitHub](https://github.com/Malachov/mydatapreprocessing)
+
+[Official readthedocs documentation](https://mydatapreprocessing.readthedocs.io)
+
+
+## Installation
+
+Python >=3.6 (Python 2 is not supported).
+
+Install it with:
+
+```console
+pip install mydatapreprocessing
+```
+
+Some libraries are only needed for specific data inputs, so not every user will need them. If you want to be sure all of them are installed, you can request an extra like this:
+
+```console
+pip install mydatapreprocessing[datatypes]
+```
+
+Available extras are ["all", "datasets", "datatypes"].
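+
+For example, to install everything at once (assuming the `all` extra aggregates the other extras):
+
+```console
+pip install mydatapreprocessing[all]
+```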
+
+
+## Examples
+
+You can try the live [Jupyter demo on Binder](https://mybinder.org/v2/gh/Malachov/mydatapreprocessing/HEAD?filepath=demo.ipynb).
+
+<!--phmdoctest-setup-->
+```python
+import mydatapreprocessing as mdp
+import pandas as pd
+import numpy as np
+```
+
+### Load data
+
+You can use:
+
+- Python formats (numpy.ndarray, pd.DataFrame, list, tuple, dict)
+- local files
+- web URLs
+
+Supported file formats are:
+
+- csv
+- xlsx and xls
+- json
+- parquet
+- h5
+
+You can load multiple data sources at once by passing them in a list.
+
+The syntax is always the same.
+
+<!--phmdoctest-label test_load_data-->
+<!--phmdoctest-share-names-->
+```python
+data = mdp.load_data.load_data(
+ "https://raw.githubusercontent.com/jbrownlee/Datasets/master/daily-min-temperatures.csv",
+)
+# data2 = mdp.load_data.load_data([PATH_TO_FILE.csv, PATH_TO_FILE2.csv])
+```
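+
+Loading from an in-memory Python object uses the same call; a minimal sketch (the dict content here is illustrative):
+
+```python
+# load_data also accepts Python objects such as dicts, lists, tuples, or arrays
+data_from_dict = mdp.load_data.load_data({"column_1": [1, 2, 3, 4, 5]})
+```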
+
+### Consolidation
+If you want to use the data in machine learning models, you will probably want to remove NaN values, convert string columns to numeric where possible, do encoding or keep only numeric data, and resample.
+
+Consolidation works with pandas DataFrames, as column names matter here.
+
+There are many functions, but the main one, `consolidate_data`, pipelines the others.
+
+
+<!--phmdoctest-label test_consolidation-->
+<!--phmdoctest-share-names-->
+```python
+consolidation_config = mdp.consolidation.consolidation_config.default_consolidation_config.do.copy()
+consolidation_config.datetime.datetime_column = 'Date'
+consolidation_config.resample.resample = 'M'
+consolidation_config.resample.resample_function = "mean"
+consolidation_config.dtype = 'float32'
+
+consolidated = mdp.consolidation.consolidate_data(data, consolidation_config)
+print(consolidated.head())
+```
+
+### Feature engineering
+
+Functions in `feature_engineering` and `preprocessing` expect data in the shape (*n_samples*, *n_features*).
+*n_samples* is usually much bigger, so data is transposed in `consolidate_data` if necessary.
+
+In a config, you can use the shorter dict-update syntax, as all value names are unique (see `config.do.update` in the preprocessing example below).
+
+Create new columns that can be for example used as another machine learning model input.
+
+```python
+import mydatapreprocessing.feature_engineering as mdpf
+import mydatapreprocessing as mdp
+
+data = pd.DataFrame(
+ [mdp.datasets.sin(n=30), mdp.datasets.ramp(n=30)]
+).T
+
+extended = mdpf.add_derived_columns(data, differences=True, rolling_means=10)
+print(extended.columns)
+print(f"\nIt has fewer rows than the input: {len(extended)}")
+```
+
+### Preprocessing
+
+Preprocessing can be used on a pandas DataFrame as well as on a numpy array. Column names are not important, as it's just a matrix with a defined dtype.
+
+There are many functions, but the main one, `preprocess_data`, pipelines the others. Preprocessed data can be converted back with `preprocess_data_inverse`.
+
+
+<!--phmdoctest-label test_preprocess_data-->
+<!--phmdoctest-share-names-->
+```python
+from mydatapreprocessing import preprocessing as mdpp
+
+df = pd.DataFrame(np.array([range(5), range(20, 25), np.random.randn(5)]).astype("float32").T)
+df.iloc[2, 0] = 500
+
+config = mdpp.preprocessing_config.default_preprocessing_config.do.copy()
+config.do.update({"remove_outliers": None, "difference_transform": True, "standardize": "standardize"})
+data_preprocessed, inverse_config = mdpp.preprocess_data(df.values, config)
+inverse_config.difference_transform = df.iloc[0, 0]
+data_preprocessed_inverse = mdpp.preprocess_data_inverse(
+ data_preprocessed[:, 0], inverse_config
+)
+```
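+
+To inspect the round trip, you can print the original column next to the inverse-transformed result; a small sketch (values depend on the random input above):
+
+```python
+# The inverse transform aims to recover the original first column
+print(df.values[:, 0])
+print(data_preprocessed_inverse)
+```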
+
+
+
+
+%package -n python3-mydatapreprocessing
+Summary: Library/framework for making predictions.
+Provides: python-mydatapreprocessing
+BuildRequires: python3-devel
+BuildRequires: python3-setuptools
+BuildRequires: python3-pip
+%description -n python3-mydatapreprocessing
+# mydatapreprocessing
+
+[![Python versions](https://img.shields.io/pypi/pyversions/mydatapreprocessing.svg)](https://pypi.python.org/pypi/mydatapreprocessing/) [![PyPI version](https://badge.fury.io/py/mydatapreprocessing.svg)](https://badge.fury.io/py/mydatapreprocessing) [![Downloads](https://pepy.tech/badge/mydatapreprocessing)](https://pepy.tech/project/mydatapreprocessing) [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/Malachov/mydatapreprocessing/HEAD?filepath=demo.ipynb) [![Language grade: Python](https://img.shields.io/lgtm/grade/python/g/Malachov/mydatapreprocessing.svg?logo=lgtm&logoWidth=18)](https://lgtm.com/projects/g/Malachov/mydatapreprocessing/context:python) [![Documentation Status](https://readthedocs.org/projects/mydatapreprocessing/badge/?version=latest)](https://mydatapreprocessing.readthedocs.io/?badge=latest) [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) [![codecov](https://codecov.io/gh/Malachov/mydatapreprocessing/branch/master/graph/badge.svg)](https://codecov.io/gh/Malachov/mydatapreprocessing)
+
+Load data from a web link or local file (json, csv, Excel, parquet, h5...), consolidate it (resample data, clean NaN values, do string embedding), derive new features via column derivations, and do preprocessing like
+standardization or smoothing. If you want to see how the functions work, check their docstrings - working examples with printed results are also in tests - visual.py.
+
+## Links
+
+[Repo on GitHub](https://github.com/Malachov/mydatapreprocessing)
+
+[Official readthedocs documentation](https://mydatapreprocessing.readthedocs.io)
+
+
+## Installation
+
+Python >=3.6 (Python 2 is not supported).
+
+Install it with:
+
+```console
+pip install mydatapreprocessing
+```
+
+Some libraries are only needed for specific data inputs, so not every user will need them. If you want to be sure all of them are installed, you can request an extra like this:
+
+```console
+pip install mydatapreprocessing[datatypes]
+```
+
+Available extras are ["all", "datasets", "datatypes"].
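+
+For example, to install everything at once (assuming the `all` extra aggregates the other extras):
+
+```console
+pip install mydatapreprocessing[all]
+```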
+
+
+## Examples
+
+You can try the live [Jupyter demo on Binder](https://mybinder.org/v2/gh/Malachov/mydatapreprocessing/HEAD?filepath=demo.ipynb).
+
+<!--phmdoctest-setup-->
+```python
+import mydatapreprocessing as mdp
+import pandas as pd
+import numpy as np
+```
+
+### Load data
+
+You can use:
+
+- Python formats (numpy.ndarray, pd.DataFrame, list, tuple, dict)
+- local files
+- web URLs
+
+Supported file formats are:
+
+- csv
+- xlsx and xls
+- json
+- parquet
+- h5
+
+You can load multiple data sources at once by passing them in a list.
+
+The syntax is always the same.
+
+<!--phmdoctest-label test_load_data-->
+<!--phmdoctest-share-names-->
+```python
+data = mdp.load_data.load_data(
+ "https://raw.githubusercontent.com/jbrownlee/Datasets/master/daily-min-temperatures.csv",
+)
+# data2 = mdp.load_data.load_data([PATH_TO_FILE.csv, PATH_TO_FILE2.csv])
+```
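+
+Loading from an in-memory Python object uses the same call; a minimal sketch (the dict content here is illustrative):
+
+```python
+# load_data also accepts Python objects such as dicts, lists, tuples, or arrays
+data_from_dict = mdp.load_data.load_data({"column_1": [1, 2, 3, 4, 5]})
+```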
+
+### Consolidation
+If you want to use the data in machine learning models, you will probably want to remove NaN values, convert string columns to numeric where possible, do encoding or keep only numeric data, and resample.
+
+Consolidation works with pandas DataFrames, as column names matter here.
+
+There are many functions, but the main one, `consolidate_data`, pipelines the others.
+
+
+<!--phmdoctest-label test_consolidation-->
+<!--phmdoctest-share-names-->
+```python
+consolidation_config = mdp.consolidation.consolidation_config.default_consolidation_config.do.copy()
+consolidation_config.datetime.datetime_column = 'Date'
+consolidation_config.resample.resample = 'M'
+consolidation_config.resample.resample_function = "mean"
+consolidation_config.dtype = 'float32'
+
+consolidated = mdp.consolidation.consolidate_data(data, consolidation_config)
+print(consolidated.head())
+```
+
+### Feature engineering
+
+Functions in `feature_engineering` and `preprocessing` expect data in the shape (*n_samples*, *n_features*).
+*n_samples* is usually much bigger, so data is transposed in `consolidate_data` if necessary.
+
+In a config, you can use the shorter dict-update syntax, as all value names are unique (see `config.do.update` in the preprocessing example below).
+
+Create new columns that can be for example used as another machine learning model input.
+
+```python
+import mydatapreprocessing.feature_engineering as mdpf
+import mydatapreprocessing as mdp
+
+data = pd.DataFrame(
+ [mdp.datasets.sin(n=30), mdp.datasets.ramp(n=30)]
+).T
+
+extended = mdpf.add_derived_columns(data, differences=True, rolling_means=10)
+print(extended.columns)
+print(f"\nIt has fewer rows than the input: {len(extended)}")
+```
+
+### Preprocessing
+
+Preprocessing can be used on a pandas DataFrame as well as on a numpy array. Column names are not important, as it's just a matrix with a defined dtype.
+
+There are many functions, but the main one, `preprocess_data`, pipelines the others. Preprocessed data can be converted back with `preprocess_data_inverse`.
+
+
+<!--phmdoctest-label test_preprocess_data-->
+<!--phmdoctest-share-names-->
+```python
+from mydatapreprocessing import preprocessing as mdpp
+
+df = pd.DataFrame(np.array([range(5), range(20, 25), np.random.randn(5)]).astype("float32").T)
+df.iloc[2, 0] = 500
+
+config = mdpp.preprocessing_config.default_preprocessing_config.do.copy()
+config.do.update({"remove_outliers": None, "difference_transform": True, "standardize": "standardize"})
+data_preprocessed, inverse_config = mdpp.preprocess_data(df.values, config)
+inverse_config.difference_transform = df.iloc[0, 0]
+data_preprocessed_inverse = mdpp.preprocess_data_inverse(
+ data_preprocessed[:, 0], inverse_config
+)
+```
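+
+To inspect the round trip, you can print the original column next to the inverse-transformed result; a small sketch (values depend on the random input above):
+
+```python
+# The inverse transform aims to recover the original first column
+print(df.values[:, 0])
+print(data_preprocessed_inverse)
+```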
+
+
+
+
+%package help
+Summary: Development documents and examples for mydatapreprocessing
+Provides: python3-mydatapreprocessing-doc
+%description help
+# mydatapreprocessing
+
+[![Python versions](https://img.shields.io/pypi/pyversions/mydatapreprocessing.svg)](https://pypi.python.org/pypi/mydatapreprocessing/) [![PyPI version](https://badge.fury.io/py/mydatapreprocessing.svg)](https://badge.fury.io/py/mydatapreprocessing) [![Downloads](https://pepy.tech/badge/mydatapreprocessing)](https://pepy.tech/project/mydatapreprocessing) [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/Malachov/mydatapreprocessing/HEAD?filepath=demo.ipynb) [![Language grade: Python](https://img.shields.io/lgtm/grade/python/g/Malachov/mydatapreprocessing.svg?logo=lgtm&logoWidth=18)](https://lgtm.com/projects/g/Malachov/mydatapreprocessing/context:python) [![Documentation Status](https://readthedocs.org/projects/mydatapreprocessing/badge/?version=latest)](https://mydatapreprocessing.readthedocs.io/?badge=latest) [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) [![codecov](https://codecov.io/gh/Malachov/mydatapreprocessing/branch/master/graph/badge.svg)](https://codecov.io/gh/Malachov/mydatapreprocessing)
+
+Load data from a web link or local file (json, csv, Excel, parquet, h5...), consolidate it (resample data, clean NaN values, do string embedding), derive new features via column derivations, and do preprocessing like
+standardization or smoothing. If you want to see how the functions work, check their docstrings - working examples with printed results are also in tests - visual.py.
+
+## Links
+
+[Repo on GitHub](https://github.com/Malachov/mydatapreprocessing)
+
+[Official readthedocs documentation](https://mydatapreprocessing.readthedocs.io)
+
+
+## Installation
+
+Python >=3.6 (Python 2 is not supported).
+
+Install it with:
+
+```console
+pip install mydatapreprocessing
+```
+
+Some libraries are only needed for specific data inputs, so not every user will need them. If you want to be sure all of them are installed, you can request an extra like this:
+
+```console
+pip install mydatapreprocessing[datatypes]
+```
+
+Available extras are ["all", "datasets", "datatypes"].
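+
+For example, to install everything at once (assuming the `all` extra aggregates the other extras):
+
+```console
+pip install mydatapreprocessing[all]
+```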
+
+
+## Examples
+
+You can try the live [Jupyter demo on Binder](https://mybinder.org/v2/gh/Malachov/mydatapreprocessing/HEAD?filepath=demo.ipynb).
+
+<!--phmdoctest-setup-->
+```python
+import mydatapreprocessing as mdp
+import pandas as pd
+import numpy as np
+```
+
+### Load data
+
+You can use:
+
+- Python formats (numpy.ndarray, pd.DataFrame, list, tuple, dict)
+- local files
+- web URLs
+
+Supported file formats are:
+
+- csv
+- xlsx and xls
+- json
+- parquet
+- h5
+
+You can load multiple data sources at once by passing them in a list.
+
+The syntax is always the same.
+
+<!--phmdoctest-label test_load_data-->
+<!--phmdoctest-share-names-->
+```python
+data = mdp.load_data.load_data(
+ "https://raw.githubusercontent.com/jbrownlee/Datasets/master/daily-min-temperatures.csv",
+)
+# data2 = mdp.load_data.load_data([PATH_TO_FILE.csv, PATH_TO_FILE2.csv])
+```
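+
+Loading from an in-memory Python object uses the same call; a minimal sketch (the dict content here is illustrative):
+
+```python
+# load_data also accepts Python objects such as dicts, lists, tuples, or arrays
+data_from_dict = mdp.load_data.load_data({"column_1": [1, 2, 3, 4, 5]})
+```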
+
+### Consolidation
+If you want to use the data in machine learning models, you will probably want to remove NaN values, convert string columns to numeric where possible, do encoding or keep only numeric data, and resample.
+
+Consolidation works with pandas DataFrames, as column names matter here.
+
+There are many functions, but the main one, `consolidate_data`, pipelines the others.
+
+
+<!--phmdoctest-label test_consolidation-->
+<!--phmdoctest-share-names-->
+```python
+consolidation_config = mdp.consolidation.consolidation_config.default_consolidation_config.do.copy()
+consolidation_config.datetime.datetime_column = 'Date'
+consolidation_config.resample.resample = 'M'
+consolidation_config.resample.resample_function = "mean"
+consolidation_config.dtype = 'float32'
+
+consolidated = mdp.consolidation.consolidate_data(data, consolidation_config)
+print(consolidated.head())
+```
+
+### Feature engineering
+
+Functions in `feature_engineering` and `preprocessing` expect data in the shape (*n_samples*, *n_features*).
+*n_samples* is usually much bigger, so data is transposed in `consolidate_data` if necessary.
+
+In a config, you can use the shorter dict-update syntax, as all value names are unique (see `config.do.update` in the preprocessing example below).
+
+Create new columns that can be for example used as another machine learning model input.
+
+```python
+import mydatapreprocessing.feature_engineering as mdpf
+import mydatapreprocessing as mdp
+
+data = pd.DataFrame(
+ [mdp.datasets.sin(n=30), mdp.datasets.ramp(n=30)]
+).T
+
+extended = mdpf.add_derived_columns(data, differences=True, rolling_means=10)
+print(extended.columns)
+print(f"\nIt has fewer rows than the input: {len(extended)}")
+```
+
+### Preprocessing
+
+Preprocessing can be used on a pandas DataFrame as well as on a numpy array. Column names are not important, as it's just a matrix with a defined dtype.
+
+There are many functions, but the main one, `preprocess_data`, pipelines the others. Preprocessed data can be converted back with `preprocess_data_inverse`.
+
+
+<!--phmdoctest-label test_preprocess_data-->
+<!--phmdoctest-share-names-->
+```python
+from mydatapreprocessing import preprocessing as mdpp
+
+df = pd.DataFrame(np.array([range(5), range(20, 25), np.random.randn(5)]).astype("float32").T)
+df.iloc[2, 0] = 500
+
+config = mdpp.preprocessing_config.default_preprocessing_config.do.copy()
+config.do.update({"remove_outliers": None, "difference_transform": True, "standardize": "standardize"})
+data_preprocessed, inverse_config = mdpp.preprocess_data(df.values, config)
+inverse_config.difference_transform = df.iloc[0, 0]
+data_preprocessed_inverse = mdpp.preprocess_data_inverse(
+ data_preprocessed[:, 0], inverse_config
+)
+```
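+
+To inspect the round trip, you can print the original column next to the inverse-transformed result; a small sketch (values depend on the random input above):
+
+```python
+# The inverse transform aims to recover the original first column
+print(df.values[:, 0])
+print(data_preprocessed_inverse)
+```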
+
+
+
+
+%prep
+%autosetup -n mydatapreprocessing-3.0.3
+
+%build
+%py3_build
+
+%install
+%py3_install
+install -d -m755 %{buildroot}/%{_pkgdocdir}
+if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
+if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
+if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
+if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
+pushd %{buildroot}
+if [ -d usr/lib ]; then
+ find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/lib64 ]; then
+ find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/bin ]; then
+ find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/sbin ]; then
+ find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+touch doclist.lst
+if [ -d usr/share/man ]; then
+ find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
+fi
+popd
+mv %{buildroot}/filelist.lst .
+mv %{buildroot}/doclist.lst .
+
+%files -n python3-mydatapreprocessing -f filelist.lst
+%dir %{python3_sitelib}/*
+
+%files help -f doclist.lst
+%{_docdir}/*
+
+%changelog
+* Mon May 15 2023 Python_Bot <Python_Bot@openeuler.org> - 3.0.3-1
+- Package Spec generated
diff --git a/sources b/sources
new file mode 100644
index 0000000..8105f23
--- /dev/null
+++ b/sources
@@ -0,0 +1 @@
+5e42bac4feb41e131dafbdde2cb6f31e mydatapreprocessing-3.0.3.tar.gz