| author | CoprDistGit <infra@openeuler.org> | 2023-05-15 06:05:27 +0000 |
|---|---|---|
| committer | CoprDistGit <infra@openeuler.org> | 2023-05-15 06:05:27 +0000 |
| commit | bd8d6ab4f8f8c87ed497802f3a48c4ab72da48e3 (patch) | |
| tree | f2fcfe7ed470917369a6b2dc0a9c2ff7a18107a8 | |
| parent | 8e6fc2b1d5fd688a7d8563275b5751131c40c493 (diff) | |
automatic import of python-mllytics
| -rw-r--r-- | .gitignore | 1 |
|---|---|---|
| -rw-r--r-- | python-mllytics.spec | 212 |
| -rw-r--r-- | sources | 1 |

3 files changed, 214 insertions, 0 deletions
@@ -0,0 +1 @@
+/MLLytics-0.2.2.tar.gz
diff --git a/python-mllytics.spec b/python-mllytics.spec
new file mode 100644
index 0000000..8b02192
--- /dev/null
+++ b/python-mllytics.spec
@@ -0,0 +1,212 @@
+%global _empty_manifest_terminate_build 0
+Name: python-MLLytics
+Version: 0.2.2
+Release: 1
+Summary: A library of tools for easier evaluation of ML models.
+License: MIT
+URL: https://github.com/scottclay/MLLytics
+Source0: https://mirrors.nju.edu.cn/pypi/web/packages/5f/f2/5a26529eb02ab005060644781f7f6b28717cc1f16e5c62cbe3a95bfd0fbc/MLLytics-0.2.2.tar.gz
+BuildArch: noarch
+
+Requires: python3-numpy
+Requires: python3-matplotlib
+Requires: python3-seaborn
+Requires: python3-pandas
+Requires: python3-scikit-learn
+
+%description
+[](https://github.com/scottclay/MLLytics/actions/workflows/python-publish.yml)
+
+# MLLytics
+
+## Installation instructions
+```pip install MLLytics```
+or
+```python setup.py install```
+or
+```conda env create -f environment.yml```
+
+## Future
+### Improvements and cleanup
+* Comment all functions and classes
+* Add type hinting to all functions and classes (https://mypy.readthedocs.io/en/latest/cheat_sheet_py3.html)
+* Scoring functions
+* More output stats in overviews
+* Update reliability plot https://machinelearningmastery.com/calibrated-classification-model-in-scikit-learn/
+* Tests
+* Switch from my metrics to sklearn metrics where it makes sense, e.g.
+```fpr, tpr, thresholds = roc_curve(y[test], probas_[:, 1])```
+and more general macro/micro average metrics from: https://scikit-learn.org/stable/modules/generated/sklearn.metrics.recall_score.html#sklearn.metrics.recall_score
+* Additional metrics (sensitivity, specificity, precision, negative predictive value, FPR, FNR, false discovery rate, accuracy, F1 score)
+
+### Cosmetic
+* Fix size of confusion matrix
+* Check works with matplotlib 3
+* Tidy up legends and annotation text on plots
+* Joy plots
+* Brier score for calibration plot
+* Tidy up cross-validation and plots (also repeated cross-validation)
+* Acc-thresholds graph
+
+### Recently completed
+* ~Allow figure size and font sizes to be passed into plotting functions~
+* ~Example guides for each function in jupyter notebooks~
+* ~MultiClassMetrics class to inherit from ClassMetrics and share common functions~
+* ~REGRESSION~
+
+## Contributing Authors
+* Scott Clay
+* David Sullivan
+
+
+
+
+%package -n python3-MLLytics
+Summary: A library of tools for easier evaluation of ML models.
+Provides: python-MLLytics
+BuildRequires: python3-devel
+BuildRequires: python3-setuptools
+BuildRequires: python3-pip
+%description -n python3-MLLytics
+[](https://github.com/scottclay/MLLytics/actions/workflows/python-publish.yml)
+
+# MLLytics
+
+## Installation instructions
+```pip install MLLytics```
+or
+```python setup.py install```
+or
+```conda env create -f environment.yml```
+
+## Future
+### Improvements and cleanup
+* Comment all functions and classes
+* Add type hinting to all functions and classes (https://mypy.readthedocs.io/en/latest/cheat_sheet_py3.html)
+* Scoring functions
+* More output stats in overviews
+* Update reliability plot https://machinelearningmastery.com/calibrated-classification-model-in-scikit-learn/
+* Tests
+* Switch from my metrics to sklearn metrics where it makes sense, e.g.
+```fpr, tpr, thresholds = roc_curve(y[test], probas_[:, 1])```
+and more general macro/micro average metrics from: https://scikit-learn.org/stable/modules/generated/sklearn.metrics.recall_score.html#sklearn.metrics.recall_score
+* Additional metrics (sensitivity, specificity, precision, negative predictive value, FPR, FNR, false discovery rate, accuracy, F1 score)
+
+### Cosmetic
+* Fix size of confusion matrix
+* Check works with matplotlib 3
+* Tidy up legends and annotation text on plots
+* Joy plots
+* Brier score for calibration plot
+* Tidy up cross-validation and plots (also repeated cross-validation)
+* Acc-thresholds graph
+
+### Recently completed
+* ~Allow figure size and font sizes to be passed into plotting functions~
+* ~Example guides for each function in jupyter notebooks~
+* ~MultiClassMetrics class to inherit from ClassMetrics and share common functions~
+* ~REGRESSION~
+
+## Contributing Authors
+* Scott Clay
+* David Sullivan
+
+
+
+
+%package help
+Summary: Development documents and examples for MLLytics
+Provides: python3-MLLytics-doc
+%description help
+[](https://github.com/scottclay/MLLytics/actions/workflows/python-publish.yml)
+
+# MLLytics
+
+## Installation instructions
+```pip install MLLytics```
+or
+```python setup.py install```
+or
+```conda env create -f environment.yml```
+
+## Future
+### Improvements and cleanup
+* Comment all functions and classes
+* Add type hinting to all functions and classes (https://mypy.readthedocs.io/en/latest/cheat_sheet_py3.html)
+* Scoring functions
+* More output stats in overviews
+* Update reliability plot https://machinelearningmastery.com/calibrated-classification-model-in-scikit-learn/
+* Tests
+* Switch from my metrics to sklearn metrics where it makes sense, e.g.
+```fpr, tpr, thresholds = roc_curve(y[test], probas_[:, 1])```
+and more general macro/micro average metrics from: https://scikit-learn.org/stable/modules/generated/sklearn.metrics.recall_score.html#sklearn.metrics.recall_score
+* Additional metrics (sensitivity, specificity, precision, negative predictive value, FPR, FNR, false discovery rate, accuracy, F1 score)
+
+### Cosmetic
+* Fix size of confusion matrix
+* Check works with matplotlib 3
+* Tidy up legends and annotation text on plots
+* Joy plots
+* Brier score for calibration plot
+* Tidy up cross-validation and plots (also repeated cross-validation)
+* Acc-thresholds graph
+
+### Recently completed
+* ~Allow figure size and font sizes to be passed into plotting functions~
+* ~Example guides for each function in jupyter notebooks~
+* ~MultiClassMetrics class to inherit from ClassMetrics and share common functions~
+* ~REGRESSION~
+
+## Contributing Authors
+* Scott Clay
+* David Sullivan
+
+
+
+
+%prep
+%autosetup -n MLLytics-0.2.2
+
+%build
+%py3_build
+
+%install
+%py3_install
+install -d -m755 %{buildroot}/%{_pkgdocdir}
+if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
+if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
+if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
+if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
+pushd %{buildroot}
+if [ -d usr/lib ]; then
+	find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/lib64 ]; then
+	find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/bin ]; then
+	find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/sbin ]; then
+	find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+touch doclist.lst
+if [ -d usr/share/man ]; then
+	find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
+fi
+popd
+mv %{buildroot}/filelist.lst .
+mv %{buildroot}/doclist.lst .
+
+%files -n python3-MLLytics -f filelist.lst
+%dir %{python3_sitelib}/*
+
+%files help -f doclist.lst
+%{_docdir}/*
+
+%changelog
+* Mon May 15 2023 Python_Bot <Python_Bot@openeuler.org> - 0.2.2-1
+- Package Spec generated
@@ -0,0 +1 @@
+a5fb264b2e97dbb7c38967581ce1b8af MLLytics-0.2.2.tar.gz
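The packaged README's TODO about switching from the library's own metrics to scikit-learn ones references `roc_curve`. A minimal sketch of that call, using toy labels and scores invented here for illustration (not data from the package):

```python
import numpy as np
from sklearn.metrics import auc, roc_curve

# Toy binary labels and classifier scores, made up for this example
y_true = np.array([0, 0, 1, 1])
y_score = np.array([0.1, 0.4, 0.35, 0.8])

# Each (fpr, tpr) pair traces one point on the ROC curve;
# thresholds are the score cutoffs at which those points occur.
fpr, tpr, thresholds = roc_curve(y_true, y_score)

# Area under the ROC curve, computed by trapezoidal integration
roc_auc = auc(fpr, tpr)
print(f"AUC = {roc_auc:.2f}")  # AUC = 0.75
```

Since `python3-scikit-learn` is already a runtime requirement of this package, adopting these calls would add no new dependencies.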