%global _empty_manifest_terminate_build 0

Name:           python-MLLytics
Version:        0.2.2
Release:        1
Summary:        A library of tools for easier evaluation of ML models.
License:        MIT
URL:            https://github.com/scottclay/MLLytics
Source0:        https://mirrors.nju.edu.cn/pypi/web/packages/5f/f2/5a26529eb02ab005060644781f7f6b28717cc1f16e5c62cbe3a95bfd0fbc/MLLytics-0.2.2.tar.gz
BuildArch:      noarch

Requires:       python3-numpy
Requires:       python3-matplotlib
Requires:       python3-seaborn
Requires:       python3-pandas
Requires:       python3-scikit-learn

%description
[![Upload Python Package](https://github.com/scottclay/MLLytics/actions/workflows/python-publish.yml/badge.svg)](https://github.com/scottclay/MLLytics/actions/workflows/python-publish.yml)

# MLLytics

## Installation instructions
```pip install MLLytics```
or
```python setup.py install```
or
```conda env create -f environment.yml```

## Future

### Improvements and cleanup
* Comment all functions and classes
* Add type hinting to all functions and classes (https://mypy.readthedocs.io/en/latest/cheat_sheet_py3.html)
* Scoring functions
* More output stats in overviews
* Update the reliability plot (see https://machinelearningmastery.com/calibrated-classification-model-in-scikit-learn/)
* Tests
* Switch from my metrics to sklearn metrics where it makes sense, e.g. ```fpr, tpr, thresholds = roc_curve(y[test], probas_[:, 1])``` and the more general macro/micro-averaged metrics from https://scikit-learn.org/stable/modules/generated/sklearn.metrics.recall_score.html#sklearn.metrics.recall_score
* Additional metrics (sensitivity, specificity, precision, negative predictive value, FPR, FNR, false discovery rate, accuracy, F1 score)

### Cosmetic
* Fix size of confusion matrix
* Check it works with matplotlib 3
* Tidy up legends and annotation text on plots
* Joy plots
* Brier score for calibration plot
* Tidy up cross validation and plots (also repeated cross-validation)
* Acc-thresholds graph

### Recently completed
* ~~Allow figure size and font sizes to be passed into plotting functions~~
* ~~Example guides for each function in jupyter notebooks~~
* ~~MultiClassMetrics class to inherit from ClassMetrics and share common functions~~
* ~~REGRESSION~~

## Contributing Authors
* Scott Clay
* David Sullivan

%package -n python3-MLLytics
Summary:        A library of tools for easier evaluation of ML models.
Provides:       python-MLLytics
BuildRequires:  python3-devel
BuildRequires:  python3-setuptools
BuildRequires:  python3-pip

%description -n python3-MLLytics
[![Upload Python Package](https://github.com/scottclay/MLLytics/actions/workflows/python-publish.yml/badge.svg)](https://github.com/scottclay/MLLytics/actions/workflows/python-publish.yml)

# MLLytics

## Installation instructions
```pip install MLLytics```
or
```python setup.py install```
or
```conda env create -f environment.yml```

## Future

### Improvements and cleanup
* Comment all functions and classes
* Add type hinting to all functions and classes (https://mypy.readthedocs.io/en/latest/cheat_sheet_py3.html)
* Scoring functions
* More output stats in overviews
* Update the reliability plot (see https://machinelearningmastery.com/calibrated-classification-model-in-scikit-learn/)
* Tests
* Switch from my metrics to sklearn metrics where it makes sense, e.g. ```fpr, tpr, thresholds = roc_curve(y[test], probas_[:, 1])``` and the more general macro/micro-averaged metrics from https://scikit-learn.org/stable/modules/generated/sklearn.metrics.recall_score.html#sklearn.metrics.recall_score (see the sketch at the end of this description)
* Additional metrics (sensitivity, specificity, precision, negative predictive value, FPR, FNR, false discovery rate, accuracy, F1 score)

### Cosmetic
* Fix size of confusion matrix
* Check it works with matplotlib 3
* Tidy up legends and annotation text on plots
* Joy plots
* Brier score for calibration plot
* Tidy up cross validation and plots (also repeated cross-validation)
* Acc-thresholds graph

### Recently completed
* ~~Allow figure size and font sizes to be passed into plotting functions~~
* ~~Example guides for each function in jupyter notebooks~~
* ~~MultiClassMetrics class to inherit from ClassMetrics and share common functions~~
* ~~REGRESSION~~

## Contributing Authors
* Scott Clay
* David Sullivan
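
As a rough illustration of the "switch to sklearn metrics" item above, the following minimal sketch (plain scikit-learn, not the MLLytics API; the toy data and classifier are assumptions made purely for demonstration) shows ```roc_curve``` together with macro/micro-averaged ```recall_score```:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import auc, recall_score, roc_curve
from sklearn.model_selection import train_test_split

# Toy data and classifier purely for illustration; any estimator that
# implements predict_proba could be used instead.
X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# ROC points taken straight from sklearn, as in the TODO item above.
probas_ = clf.predict_proba(X_test)
fpr, tpr, thresholds = roc_curve(y_test, probas_[:, 1])
print("AUC:", auc(fpr, tpr))

# Macro/micro-averaged recall via the documented `average` argument.
y_pred = clf.predict(X_test)
print("macro recall:", recall_score(y_test, y_pred, average="macro"))
print("micro recall:", recall_score(y_test, y_pred, average="micro"))
```

The same ```average``` pattern applies to ```precision_score``` and ```f1_score```, which would cover several of the "additional metrics" listed above.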
%package help
Summary:        Development documents and examples for MLLytics
Provides:       python3-MLLytics-doc

%description help
[![Upload Python Package](https://github.com/scottclay/MLLytics/actions/workflows/python-publish.yml/badge.svg)](https://github.com/scottclay/MLLytics/actions/workflows/python-publish.yml)

# MLLytics

## Installation instructions
```pip install MLLytics```
or
```python setup.py install```
or
```conda env create -f environment.yml```

## Future

### Improvements and cleanup
* Comment all functions and classes
* Add type hinting to all functions and classes (https://mypy.readthedocs.io/en/latest/cheat_sheet_py3.html)
* Scoring functions
* More output stats in overviews
* Update the reliability plot (see https://machinelearningmastery.com/calibrated-classification-model-in-scikit-learn/)
* Tests
* Switch from my metrics to sklearn metrics where it makes sense, e.g. ```fpr, tpr, thresholds = roc_curve(y[test], probas_[:, 1])``` and the more general macro/micro-averaged metrics from https://scikit-learn.org/stable/modules/generated/sklearn.metrics.recall_score.html#sklearn.metrics.recall_score
* Additional metrics (sensitivity, specificity, precision, negative predictive value, FPR, FNR, false discovery rate, accuracy, F1 score)

### Cosmetic
* Fix size of confusion matrix
* Check it works with matplotlib 3
* Tidy up legends and annotation text on plots
* Joy plots
* Brier score for calibration plot (see the calibration sketch at the end of this description)
* Tidy up cross validation and plots (also repeated cross-validation)
* Acc-thresholds graph

### Recently completed
* ~~Allow figure size and font sizes to be passed into plotting functions~~
* ~~Example guides for each function in jupyter notebooks~~
* ~~MultiClassMetrics class to inherit from ClassMetrics and share common functions~~
* ~~REGRESSION~~

## Contributing Authors
* Scott Clay
* David Sullivan
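
For the calibration-related items above ("Update the reliability plot", "Brier score for calibration plot"), here is a minimal sketch using only scikit-learn's ```calibration_curve``` and ```brier_score_loss``` (again an illustration with assumed toy data, not MLLytics code):

```python
from sklearn.calibration import calibration_curve
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss
from sklearn.model_selection import train_test_split

# Toy setup purely for illustration.
X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
probs = LogisticRegression(max_iter=1000).fit(X_train, y_train).predict_proba(X_test)[:, 1]

# Brier score: mean squared difference between predicted probabilities and
# observed outcomes (lower is better); a natural annotation for a reliability plot.
print("Brier score:", brier_score_loss(y_test, probs))

# Points for the reliability diagram: fraction of positives per probability bin
# versus the mean predicted probability in that bin.
prob_true, prob_pred = calibration_curve(y_test, probs, n_bins=10)
print(list(zip(prob_pred, prob_true)))
```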
%prep
%autosetup -n MLLytics-0.2.2

%build
%py3_build

%install
%py3_install
# Copy any doc/example directories shipped in the source tree into the package doc dir.
install -d -m755 %{buildroot}/%{_pkgdocdir}
if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi

# Record every installed file so the generated lists can feed the file sections below.
pushd %{buildroot}
if [ -d usr/lib ]; then
    find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/lib64 ]; then
    find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/bin ]; then
    find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/sbin ]; then
    find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
fi
touch doclist.lst
if [ -d usr/share/man ]; then
    find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
fi
popd
mv %{buildroot}/filelist.lst .
mv %{buildroot}/doclist.lst .

%files -n python3-MLLytics -f filelist.lst
%dir %{python3_sitelib}/*

%files help -f doclist.lst
%{_docdir}/*

%changelog
* Mon May 15 2023 Python_Bot - 0.2.2-1
- Package Spec generated