%global _empty_manifest_terminate_build 0
Name:		python-MLLytics
Version:	0.2.2
Release:	1
Summary:	A library of tools for easier evaluation of ML models
License:	MIT
URL:		https://github.com/scottclay/MLLytics
Source0:	https://mirrors.nju.edu.cn/pypi/web/packages/5f/f2/5a26529eb02ab005060644781f7f6b28717cc1f16e5c62cbe3a95bfd0fbc/MLLytics-0.2.2.tar.gz
BuildArch:	noarch

Requires:	python3-numpy
Requires:	python3-matplotlib
Requires:	python3-seaborn
Requires:	python3-pandas
Requires:	python3-scikit-learn

%description
[![Upload Python Package](https://github.com/scottclay/MLLytics/actions/workflows/python-publish.yml/badge.svg)](https://github.com/scottclay/MLLytics/actions/workflows/python-publish.yml)

# MLLytics

## Installation instructions
```pip install MLLytics```
or
```python setup.py install```
or
```conda env create -f environment.yml```

## Future
### Improvements and cleanup
* Comment all functions and classes
* Add type hinting to all functions and classes (https://mypy.readthedocs.io/en/latest/cheat_sheet_py3.html)
* Scoring functions
* More output stats in overviews
* Update reliability plot (https://machinelearningmastery.com/calibrated-classification-model-in-scikit-learn/)
* Tests
* Switch from my metrics to sklearn metrics where it makes sense, e.g.
```fpr, tpr, thresholds = roc_curve(y[test], probas_[:, 1])```
and the more general macro/micro-averaged metrics from https://scikit-learn.org/stable/modules/generated/sklearn.metrics.recall_score.html#sklearn.metrics.recall_score (see the sketch after this list)
* Additional metrics (sensitivity, specificity, precision, negative predictive value, FPR, FNR,
false discovery rate, accuracy, F1 score)
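
A minimal sketch of the sklearn replacements named in the list above (toy arrays for illustration only; none of this is MLLytics API):

```python
# Toy data: three classes, hand-written predictions (illustrative only).
import numpy as np
from sklearn.metrics import roc_curve, recall_score

y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 0, 1, 1, 2, 1])
y_prob = np.array([0.1, 0.4, 0.35, 0.8, 0.7, 0.9])  # scores for class 1, one-vs-rest

# ROC points for class 1 treated one-vs-rest, as in the snippet above.
fpr, tpr, thresholds = roc_curve(y_true == 1, y_prob)

# Macro recall averages the per-class recalls equally;
# micro recall pools all decisions before computing TP / (TP + FN).
print(recall_score(y_true, y_pred, average="macro"))
print(recall_score(y_true, y_pred, average="micro"))
```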

### Cosmetic
* Fix size of confusion matrix
* Check works with matplotlib 3
* Tidy up legends and annotation text on plots
* Joy plots
* Brier score for calibration plot (see the sketch after this list)
* Tidy up cross validation and plots (also repeated cross-validation)
* Acc-thresholds graph
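
For the Brier score item above, sklearn already ships `brier_score_loss`; a small sketch with made-up values:

```python
# Brier score: mean squared gap between the predicted probability and the
# actual 0/1 outcome; lower means better-calibrated probabilities.
from sklearn.metrics import brier_score_loss

y_true = [0, 1, 1, 0]
y_prob = [0.1, 0.9, 0.8, 0.3]
print(brier_score_loss(y_true, y_prob))  # 0.0375
```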

### Recently completed
* ~~Allow figure size and font sizes to be passed into plotting functions~~
* ~~Example guides for each function in jupyter notebooks~~
* ~~MultiClassMetrics class to inherit from ClassMetrics and share common functions~~
* ~~REGRESSION~~

## Contributing Authors
* Scott Clay
* David Sullivan




%package -n python3-MLLytics
Summary:	A library of tools for easier evaluation of ML models
Provides:	python-MLLytics
BuildRequires:	python3-devel
BuildRequires:	python3-setuptools
BuildRequires:	python3-pip
%description -n python3-MLLytics
[![Upload Python Package](https://github.com/scottclay/MLLytics/actions/workflows/python-publish.yml/badge.svg)](https://github.com/scottclay/MLLytics/actions/workflows/python-publish.yml)

# MLLytics

## Installation instructions
```pip install MLLytics```
or
```python setup.py install```
or
```conda env create -f environment.yml```

## Future
### Improvements and cleanup
* Comment all functions and classes
* Add type hinting to all functions and classes (https://mypy.readthedocs.io/en/latest/cheat_sheet_py3.html)
* Scoring functions
* More output stats in overviews
* Update reliability plot (https://machinelearningmastery.com/calibrated-classification-model-in-scikit-learn/)
* Tests
* Switch from my metrics to sklearn metrics where it makes sense, e.g.
```fpr, tpr, thresholds = roc_curve(y[test], probas_[:, 1])```
and the more general macro/micro-averaged metrics from https://scikit-learn.org/stable/modules/generated/sklearn.metrics.recall_score.html#sklearn.metrics.recall_score
* Additional metrics (sensitivity, specificity, precision, negative predictive value, FPR, FNR,
false discovery rate, accuracy, F1 score)

### Cosmetic
* Fix size of confusion matrix
* Check works with matplotlib 3
* Tidy up legends and annotation text on plots
* Joy plots
* Brier score for calibration plot
* Tidy up cross validation and plots (also repeated cross-validation)
* Acc-thresholds graph

### Recently completed
* ~~Allow figure size and font sizes to be passed into plotting functions~~
* ~~Example guides for each function in jupyter notebooks~~
* ~~MultiClassMetrics class to inherit from ClassMetrics and share common functions~~
* ~~REGRESSION~~

## Contributing Authors
* Scott Clay
* David Sullivan




%package help
Summary:	Development documents and examples for MLLytics
Provides:	python3-MLLytics-doc
%description help
[![Upload Python Package](https://github.com/scottclay/MLLytics/actions/workflows/python-publish.yml/badge.svg)](https://github.com/scottclay/MLLytics/actions/workflows/python-publish.yml)

# MLLytics

## Installation instructions
```pip install MLLytics```
or
```python setup.py install```
or
```conda env create -f environment.yml```

## Future
### Improvements and cleanup
* Comment all functions and classes
* Add type hinting to all functions and classes (https://mypy.readthedocs.io/en/latest/cheat_sheet_py3.html)
* Scoring functions
* More output stats in overviews
* Update reliability plot (https://machinelearningmastery.com/calibrated-classification-model-in-scikit-learn/)
* Tests
* Switch from my metrics to sklearn metrics where it makes sense, e.g.
```fpr, tpr, thresholds = roc_curve(y[test], probas_[:, 1])```
and the more general macro/micro-averaged metrics from https://scikit-learn.org/stable/modules/generated/sklearn.metrics.recall_score.html#sklearn.metrics.recall_score
* Additional metrics (sensitivity, specificity, precision, negative predictive value, FPR, FNR,
false discovery rate, accuracy, F1 score)

### Cosmetic
* Fix size of confusion matrix
* Check works with matplotlib 3
* Tidy up legends and annotation text on plots
* Joy plots
* Brier score for calibration plot
* Tidy up cross validation and plots (also repeated cross-validation)
* Acc-thresholds graph

### Recently completed
* ~~Allow figure size and font sizes to be passed into plotting functions~~
* ~~Example guides for each function in jupyter notebooks~~
* ~~MultiClassMetrics class to inherit from ClassMetrics and share common functions~~
* ~~REGRESSION~~

## Contributing Authors
* Scott Clay
* David Sullivan




%prep
%autosetup -n MLLytics-0.2.2

%build
%py3_build

%install
%py3_install
# Ship any doc/example directories from the sdist as package docs
install -d -m755 %{buildroot}/%{_pkgdocdir}
if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
# Record every installed file into list files consumed by the files sections below
pushd %{buildroot}
if [ -d usr/lib ]; then
	find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/lib64 ]; then
	find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/bin ]; then
	find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/sbin ]; then
	find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
fi
touch doclist.lst
# Man pages get gzip-compressed after install, hence the .gz suffix
if [ -d usr/share/man ]; then
	find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
fi
popd
mv %{buildroot}/filelist.lst .
mv %{buildroot}/doclist.lst .

%files -n python3-MLLytics -f filelist.lst
%dir %{python3_sitelib}/*

%files help -f doclist.lst
%{_docdir}/*

%changelog
* Mon May 15 2023 Python_Bot <Python_Bot@openeuler.org> - 0.2.2-1
- Package Spec generated