%global _empty_manifest_terminate_build 0
Name: python-interpret-community
Version: 0.29.0
Release: 1
Summary: Microsoft Interpret Extensions SDK for Python
License: MIT License
URL: https://github.com/interpretml/interpret-community
Source0: https://mirrors.nju.edu.cn/pypi/web/packages/8a/c6/080734fd96bb9ea46f32e19ae24156c42342bd1d42055f9dda4e9d18877e/interpret_community-0.29.0.tar.gz
BuildArch: noarch
Requires: python3-numpy
Requires: python3-pandas
Requires: python3-scipy
Requires: python3-ml-wrappers
Requires: python3-scikit-learn
Requires: python3-packaging
Requires: python3-interpret-core[required]
Requires: python3-shap
Requires: python3-raiutils
Requires: python3-tensorflow
Requires: python3-pyyaml
Requires: python3-keras
Requires: python3-lime
Requires: python3-lightgbm
Requires: python3-hdbscan
%description
# Microsoft Interpret Community SDK for Python
### This package has been tested with Python 3.7, 3.8 and 3.9
The Interpret Community SDK builds on Interpret, an open-source Python package from Microsoft Research for training interpretable models, and helps to explain black-box systems by adding community-contributed extensions for interpreting ML models.
Interpret-Community is an experimental repository that hosts a wide range of community-developed machine learning interpretability techniques. It makes it easy for anyone involved in the development of a machine learning system to improve transparency around their models. Data scientists, machine learning engineers, and researchers can add their own interpretability techniques via the extension hooks built into the peer repository, Interpret, and expand this repository with their custom-made techniques.
Highlights of the package include:
- The TabularExplainer can be used to give local and global feature importances (see the usage sketch after this list)
- The best explainer is automatically chosen for the user based on the model
- Local feature importances are computed for each evaluation row
- Global feature importances summarize the most important features at the model level
- The API supports both dense (numpy or pandas) and sparse (scipy) datasets
- There are utilities provided to convert engineered explanations, based on preprocessed data before training a model, to raw explanations on the original dataset
- For more advanced users, individual explainers can be used
- The KernelExplainer, GPUKernelExplainer, PFIExplainer and MimicExplainer are for black-box models
- The MimicExplainer is faster but less accurate than the KernelExplainer, and supports various surrogate model types
- The TreeExplainer is for tree-based models
- The LinearExplainer is for linear models
- The DeepExplainer is for DNN tensorflow or pytorch models
- The PFIExplainer can quickly compute global importance values
- LIMEExplainer builds local linear approximations of the model's behavior by perturbing each instance
- The GPUKernelExplainer is a GPU-accelerated implementation of SHAP's KernelExplainer, provided as part of the RAPIDS cuML library. It is optimized for GPU models, such as those in cuML, but can also be used with CPU-based estimators.
- An adapter to convert any feature importance values to an interpret-community style explanation
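For example, the TabularExplainer highlighted above can be driven end to end in a few lines. This is a minimal sketch, assuming scikit-learn and interpret-community are importable; the iris dataset and random forest model are illustrative only and not part of this package:
```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

from interpret_community import TabularExplainer

# Illustrative data and model (any scikit-learn compatible estimator works).
data = load_iris()
x_train, x_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)
model = RandomForestClassifier(random_state=0).fit(x_train, y_train)

# TabularExplainer picks a suitable underlying explainer for the model.
explainer = TabularExplainer(model, x_train,
                             features=data.feature_names,
                             classes=list(data.target_names))

# Global explanation: model-level feature importances.
global_explanation = explainer.explain_global(x_test)
print(global_explanation.get_feature_importance_dict())

# Local explanation: per-row feature importances for the first test sample.
local_explanation = explainer.explain_local(x_test[0:1])
print(local_explanation.local_importance_values)
```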
Please see the github website for the documentation and sample notebooks:
https://github.com/interpretml/interpret-community
Auto-generated sphinx API documentation can be found here:
https://interpret-community.readthedocs.io/en/latest/index.html
More information on the ExplanationDashboard can be found here:
https://github.com/microsoft/responsible-ai-toolbox
%package -n python3-interpret-community
Summary: Microsoft Interpret Extensions SDK for Python
Provides: python-interpret-community
BuildRequires: python3-devel
BuildRequires: python3-setuptools
BuildRequires: python3-pip
%description -n python3-interpret-community
# Microsoft Interpret Community SDK for Python
### This package has been tested with Python 3.7, 3.8 and 3.9
The Interpret Community SDK builds on Interpret, an open-source Python package from Microsoft Research for training interpretable models, and helps to explain black-box systems by adding community-contributed extensions for interpreting ML models.
Interpret-Community is an experimental repository that hosts a wide range of community-developed machine learning interpretability techniques. It makes it easy for anyone involved in the development of a machine learning system to improve transparency around their models. Data scientists, machine learning engineers, and researchers can add their own interpretability techniques via the extension hooks built into the peer repository, Interpret, and expand this repository with their custom-made techniques.
Highlights of the package include:
- The TabularExplainer can be used to give local and global feature importances
- The best explainer is automatically chosen for the user based on the model
- Local feature importances are computed for each evaluation row
- Global feature importances summarize the most important features at the model level
- The API supports both dense (numpy or pandas) and sparse (scipy) datasets
- There are utilities provided to convert engineered explanations, based on preprocessed data before training a model, to raw explanations on the original dataset
- For more advanced users, individual explainers can be used
- The KernelExplainer, GPUKernelExplainer, PFIExplainer and MimicExplainer are for black-box models
- The MimicExplainer is faster but less accurate than the KernelExplainer, and supports various surrogate model types
- The TreeExplainer is for tree-based models
- The LinearExplainer is for linear models
- The DeepExplainer is for DNN tensorflow or pytorch models
- The PFIExplainer can quickly compute global importance values
- LIMEExplainer builds local linear approximations of the model's behavior by perturbing each instance
- The GPUKernelExplainer is a GPU-accelerated implementation of SHAP's KernelExplainer, provided as part of the RAPIDS cuML library. It is optimized for GPU models, such as those in cuML, but can also be used with CPU-based estimators.
- An adapter to convert any feature importance values to an interpret-community style explanation
Please see the github website for the documentation and sample notebooks:
https://github.com/interpretml/interpret-community
Auto-generated sphinx API documentation can be found here:
https://interpret-community.readthedocs.io/en/latest/index.html
More information on the ExplanationDashboard can be found here:
https://github.com/microsoft/responsible-ai-toolbox
%package help
Summary: Development documents and examples for interpret-community
Provides: python3-interpret-community-doc
%description help
# Microsoft Interpret Community SDK for Python
### This package has been tested with Python 3.7, 3.8 and 3.9
The Interpret Community SDK builds on Interpret, an open-source Python package from Microsoft Research for training interpretable models, and helps to explain black-box systems by adding community-contributed extensions for interpreting ML models.
Interpret-Community is an experimental repository that hosts a wide range of community-developed machine learning interpretability techniques. It makes it easy for anyone involved in the development of a machine learning system to improve transparency around their models. Data scientists, machine learning engineers, and researchers can add their own interpretability techniques via the extension hooks built into the peer repository, Interpret, and expand this repository with their custom-made techniques.
Highlights of the package include:
- The TabularExplainer can be used to give local and global feature importances
- The best explainer is automatically chosen for the user based on the model
- Local feature importances are computed for each evaluation row
- Global feature importances summarize the most important features at the model level
- The API supports both dense (numpy or pandas) and sparse (scipy) datasets
- There are utilities provided to convert engineered explanations, based on preprocessed data before training a model, to raw explanations on the original dataset
- For more advanced users, individual explainers can be used
- The KernelExplainer, GPUKernelExplainer, PFIExplainer and MimicExplainer are for black-box models
- The MimicExplainer is faster but less accurate than the KernelExplainer, and supports various surrogate model types
- The TreeExplainer is for tree-based models
- The LinearExplainer is for linear models
- The DeepExplainer is for DNN tensorflow or pytorch models
- The PFIExplainer can quickly compute global importance values
- LIMEExplainer builds local linear approximations of the model's behavior by perturbing each instance
- The GPUKernelExplainer is a GPU-accelerated implementation of SHAP's KernelExplainer, provided as part of the RAPIDS cuML library. It is optimized for GPU models, such as those in cuML, but can also be used with CPU-based estimators.
- An adapter to convert any feature importance values to an interpret-community style explanation
Please see the github website for the documentation and sample notebooks:
https://github.com/interpretml/interpret-community
Auto-generated sphinx API documentation can be found here:
https://interpret-community.readthedocs.io/en/latest/index.html
More information on the ExplanationDashboard can be found here:
https://github.com/microsoft/responsible-ai-toolbox
%prep
%autosetup -n interpret-community-0.29.0
%build
%py3_build
%install
%py3_install
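# Copy any upstream documentation and example directories into the package doc dir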
install -d -m755 %{buildroot}/%{_pkgdocdir}
if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
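# Generate file lists for the main and help packages from the buildroot contents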
pushd %{buildroot}
if [ -d usr/lib ]; then
find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/lib64 ]; then
find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/bin ]; then
find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/sbin ]; then
find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
fi
touch doclist.lst
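# Man pages are compressed during the build, so record them with a .gz suffix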
if [ -d usr/share/man ]; then
find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
fi
popd
mv %{buildroot}/filelist.lst .
mv %{buildroot}/doclist.lst .
%files -n python3-interpret-community -f filelist.lst
%dir %{python3_sitelib}/*
%files help -f doclist.lst
%{_docdir}/*
%changelog
* Sun Apr 23 2023 Python_Bot <Python_Bot@openeuler.org> - 0.29.0-1
- Package Spec generated