%global _empty_manifest_terminate_build 0
Name: python-text-explainability
Version: 0.7.0
Release: 1
Summary: Generic explainability architecture for text machine learning models
License: GNU LGPL v3
URL: https://text-explainability.readthedocs.io/
Source0: https://mirrors.nju.edu.cn/pypi/web/packages/a7/ec/731e27ff33b8ab47144b59300ce151a496151f4bb6c98ac55286b3ef24fc/text_explainability-0.7.0.tar.gz
BuildArch: noarch
Requires: python3-instancelib
Requires: python3-genbase
Requires: python3-scikit-learn
Requires: python3-plotly
Requires: python3-sentence-transformers
Requires: python3-scikit-learn-extra
Requires: python3-imodels
Requires: python3-genbase-test-helpers
Requires: python3-fastcountvectorizer
%description
`text_explainability` provides a **generic architecture** from which well-known state-of-the-art explainability approaches for text can be composed. This modular architecture allows components to be swapped out and combined, to **quickly develop new types of explainability approaches** for (natural language) text, or to **improve a plethora of approaches by improving a single module**.
Several example methods are included, which provide **local explanations** (_explaining the prediction of a single instance_, e.g. `LIME` and `SHAP`) or **global explanations** (_explaining the dataset, or model behavior on the dataset_, e.g. `TokenFrequency` and `MMDCritic`). By replacing the default modules (e.g. local data generation, global data sampling or improved embedding methods), these methods can be improved upon or new methods can be introduced.
© Marcel Robeer, 2021
## Quick tour
**Local explanation**: explain a model's prediction on a given sample, either self-provided or taken from a dataset.
```python
from text_explainability import LIME, LocalTree
# Define sample to explain
sample = 'Explain why this is positive and not negative!'
# LIME explanation (local feature importance)
LIME().explain(sample, model).scores
# List of local rules, extracted from tree
LocalTree().explain(sample, model).rules
```
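In the snippet above, `model` is a trained text classifier that the explainers query; it is not defined in the Quick tour. A minimal sketch of what could stand in for it, assuming a plain scikit-learn text pipeline (the exact model wrapper expected by `text_explainability`, which builds on `instancelib`, is an assumption here and should be taken from the package documentation):
```python
# Minimal sketch of a classifier to stand in for `model` above.
# NOTE: this plain scikit-learn pipeline is an illustrative assumption;
# text_explainability builds on instancelib, so check the package docs
# for the exact model wrapper to pass to the explainers.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = ['What a great movie!', 'Utterly disappointing.']
train_labels = ['positive', 'negative']

pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(train_texts, train_labels)

model = pipeline  # placeholder; wrap as required by text_explainability
```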
**Global explanation**: explain the whole dataset (e.g. train set, test set), and what they look like for the ground-truth or predicted labels.
```python
from text_explainability import import_data, TokenFrequency, MMDCritic
# Import dataset
env = import_data('./datasets/test.csv', data_cols=['fulltext'], label_cols=['label'])
# Top-k most frequent tokens per label
TokenFrequency(env.dataset).explain(labelprovider=env.labels, explain_model=False, k=3)
# 2 prototypes and 1 criticisms for the dataset
MMDCritic(env.dataset)(n_prototypes=2, n_criticisms=1)
```
## Installation
See the [installation](docs/INSTALLATION.md) instructions for an extended installation guide.
| Method | Instructions |
|--------|--------------|
| `pip` | Install from [PyPI](https://pypi.org/project/text-explainability/) via `pip3 install text_explainability`. To speed up the explanation generation process use `pip3 install text_explainability[fast]`. |
| Local | Clone this repository and install via `pip3 install -e .` or locally run `python3 setup.py install`. |
## Documentation
Full documentation of the latest version is provided at [https://text-explainability.readthedocs.io/](https://text-explainability.readthedocs.io/).
## Example usage
See [example usage](example_usage.md) for an example of how the package can be used, or run the lines in `example_usage.py` to explore it interactively.
## Explanation methods included
`text_explainability` includes methods for model-agnostic _local explanation_ and _global explanation_. Each of these methods can be fully customized to fit the explainees' needs.
| Type | Explanation method | Description | Paper/link |
|------|--------------------|-------------|-------|
| *Local explanation* | `LIME` | Calculate feature attribution with _Local Interpretable Model-Agnostic Explanations_ (LIME). | [[Ribeiro2016](https://paperswithcode.com/method/lime)], [interpretable-ml/lime](https://christophm.github.io/interpretable-ml-book/lime.html) |
| | `KernelSHAP` | Calculate feature attribution with _Shapley Additive Explanations_ (SHAP). | [[Lundberg2017](https://paperswithcode.com/paper/a-unified-approach-to-interpreting-model)], [interpretable-ml/shap](https://christophm.github.io/interpretable-ml-book/shap.html) |
| | `LocalTree` | Fit a local decision tree around a single decision. | [[Guidotti2018](https://paperswithcode.com/paper/local-rule-based-explanations-of-black-box)] |
| | `LocalRules` | Fit a local sparse set of label-specific rules using `SkopeRules`. | [github/skope-rules](https://github.com/scikit-learn-contrib/skope-rules) |
| | `FoilTree` | Fit a local contrastive/counterfactual decision tree around a single decision. | [[Robeer2018](https://github.com/MarcelRobeer/ContrastiveExplanation)] |
| | `BayLIME` | Bayesian extension of LIME to include prior knowledge and obtain more consistent explanations. | [[Zhao2021](https://paperswithcode.com/paper/baylime-bayesian-local-interpretable-model)] |
| *Global explanation* | `TokenFrequency` | Show the top-_k_ most frequent tokens for each ground-truth or predicted label. | |
| | `TokenInformation` | Show the top-_k_ token mutual information for a dataset or model. | [wikipedia/mutual_information](https://en.wikipedia.org/wiki/Mutual_information) |
| | `KMedoids` | Embed instances and find top-_n_ prototypes (can also be performed for each label using `LabelwiseKMedoids`). | [interpretable-ml/prototypes](https://christophm.github.io/interpretable-ml-book/proto.html) |
| | `MMDCritic` | Embed instances and find top-_n_ prototypes and top-_n_ criticisms (can also be performed for each label using `LabelwiseMMDCritic`). | [[Kim2016](https://papers.nips.cc/paper/2016/hash/5680522b8e2bb01943234bce7bf84534-Abstract.html)], [interpretable-ml/prototypes](https://christophm.github.io/interpretable-ml-book/proto.html) |
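The local methods above are intended to be interchangeable. A hedged sketch, assuming `KernelSHAP` and `FoilTree` follow the same `.explain(sample, model)` interface shown for `LIME` in the Quick tour (with `model` being the wrapped classifier from that section):
```python
# Hedged sketch: assumes the explainers below share the LIME interface from
# the Quick tour; `model` is the wrapped classifier defined there.
from text_explainability import KernelSHAP, FoilTree

sample = 'Explain why this is positive and not negative!'

# Feature attribution via Shapley values (same pattern as LIME)
KernelSHAP().explain(sample, model).scores

# Contrastive explanation: why this prediction rather than an alternative
FoilTree().explain(sample, model)
```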
## Releases
`text_explainability` is officially released through [PyPI](https://pypi.org/project/text-explainability/).
See [CHANGELOG.md](CHANGELOG.md) for a full overview of the changes for each version.
## Extensions
<a href="https://marcelrobeer.github.io/text_sensitivity/" target="_blank"><img src="https://git.science.uu.nl/m.j.robeer/text_sensitivity/-/raw/main/img/TextLogo-Logo_large_sensitivity.png" alt="T_xt sensitivity logo" width="200px"></a><p>`text_explainability` can be extended to also perform _sensitivity testing_, checking for machine learning model robustness and fairness. The `text_sensitivity` package is available through [PyPI](https://pypi.org/project/text-sensitivity/) and fully documented at [https://text-sensitivity.rtfd.io/](https://text-sensitivity.rtfd.io/).</p>
## Citation
```bibtex
@misc{text_explainability,
  title = {Python package text\_explainability},
  author = {Marcel Robeer},
  howpublished = {\url{https://git.science.uu.nl/m.j.robeer/text_explainability}},
  year = {2021}
}
```
## Maintenance
### Contributors
- [Marcel Robeer](https://www.uu.nl/staff/MJRobeer) (`@m.j.robeer`)
- [Michiel Bron](https://www.uu.nl/staff/MPBron) (`@mpbron-phd`)
### Todo
Tasks yet to be done:
* Implement local post-hoc explanations:
  - Implement Anchors
* Implement global post-hoc explanations:
  - Representative subset
* Add support for regression models
* More complex data augmentation
  - Top-k replacement (e.g. according to LM / WordNet)
  - Tokens to exclude from being changed
  - Bag-of-words style replacements
* Add rule-based return type
* Write more tests
## Credits
- Florian Gardin, Ronan Gautier, Nicolas Goix, Bibi Ndiaye and Jean-Matthieu Schertzer. _[Skope-rules](https://github.com/scikit-learn-contrib/skope-rules)_. 2020.
- Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Dino Pedreschi, Franco Turini and Fosca Giannotti. _[Local Rule-Based Explanations of Black Box Decision Systems](https://paperswithcode.com/paper/local-rule-based-explanations-of-black-box)_. 2018.
- Been Kim, Rajiv Khanna and Oluwasanmi O. Koyejo. [Examples are not Enough, Learn to Criticize! Criticism for Interpretability](https://papers.nips.cc/paper/2016/hash/5680522b8e2bb01943234bce7bf84534-Abstract.html). _Advances in Neural Information Processing Systems (NIPS 2016)_. 2016.
- Scott Lundberg and Su-In Lee. [A Unified Approach to Interpreting Model Predictions](https://paperswithcode.com/paper/a-unified-approach-to-interpreting-model). _31st Conference on Neural Information Processing Systems (NIPS 2017)_. 2017.
- Christoph Molnar. _[Interpretable Machine Learning: A Guide for Making Black Box Models Explainable](https://christophm.github.io/interpretable-ml-book/)_. 2021.
- Marco Tulio Ribeiro, Sameer Singh and Carlos Guestrin. ["Why Should I Trust You?": Explaining the Predictions of Any Classifier](https://paperswithcode.com/method/lime). _Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2016)_. 2016.
- Marco Tulio Ribeiro, Sameer Singh and Carlos Guestrin. [Anchors: High-Precision Model-Agnostic Explanations](https://github.com/marcotcr/anchor). _AAAI Conference on Artificial Intelligence (AAAI)_. 2018.
- Jasper van der Waa, Marcel Robeer, Jurriaan van Diggelen, Matthieu Brinkhuis and Mark Neerincx. ["Contrastive Explanations with Local Foil Trees"](https://github.com/MarcelRobeer/ContrastiveExplanation). _2018 Workshop on Human Interpretability in Machine Learning (WHI 2018)_. 2018.
%package -n python3-text-explainability
Summary: Generic explainability architecture for text machine learning models
Provides: python-text-explainability
BuildRequires: python3-devel
BuildRequires: python3-setuptools
BuildRequires: python3-pip
%description -n python3-text-explainability
`text_explainability` provides a **generic architecture** from which well-known state-of-the-art explainability approaches for text can be composed. This modular architecture allows components to be swapped out and combined, to **quickly develop new types of explainability approaches** for (natural language) text, or to **improve a plethora of approaches by improving a single module**.
Several example methods are included, which provide **local explanations** (_explaining the prediction of a single instance_, e.g. `LIME` and `SHAP`) or **global explanations** (_explaining the dataset, or model behavior on the dataset_, e.g. `TokenFrequency` and `MMDCritic`). By replacing the default modules (e.g. local data generation, global data sampling or improved embedding methods), these methods can be improved upon or new methods can be introduced.
© Marcel Robeer, 2021
## Quick tour
**Local explanation**: explain a model's prediction on a given sample, either self-provided or taken from a dataset.
```python
from text_explainability import LIME, LocalTree
# Define sample to explain
sample = 'Explain why this is positive and not negative!'
# LIME explanation (local feature importance)
LIME().explain(sample, model).scores
# List of local rules, extracted from tree
LocalTree().explain(sample, model).rules
```
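In the snippet above, `model` is a trained text classifier that the explainers query; it is not defined in the Quick tour. A minimal sketch of what could stand in for it, assuming a plain scikit-learn text pipeline (the exact model wrapper expected by `text_explainability`, which builds on `instancelib`, is an assumption here and should be taken from the package documentation):
```python
# Minimal sketch of a classifier to stand in for `model` above.
# NOTE: this plain scikit-learn pipeline is an illustrative assumption;
# text_explainability builds on instancelib, so check the package docs
# for the exact model wrapper to pass to the explainers.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = ['What a great movie!', 'Utterly disappointing.']
train_labels = ['positive', 'negative']

pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(train_texts, train_labels)

model = pipeline  # placeholder; wrap as required by text_explainability
```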
**Global explanation**: explain the whole dataset (e.g. train set, test set), and what they look like for the ground-truth or predicted labels.
```python
from text_explainability import import_data, TokenFrequency, MMDCritic
# Import dataset
env = import_data('./datasets/test.csv', data_cols=['fulltext'], label_cols=['label'])
# Top-k most frequent tokens per label
TokenFrequency(env.dataset).explain(labelprovider=env.labels, explain_model=False, k=3)
# 2 prototypes and 1 criticisms for the dataset
MMDCritic(env.dataset)(n_prototypes=2, n_criticisms=1)
```
## Installation
See the [installation](docs/INSTALLATION.md) instructions for an extended installation guide.
| Method | Instructions |
|--------|--------------|
| `pip` | Install from [PyPI](https://pypi.org/project/text-explainability/) via `pip3 install text_explainability`. To speed up the explanation generation process use `pip3 install text_explainability[fast]`. |
| Local | Clone this repository and install via `pip3 install -e .` or locally run `python3 setup.py install`. |
## Documentation
Full documentation of the latest version is provided at [https://text-explainability.readthedocs.io/](https://text-explainability.readthedocs.io/).
## Example usage
See [example usage](example_usage.md) for an example of how the package can be used, or run the lines in `example_usage.py` to explore it interactively.
## Explanation methods included
`text_explainability` includes methods for model-agnostic _local explanation_ and _global explanation_. Each of these methods can be fully customized to fit the explainees' needs.
| Type | Explanation method | Description | Paper/link |
|------|--------------------|-------------|-------|
| *Local explanation* | `LIME` | Calculate feature attribution with _Local Interpretable Model-Agnostic Explanations_ (LIME). | [[Ribeiro2016](https://paperswithcode.com/method/lime)], [interpretable-ml/lime](https://christophm.github.io/interpretable-ml-book/lime.html) |
| | `KernelSHAP` | Calculate feature attribution with _Shapley Additive Explanations_ (SHAP). | [[Lundberg2017](https://paperswithcode.com/paper/a-unified-approach-to-interpreting-model)], [interpretable-ml/shap](https://christophm.github.io/interpretable-ml-book/shap.html) |
| | `LocalTree` | Fit a local decision tree around a single decision. | [[Guidotti2018](https://paperswithcode.com/paper/local-rule-based-explanations-of-black-box)] |
| | `LocalRules` | Fit a local sparse set of label-specific rules using `SkopeRules`. | [github/skope-rules](https://github.com/scikit-learn-contrib/skope-rules) |
| | `FoilTree` | Fit a local contrastive/counterfactual decision tree around a single decision. | [[Robeer2018](https://github.com/MarcelRobeer/ContrastiveExplanation)] |
| | `BayLIME` | Bayesian extension of LIME to include prior knowledge and obtain more consistent explanations. | [[Zhao2021](https://paperswithcode.com/paper/baylime-bayesian-local-interpretable-model)] |
| *Global explanation* | `TokenFrequency` | Show the top-_k_ most frequent tokens for each ground-truth or predicted label. | |
| | `TokenInformation` | Show the top-_k_ token mutual information for a dataset or model. | [wikipedia/mutual_information](https://en.wikipedia.org/wiki/Mutual_information) |
| | `KMedoids` | Embed instances and find top-_n_ prototypes (can also be performed for each label using `LabelwiseKMedoids`). | [interpretable-ml/prototypes](https://christophm.github.io/interpretable-ml-book/proto.html) |
| | `MMDCritic` | Embed instances and find top-_n_ prototypes and top-_n_ criticisms (can also be performed for each label using `LabelwiseMMDCritic`). | [[Kim2016](https://papers.nips.cc/paper/2016/hash/5680522b8e2bb01943234bce7bf84534-Abstract.html)], [interpretable-ml/prototypes](https://christophm.github.io/interpretable-ml-book/proto.html) |
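The local methods above are intended to be interchangeable. A hedged sketch, assuming `KernelSHAP` and `FoilTree` follow the same `.explain(sample, model)` interface shown for `LIME` in the Quick tour (with `model` being the wrapped classifier from that section):
```python
# Hedged sketch: assumes the explainers below share the LIME interface from
# the Quick tour; `model` is the wrapped classifier defined there.
from text_explainability import KernelSHAP, FoilTree

sample = 'Explain why this is positive and not negative!'

# Feature attribution via Shapley values (same pattern as LIME)
KernelSHAP().explain(sample, model).scores

# Contrastive explanation: why this prediction rather than an alternative
FoilTree().explain(sample, model)
```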
## Releases
`text_explainability` is officially released through [PyPI](https://pypi.org/project/text-explainability/).
See [CHANGELOG.md](CHANGELOG.md) for a full overview of the changes for each version.
## Extensions
<a href="https://marcelrobeer.github.io/text_sensitivity/" target="_blank"><img src="https://git.science.uu.nl/m.j.robeer/text_sensitivity/-/raw/main/img/TextLogo-Logo_large_sensitivity.png" alt="T_xt sensitivity logo" width="200px"></a><p>`text_explainability` can be extended to also perform _sensitivity testing_, checking for machine learning model robustness and fairness. The `text_sensitivity` package is available through [PyPI](https://pypi.org/project/text-sensitivity/) and fully documented at [https://text-sensitivity.rtfd.io/](https://text-sensitivity.rtfd.io/).</p>
## Citation
```bibtex
@misc{text_explainability,
  title = {Python package text\_explainability},
  author = {Marcel Robeer},
  howpublished = {\url{https://git.science.uu.nl/m.j.robeer/text_explainability}},
  year = {2021}
}
```
## Maintenance
### Contributors
- [Marcel Robeer](https://www.uu.nl/staff/MJRobeer) (`@m.j.robeer`)
- [Michiel Bron](https://www.uu.nl/staff/MPBron) (`@mpbron-phd`)
### Todo
Tasks yet to be done:
* Implement local post-hoc explanations:
  - Implement Anchors
* Implement global post-hoc explanations:
  - Representative subset
* Add support for regression models
* More complex data augmentation
  - Top-k replacement (e.g. according to LM / WordNet)
  - Tokens to exclude from being changed
  - Bag-of-words style replacements
* Add rule-based return type
* Write more tests
## Credits
- Florian Gardin, Ronan Gautier, Nicolas Goix, Bibi Ndiaye and Jean-Matthieu Schertzer. _[Skope-rules](https://github.com/scikit-learn-contrib/skope-rules)_. 2020.
- Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Dino Pedreschi, Franco Turini and Fosca Giannotti. _[Local Rule-Based Explanations of Black Box Decision Systems](https://paperswithcode.com/paper/local-rule-based-explanations-of-black-box)_. 2018.
- Been Kim, Rajiv Khanna and Oluwasanmi O. Koyejo. [Examples are not Enough, Learn to Criticize! Criticism for Interpretability](https://papers.nips.cc/paper/2016/hash/5680522b8e2bb01943234bce7bf84534-Abstract.html). _Advances in Neural Information Processing Systems (NIPS 2016)_. 2016.
- Scott Lundberg and Su-In Lee. [A Unified Approach to Interpreting Model Predictions](https://paperswithcode.com/paper/a-unified-approach-to-interpreting-model). _31st Conference on Neural Information Processing Systems (NIPS 2017)_. 2017.
- Christoph Molnar. _[Interpretable Machine Learning: A Guide for Making Black Box Models Explainable](https://christophm.github.io/interpretable-ml-book/)_. 2021.
- Marco Tulio Ribeiro, Sameer Singh and Carlos Guestrin. ["Why Should I Trust You?": Explaining the Predictions of Any Classifier](https://paperswithcode.com/method/lime). _Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2016)_. 2016.
- Marco Tulio Ribeiro, Sameer Singh and Carlos Guestrin. [Anchors: High-Precision Model-Agnostic Explanations](https://github.com/marcotcr/anchor). _AAAI Conference on Artificial Intelligence (AAAI)_. 2018.
- Jasper van der Waa, Marcel Robeer, Jurriaan van Diggelen, Matthieu Brinkhuis and Mark Neerincx. ["Contrastive Explanations with Local Foil Trees"](https://github.com/MarcelRobeer/ContrastiveExplanation). _2018 Workshop on Human Interpretability in Machine Learning (WHI 2018)_. 2018.
%package help
Summary: Development documents and examples for text-explainability
Provides: python3-text-explainability-doc
%description help
`text_explainability` provides a **generic architecture** from which well-known state-of-the-art explainability approaches for text can be composed. This modular architecture allows components to be swapped out and combined, to **quickly develop new types of explainability approaches** for (natural language) text, or to **improve a plethora of approaches by improving a single module**.
Several example methods are included, which provide **local explanations** (_explaining the prediction of a single instance_, e.g. `LIME` and `SHAP`) or **global explanations** (_explaining the dataset, or model behavior on the dataset_, e.g. `TokenFrequency` and `MMDCritic`). By replacing the default modules (e.g. local data generation, global data sampling or improved embedding methods), these methods can be improved upon or new methods can be introduced.
© Marcel Robeer, 2021
## Quick tour
**Local explanation**: explain a model's prediction on a given sample, either self-provided or taken from a dataset.
```python
from text_explainability import LIME, LocalTree
# Define sample to explain
sample = 'Explain why this is positive and not negative!'
# LIME explanation (local feature importance)
LIME().explain(sample, model).scores
# List of local rules, extracted from tree
LocalTree().explain(sample, model).rules
```
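In the snippet above, `model` is a trained text classifier that the explainers query; it is not defined in the Quick tour. A minimal sketch of what could stand in for it, assuming a plain scikit-learn text pipeline (the exact model wrapper expected by `text_explainability`, which builds on `instancelib`, is an assumption here and should be taken from the package documentation):
```python
# Minimal sketch of a classifier to stand in for `model` above.
# NOTE: this plain scikit-learn pipeline is an illustrative assumption;
# text_explainability builds on instancelib, so check the package docs
# for the exact model wrapper to pass to the explainers.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = ['What a great movie!', 'Utterly disappointing.']
train_labels = ['positive', 'negative']

pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(train_texts, train_labels)

model = pipeline  # placeholder; wrap as required by text_explainability
```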
**Global explanation**: explain the whole dataset (e.g. train set, test set), and what they look like for the ground-truth or predicted labels.
```python
from text_explainability import import_data, TokenFrequency, MMDCritic
# Import dataset
env = import_data('./datasets/test.csv', data_cols=['fulltext'], label_cols=['label'])
# Top-k most frequent tokens per label
TokenFrequency(env.dataset).explain(labelprovider=env.labels, explain_model=False, k=3)
# 2 prototypes and 1 criticisms for the dataset
MMDCritic(env.dataset)(n_prototypes=2, n_criticisms=1)
```
## Installation
See the [installation](docs/INSTALLATION.md) instructions for an extended installation guide.
| Method | Instructions |
|--------|--------------|
| `pip` | Install from [PyPI](https://pypi.org/project/text-explainability/) via `pip3 install text_explainability`. To speed up the explanation generation process use `pip3 install text_explainability[fast]`. |
| Local | Clone this repository and install via `pip3 install -e .` or locally run `python3 setup.py install`. |
## Documentation
Full documentation of the latest version is provided at [https://text-explainability.readthedocs.io/](https://text-explainability.readthedocs.io/).
## Example usage
See [example usage](example_usage.md) for an example of how the package can be used, or run the lines in `example_usage.py` to explore it interactively.
## Explanation methods included
`text_explainability` includes methods for model-agnostic _local explanation_ and _global explanation_. Each of these methods can be fully customized to fit the explainees' needs.
| Type | Explanation method | Description | Paper/link |
|------|--------------------|-------------|-------|
| *Local explanation* | `LIME` | Calculate feature attribution with _Local Interpretable Model-Agnostic Explanations_ (LIME). | [[Ribeiro2016](https://paperswithcode.com/method/lime)], [interpretable-ml/lime](https://christophm.github.io/interpretable-ml-book/lime.html) |
| | `KernelSHAP` | Calculate feature attribution with _Shapley Additive Explanations_ (SHAP). | [[Lundberg2017](https://paperswithcode.com/paper/a-unified-approach-to-interpreting-model)], [interpretable-ml/shap](https://christophm.github.io/interpretable-ml-book/shap.html) |
| | `LocalTree` | Fit a local decision tree around a single decision. | [[Guidotti2018](https://paperswithcode.com/paper/local-rule-based-explanations-of-black-box)] |
| | `LocalRules` | Fit a local sparse set of label-specific rules using `SkopeRules`. | [github/skope-rules](https://github.com/scikit-learn-contrib/skope-rules) |
| | `FoilTree` | Fit a local contrastive/counterfactual decision tree around a single decision. | [[Robeer2018](https://github.com/MarcelRobeer/ContrastiveExplanation)] |
| | `BayLIME` | Bayesian extension of LIME to include prior knowledge and obtain more consistent explanations. | [[Zhao2021](https://paperswithcode.com/paper/baylime-bayesian-local-interpretable-model)] |
| *Global explanation* | `TokenFrequency` | Show the top-_k_ most frequent tokens for each ground-truth or predicted label. | |
| | `TokenInformation` | Show the top-_k_ token mutual information for a dataset or model. | [wikipedia/mutual_information](https://en.wikipedia.org/wiki/Mutual_information) |
| | `KMedoids` | Embed instances and find top-_n_ prototypes (can also be performed for each label using `LabelwiseKMedoids`). | [interpretable-ml/prototypes](https://christophm.github.io/interpretable-ml-book/proto.html) |
| | `MMDCritic` | Embed instances and find top-_n_ prototypes and top-_n_ criticisms (can also be performed for each label using `LabelwiseMMDCritic`). | [[Kim2016](https://papers.nips.cc/paper/2016/hash/5680522b8e2bb01943234bce7bf84534-Abstract.html)], [interpretable-ml/prototypes](https://christophm.github.io/interpretable-ml-book/proto.html) |
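The local methods above are intended to be interchangeable. A hedged sketch, assuming `KernelSHAP` and `FoilTree` follow the same `.explain(sample, model)` interface shown for `LIME` in the Quick tour (with `model` being the wrapped classifier from that section):
```python
# Hedged sketch: assumes the explainers below share the LIME interface from
# the Quick tour; `model` is the wrapped classifier defined there.
from text_explainability import KernelSHAP, FoilTree

sample = 'Explain why this is positive and not negative!'

# Feature attribution via Shapley values (same pattern as LIME)
KernelSHAP().explain(sample, model).scores

# Contrastive explanation: why this prediction rather than an alternative
FoilTree().explain(sample, model)
```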
## Releases
`text_explainability` is officially released through [PyPI](https://pypi.org/project/text-explainability/).
See [CHANGELOG.md](CHANGELOG.md) for a full overview of the changes for each version.
## Extensions
<a href="https://marcelrobeer.github.io/text_sensitivity/" target="_blank"><img src="https://git.science.uu.nl/m.j.robeer/text_sensitivity/-/raw/main/img/TextLogo-Logo_large_sensitivity.png" alt="T_xt sensitivity logo" width="200px"></a><p>`text_explainability` can be extended to also perform _sensitivity testing_, checking for machine learning model robustness and fairness. The `text_sensitivity` package is available through [PyPI](https://pypi.org/project/text-sensitivity/) and fully documented at [https://text-sensitivity.rtfd.io/](https://text-sensitivity.rtfd.io/).</p>
## Citation
```bibtex
@misc{text_explainability,
  title = {Python package text\_explainability},
  author = {Marcel Robeer},
  howpublished = {\url{https://git.science.uu.nl/m.j.robeer/text_explainability}},
  year = {2021}
}
```
## Maintenance
### Contributors
- [Marcel Robeer](https://www.uu.nl/staff/MJRobeer) (`@m.j.robeer`)
- [Michiel Bron](https://www.uu.nl/staff/MPBron) (`@mpbron-phd`)
### Todo
Tasks yet to be done:
* Implement local post-hoc explanations:
  - Implement Anchors
* Implement global post-hoc explanations:
  - Representative subset
* Add support for regression models
* More complex data augmentation
  - Top-k replacement (e.g. according to LM / WordNet)
  - Tokens to exclude from being changed
  - Bag-of-words style replacements
* Add rule-based return type
* Write more tests
## Credits
- Florian Gardin, Ronan Gautier, Nicolas Goix, Bibi Ndiaye and Jean-Matthieu Schertzer. _[Skope-rules](https://github.com/scikit-learn-contrib/skope-rules)_. 2020.
- Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Dino Pedreschi, Franco Turini and Fosca Giannotti. _[Local Rule-Based Explanations of Black Box Decision Systems](https://paperswithcode.com/paper/local-rule-based-explanations-of-black-box)_. 2018.
- Been Kim, Rajiv Khanna and Oluwasanmi O. Koyejo. [Examples are not Enough, Learn to Criticize! Criticism for Interpretability](https://papers.nips.cc/paper/2016/hash/5680522b8e2bb01943234bce7bf84534-Abstract.html). _Advances in Neural Information Processing Systems (NIPS 2016)_. 2016.
- Scott Lundberg and Su-In Lee. [A Unified Approach to Interpreting Model Predictions](https://paperswithcode.com/paper/a-unified-approach-to-interpreting-model). _31st Conference on Neural Information Processing Systems (NIPS 2017)_. 2017.
- Christoph Molnar. _[Interpretable Machine Learning: A Guide for Making Black Box Models Explainable](https://christophm.github.io/interpretable-ml-book/)_. 2021.
- Marco Tulio Ribeiro, Sameer Singh and Carlos Guestrin. ["Why Should I Trust You?": Explaining the Predictions of Any Classifier](https://paperswithcode.com/method/lime). _Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2016)_. 2016.
- Marco Tulio Ribeiro, Sameer Singh and Carlos Guestrin. [Anchors: High-Precision Model-Agnostic Explanations](https://github.com/marcotcr/anchor). _AAAI Conference on Artificial Intelligence (AAAI)_. 2018.
- Jasper van der Waa, Marcel Robeer, Jurriaan van Diggelen, Matthieu Brinkhuis and Mark Neerincx. ["Contrastive Explanations with Local Foil Trees"](https://github.com/MarcelRobeer/ContrastiveExplanation). _2018 Workshop on Human Interpretability in Machine Learning (WHI 2018)_. 2018.
%prep
%autosetup -n text-explainability-0.7.0
%build
%py3_build
%install
%py3_install
install -d -m755 %{buildroot}/%{_pkgdocdir}
if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
pushd %{buildroot}
if [ -d usr/lib ]; then
find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/lib64 ]; then
find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/bin ]; then
find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/sbin ]; then
find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
fi
touch doclist.lst
if [ -d usr/share/man ]; then
find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
fi
popd
mv %{buildroot}/filelist.lst .
mv %{buildroot}/doclist.lst .
%files -n python3-text-explainability -f filelist.lst
%dir %{python3_sitelib}/*
%files help -f doclist.lst
%{_docdir}/*
%changelog
* Tue May 30 2023 Python_Bot <Python_Bot@openeuler.org> - 0.7.0-1
- Package Spec generated