%global _empty_manifest_terminate_build 0
Name: python-tf-explain
Version: 0.3.1
Release: 1
Summary: Interpretability Callbacks for Tensorflow 2.0
License: MIT
URL: https://github.com/sicara/tf-explain
Source0: https://mirrors.nju.edu.cn/pypi/web/packages/1d/ad/d9467ccd256ff5764f08bdd76da859539b35f01a9059fc8c2e412e6b912a/tf-explain-0.3.1.tar.gz
BuildArch: noarch
Requires: python3-sphinx
Requires: python3-sphinx-rtd-theme
Requires: python3-bumpversion
Requires: python3-twine
Requires: python3-black
Requires: python3-pylint
Requires: python3-pytest
Requires: python3-pytest-timeout
Requires: python3-pytest-mock
Requires: python3-pytest-cov
Requires: python3-tox
%description
# tf-explain
__tf-explain__ implements interpretability methods as TensorFlow 2.x callbacks to __ease neural network understanding__.
See [Introducing tf-explain, Interpretability for TensorFlow 2.0](https://blog.sicara.com/tf-explain-interpretability-tensorflow-2-9438b5846e35).

__Documentation__: https://tf-explain.readthedocs.io

## Installation

__tf-explain__ is available on PyPI as an alpha release. To install it:

```bash
virtualenv venv -p python3.8
source venv/bin/activate
pip install tf-explain
```

tf-explain is compatible with TensorFlow 2.x. TensorFlow is not declared as a dependency so that you can choose between the full and CPU-only packages. In addition to the previous install, run:

```bash
# For CPU or GPU
pip install tensorflow==2.6.0
```

OpenCV is also a dependency. To install it, run:

```bash
# For CPU or GPU
pip install opencv-python
```
## Quickstart
tf-explain offers two ways to apply interpretability methods. The full list of methods is in the [Available Methods](#available-methods) section.

### On trained model

The simplest option is to load a trained model and apply the methods to it.

```python
import tensorflow as tf

from tf_explain.core.grad_cam import GradCAM

# Load a pretrained model (or your own)
model = tf.keras.applications.vgg16.VGG16(weights="imagenet", include_top=True)

# Load a sample image (or multiple ones)
img = tf.keras.preprocessing.image.load_img(IMAGE_PATH, target_size=(224, 224))
img = tf.keras.preprocessing.image.img_to_array(img)
data = ([img], None)

# Instantiate the explainer and generate the explanation grid
explainer = GradCAM()
grid = explainer.explain(data, model, class_index=281)  # 281 is the tabby cat index in ImageNet
explainer.save(grid, ".", "grad_cam.png")
```
### During training

If you want to follow your model during training, you can also use each method as a Keras callback and see the results directly in [TensorBoard](https://www.tensorflow.org/tensorboard/).

```python
from tf_explain.callbacks.grad_cam import GradCAMCallback

model = [...]

callbacks = [
    GradCAMCallback(
        validation_data=(x_val, y_val),
        class_index=0,
        output_dir=output_dir,
    )
]

model.fit(x_train, y_train, batch_size=2, epochs=2, callbacks=callbacks)
```
## Available Methods
1. [Activations Visualization](#activations-visualization)
1. [Vanilla Gradients](#vanilla-gradients)
1. [Gradients*Inputs](#gradients-inputs)
1. [Occlusion Sensitivity](#occlusion-sensitivity)
1. [Grad CAM (Class Activation Maps)](#grad-cam)
1. [SmoothGrad](#smoothgrad)
1. [Integrated Gradients](#integrated-gradients)
### Activations Visualization

> Visualize how a given input comes out of a specific activation layer

```python
from tf_explain.callbacks.activations_visualization import ActivationsVisualizationCallback

model = [...]

callbacks = [
    ActivationsVisualizationCallback(
        validation_data=(x_val, y_val),
        layers_name=["activation_1"],
        output_dir=output_dir,
    ),
]

model.fit(x_train, y_train, batch_size=2, epochs=2, callbacks=callbacks)
```
### Vanilla Gradients

> Visualize the importance of the gradients with respect to the input image

```python
from tf_explain.callbacks.vanilla_gradients import VanillaGradientsCallback

model = [...]

callbacks = [
    VanillaGradientsCallback(
        validation_data=(x_val, y_val),
        class_index=0,
        output_dir=output_dir,
    ),
]

model.fit(x_train, y_train, batch_size=2, epochs=2, callbacks=callbacks)
```
### Gradients*Inputs

> Variant of [Vanilla Gradients](#vanilla-gradients) that weights the gradients by the input values

```python
from tf_explain.callbacks.gradients_inputs import GradientsInputsCallback

model = [...]

callbacks = [
    GradientsInputsCallback(
        validation_data=(x_val, y_val),
        class_index=0,
        output_dir=output_dir,
    ),
]

model.fit(x_train, y_train, batch_size=2, epochs=2, callbacks=callbacks)
```
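The idea behind Gradients*Inputs is simply an element-wise product of the gradient and the input. As a minimal pure-Python sketch (an illustration of the idea, not tf-explain's implementation; the toy linear model is an assumption for the example):

```python
# Toy linear model: f(x) = sum(w_i * x_i), so the gradient w.r.t. x is just w.
def gradients_inputs(gradients, x):
    """Gradients*Inputs attribution: element-wise product of gradient and input."""
    return [g * xi for g, xi in zip(gradients, x)]

# For w = [1.0, -2.0, 0.5] and x = [2.0, 1.0, 4.0], each input's contribution
# to the linear output is [2.0, -2.0, 2.0].
print(gradients_inputs([1.0, -2.0, 0.5], [2.0, 1.0, 4.0]))
```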
### Occlusion Sensitivity

> Visualize how parts of the image affect the neural network's confidence by occluding them iteratively

```python
from tf_explain.callbacks.occlusion_sensitivity import OcclusionSensitivityCallback

model = [...]

callbacks = [
    OcclusionSensitivityCallback(
        validation_data=(x_val, y_val),
        class_index=0,
        patch_size=4,
        output_dir=output_dir,
    ),
]

model.fit(x_train, y_train, batch_size=2, epochs=2, callbacks=callbacks)
```

*Occlusion Sensitivity for the Tabby class (stripes differentiate the tabby cat from other ImageNet cat classes)*
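The mechanics of occlusion can be sketched in a few lines of plain Python (a simplified illustration, not tf-explain's implementation; `model` here stands in for any function that scores an image):

```python
def occlusion_sensitivity(image, model, patch_size):
    """Map each patch to the confidence drop caused by zeroing it out."""
    h, w = len(image), len(image[0])
    baseline = model(image)
    heatmap = []
    for top in range(0, h, patch_size):
        row = []
        for left in range(0, w, patch_size):
            occluded = [r[:] for r in image]  # copy the image, then zero one patch
            for i in range(top, min(top + patch_size, h)):
                for j in range(left, min(left + patch_size, w)):
                    occluded[i][j] = 0.0
            row.append(baseline - model(occluded))  # larger drop = more important patch
        heatmap.append(row)
    return heatmap
```

Regions whose occlusion causes the largest confidence drop are the ones the model relies on most.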
### Grad CAM

> Visualize how parts of the image affect the neural network's output by looking into the activation maps

From [Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization](https://arxiv.org/abs/1610.02391)

```python
from tf_explain.callbacks.grad_cam import GradCAMCallback

model = [...]

callbacks = [
    GradCAMCallback(
        validation_data=(x_val, y_val),
        class_index=0,
        output_dir=output_dir,
    )
]

model.fit(x_train, y_train, batch_size=2, epochs=2, callbacks=callbacks)
```
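At its core, Grad-CAM global-average-pools the gradients to get one weight per channel, then takes a ReLU of the weighted sum of the feature maps. A minimal pure-Python sketch of that step (illustrative only, not tf-explain's implementation):

```python
def grad_cam_map(feature_maps, gradients):
    """feature_maps, gradients: H x W x C nested lists for a single image."""
    h, w, c = len(feature_maps), len(feature_maps[0]), len(feature_maps[0][0])
    # One weight per channel: global average pooling of the gradients
    weights = [sum(gradients[i][j][k] for i in range(h) for j in range(w)) / (h * w)
               for k in range(c)]
    # ReLU of the weighted sum of feature maps gives the H x W heatmap
    return [[max(sum(weights[k] * feature_maps[i][j][k] for k in range(c)), 0.0)
             for j in range(w)] for i in range(h)]
```

In the real method the heatmap is then upsampled to the input resolution and overlaid on the image.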
### SmoothGrad

> Visualize stabilized gradients on the inputs towards the decision

From [SmoothGrad: removing noise by adding noise](https://arxiv.org/abs/1706.03825)

```python
from tf_explain.callbacks.smoothgrad import SmoothGradCallback

model = [...]

callbacks = [
    SmoothGradCallback(
        validation_data=(x_val, y_val),
        class_index=0,
        num_samples=20,
        noise=1.0,
        output_dir=output_dir,
    )
]

model.fit(x_train, y_train, batch_size=2, epochs=2, callbacks=callbacks)
```
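SmoothGrad simply averages the gradient over several noisy copies of the input, which is what the `num_samples` and `noise` parameters control. A pure-Python sketch of the idea (illustrative; `grad_fn` stands in for the model's gradient function):

```python
import random

def smooth_grad(grad_fn, x, num_samples=20, noise=1.0, seed=0):
    """Average grad_fn over Gaussian-perturbed copies of x."""
    rng = random.Random(seed)
    total = [0.0] * len(x)
    for _ in range(num_samples):
        noisy = [xi + rng.gauss(0.0, noise) for xi in x]
        for i, g in enumerate(grad_fn(noisy)):
            total[i] += g
    return [t / num_samples for t in total]
```

Averaging over noisy inputs smooths out the high-frequency fluctuations of raw saliency maps.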
### Integrated Gradients

> Visualize an average of the gradients along the construction of the input towards the decision

From [Axiomatic Attribution for Deep Networks](https://arxiv.org/pdf/1703.01365.pdf)

```python
from tf_explain.callbacks.integrated_gradients import IntegratedGradientsCallback

model = [...]

callbacks = [
    IntegratedGradientsCallback(
        validation_data=(x_val, y_val),
        class_index=0,
        n_steps=20,
        output_dir=output_dir,
    )
]

model.fit(x_train, y_train, batch_size=2, epochs=2, callbacks=callbacks)
```
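Integrated Gradients averages the gradient along the straight-line path from a baseline to the input (`n_steps` interpolation points), then scales by the input difference; for well-behaved models the attributions sum to f(x) - f(baseline). A pure-Python sketch of that computation (illustrative, not tf-explain's implementation):

```python
def integrated_gradients(grad_fn, x, baseline, n_steps=20):
    """Average grad_fn along the path baseline -> x, scaled by (x - baseline)."""
    avg = [0.0] * len(x)
    for step in range(1, n_steps + 1):
        alpha = step / n_steps  # position along the straight-line path
        point = [b + alpha * (xi - b) for xi, b in zip(x, baseline)]
        for i, g in enumerate(grad_fn(point)):
            avg[i] += g / n_steps
    return [(xi - b) * a for xi, b, a in zip(x, baseline, avg)]
```

For images, the baseline is typically a black image of the same shape as the input.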
## Roadmap

- [ ] Subclassing API Support
- [ ] Additional Methods
  - [ ] [GradCAM++](https://arxiv.org/abs/1710.11063)
  - [x] [Integrated Gradients](https://arxiv.org/abs/1703.01365)
  - [x] [Guided SmoothGrad](https://arxiv.org/abs/1706.03825)
  - [ ] [LRP](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0130140)
- [ ] Auto-generated API Documentation & Documentation Testing
## Contributing
To contribute to the project, please read the [dedicated section](./CONTRIBUTING.md).
## Citation
A [citation file](./CITATION.cff) is available for citing this work.
%package -n python3-tf-explain
Summary: Interpretability Callbacks for Tensorflow 2.0
Provides: python-tf-explain
BuildRequires: python3-devel
BuildRequires: python3-setuptools
BuildRequires: python3-pip
%description -n python3-tf-explain
# tf-explain
__tf-explain__ implements interpretability methods as TensorFlow 2.x callbacks to __ease neural network understanding__.
See [Introducing tf-explain, Interpretability for TensorFlow 2.0](https://blog.sicara.com/tf-explain-interpretability-tensorflow-2-9438b5846e35).

__Documentation__: https://tf-explain.readthedocs.io

## Installation

__tf-explain__ is available on PyPI as an alpha release. To install it:

```bash
virtualenv venv -p python3.8
source venv/bin/activate
pip install tf-explain
```

tf-explain is compatible with TensorFlow 2.x. TensorFlow is not declared as a dependency so that you can choose between the full and CPU-only packages. In addition to the previous install, run:

```bash
# For CPU or GPU
pip install tensorflow==2.6.0
```

OpenCV is also a dependency. To install it, run:

```bash
# For CPU or GPU
pip install opencv-python
```
## Quickstart
tf-explain offers two ways to apply interpretability methods. The full list of methods is in the [Available Methods](#available-methods) section.

### On trained model

The simplest option is to load a trained model and apply the methods to it.

```python
import tensorflow as tf

from tf_explain.core.grad_cam import GradCAM

# Load a pretrained model (or your own)
model = tf.keras.applications.vgg16.VGG16(weights="imagenet", include_top=True)

# Load a sample image (or multiple ones)
img = tf.keras.preprocessing.image.load_img(IMAGE_PATH, target_size=(224, 224))
img = tf.keras.preprocessing.image.img_to_array(img)
data = ([img], None)

# Instantiate the explainer and generate the explanation grid
explainer = GradCAM()
grid = explainer.explain(data, model, class_index=281)  # 281 is the tabby cat index in ImageNet
explainer.save(grid, ".", "grad_cam.png")
```
### During training

If you want to follow your model during training, you can also use each method as a Keras callback and see the results directly in [TensorBoard](https://www.tensorflow.org/tensorboard/).

```python
from tf_explain.callbacks.grad_cam import GradCAMCallback

model = [...]

callbacks = [
    GradCAMCallback(
        validation_data=(x_val, y_val),
        class_index=0,
        output_dir=output_dir,
    )
]

model.fit(x_train, y_train, batch_size=2, epochs=2, callbacks=callbacks)
```
## Available Methods
1. [Activations Visualization](#activations-visualization)
1. [Vanilla Gradients](#vanilla-gradients)
1. [Gradients*Inputs](#gradients-inputs)
1. [Occlusion Sensitivity](#occlusion-sensitivity)
1. [Grad CAM (Class Activation Maps)](#grad-cam)
1. [SmoothGrad](#smoothgrad)
1. [Integrated Gradients](#integrated-gradients)
### Activations Visualization

> Visualize how a given input comes out of a specific activation layer

```python
from tf_explain.callbacks.activations_visualization import ActivationsVisualizationCallback

model = [...]

callbacks = [
    ActivationsVisualizationCallback(
        validation_data=(x_val, y_val),
        layers_name=["activation_1"],
        output_dir=output_dir,
    ),
]

model.fit(x_train, y_train, batch_size=2, epochs=2, callbacks=callbacks)
```
### Vanilla Gradients

> Visualize the importance of the gradients with respect to the input image

```python
from tf_explain.callbacks.vanilla_gradients import VanillaGradientsCallback

model = [...]

callbacks = [
    VanillaGradientsCallback(
        validation_data=(x_val, y_val),
        class_index=0,
        output_dir=output_dir,
    ),
]

model.fit(x_train, y_train, batch_size=2, epochs=2, callbacks=callbacks)
```
### Gradients*Inputs

> Variant of [Vanilla Gradients](#vanilla-gradients) that weights the gradients by the input values

```python
from tf_explain.callbacks.gradients_inputs import GradientsInputsCallback

model = [...]

callbacks = [
    GradientsInputsCallback(
        validation_data=(x_val, y_val),
        class_index=0,
        output_dir=output_dir,
    ),
]

model.fit(x_train, y_train, batch_size=2, epochs=2, callbacks=callbacks)
```
### Occlusion Sensitivity

> Visualize how parts of the image affect the neural network's confidence by occluding them iteratively

```python
from tf_explain.callbacks.occlusion_sensitivity import OcclusionSensitivityCallback

model = [...]

callbacks = [
    OcclusionSensitivityCallback(
        validation_data=(x_val, y_val),
        class_index=0,
        patch_size=4,
        output_dir=output_dir,
    ),
]

model.fit(x_train, y_train, batch_size=2, epochs=2, callbacks=callbacks)
```

*Occlusion Sensitivity for the Tabby class (stripes differentiate the tabby cat from other ImageNet cat classes)*
### Grad CAM

> Visualize how parts of the image affect the neural network's output by looking into the activation maps

From [Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization](https://arxiv.org/abs/1610.02391)

```python
from tf_explain.callbacks.grad_cam import GradCAMCallback

model = [...]

callbacks = [
    GradCAMCallback(
        validation_data=(x_val, y_val),
        class_index=0,
        output_dir=output_dir,
    )
]

model.fit(x_train, y_train, batch_size=2, epochs=2, callbacks=callbacks)
```
### SmoothGrad

> Visualize stabilized gradients on the inputs towards the decision

From [SmoothGrad: removing noise by adding noise](https://arxiv.org/abs/1706.03825)

```python
from tf_explain.callbacks.smoothgrad import SmoothGradCallback

model = [...]

callbacks = [
    SmoothGradCallback(
        validation_data=(x_val, y_val),
        class_index=0,
        num_samples=20,
        noise=1.0,
        output_dir=output_dir,
    )
]

model.fit(x_train, y_train, batch_size=2, epochs=2, callbacks=callbacks)
```
### Integrated Gradients

> Visualize an average of the gradients along the construction of the input towards the decision

From [Axiomatic Attribution for Deep Networks](https://arxiv.org/pdf/1703.01365.pdf)

```python
from tf_explain.callbacks.integrated_gradients import IntegratedGradientsCallback

model = [...]

callbacks = [
    IntegratedGradientsCallback(
        validation_data=(x_val, y_val),
        class_index=0,
        n_steps=20,
        output_dir=output_dir,
    )
]

model.fit(x_train, y_train, batch_size=2, epochs=2, callbacks=callbacks)
```
## Roadmap

- [ ] Subclassing API Support
- [ ] Additional Methods
  - [ ] [GradCAM++](https://arxiv.org/abs/1710.11063)
  - [x] [Integrated Gradients](https://arxiv.org/abs/1703.01365)
  - [x] [Guided SmoothGrad](https://arxiv.org/abs/1706.03825)
  - [ ] [LRP](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0130140)
- [ ] Auto-generated API Documentation & Documentation Testing
## Contributing
To contribute to the project, please read the [dedicated section](./CONTRIBUTING.md).
## Citation
A [citation file](./CITATION.cff) is available for citing this work.
%package help
Summary: Development documents and examples for tf-explain
Provides: python3-tf-explain-doc
%description help
# tf-explain
__tf-explain__ implements interpretability methods as TensorFlow 2.x callbacks to __ease neural network understanding__.
See [Introducing tf-explain, Interpretability for TensorFlow 2.0](https://blog.sicara.com/tf-explain-interpretability-tensorflow-2-9438b5846e35).

__Documentation__: https://tf-explain.readthedocs.io

## Installation

__tf-explain__ is available on PyPI as an alpha release. To install it:

```bash
virtualenv venv -p python3.8
source venv/bin/activate
pip install tf-explain
```

tf-explain is compatible with TensorFlow 2.x. TensorFlow is not declared as a dependency so that you can choose between the full and CPU-only packages. In addition to the previous install, run:

```bash
# For CPU or GPU
pip install tensorflow==2.6.0
```

OpenCV is also a dependency. To install it, run:

```bash
# For CPU or GPU
pip install opencv-python
```
## Quickstart
tf-explain offers two ways to apply interpretability methods. The full list of methods is in the [Available Methods](#available-methods) section.

### On trained model

The simplest option is to load a trained model and apply the methods to it.

```python
import tensorflow as tf

from tf_explain.core.grad_cam import GradCAM

# Load a pretrained model (or your own)
model = tf.keras.applications.vgg16.VGG16(weights="imagenet", include_top=True)

# Load a sample image (or multiple ones)
img = tf.keras.preprocessing.image.load_img(IMAGE_PATH, target_size=(224, 224))
img = tf.keras.preprocessing.image.img_to_array(img)
data = ([img], None)

# Instantiate the explainer and generate the explanation grid
explainer = GradCAM()
grid = explainer.explain(data, model, class_index=281)  # 281 is the tabby cat index in ImageNet
explainer.save(grid, ".", "grad_cam.png")
```
### During training

If you want to follow your model during training, you can also use each method as a Keras callback and see the results directly in [TensorBoard](https://www.tensorflow.org/tensorboard/).

```python
from tf_explain.callbacks.grad_cam import GradCAMCallback

model = [...]

callbacks = [
    GradCAMCallback(
        validation_data=(x_val, y_val),
        class_index=0,
        output_dir=output_dir,
    )
]

model.fit(x_train, y_train, batch_size=2, epochs=2, callbacks=callbacks)
```
## Available Methods
1. [Activations Visualization](#activations-visualization)
1. [Vanilla Gradients](#vanilla-gradients)
1. [Gradients*Inputs](#gradients-inputs)
1. [Occlusion Sensitivity](#occlusion-sensitivity)
1. [Grad CAM (Class Activation Maps)](#grad-cam)
1. [SmoothGrad](#smoothgrad)
1. [Integrated Gradients](#integrated-gradients)
### Activations Visualization

> Visualize how a given input comes out of a specific activation layer

```python
from tf_explain.callbacks.activations_visualization import ActivationsVisualizationCallback

model = [...]

callbacks = [
    ActivationsVisualizationCallback(
        validation_data=(x_val, y_val),
        layers_name=["activation_1"],
        output_dir=output_dir,
    ),
]

model.fit(x_train, y_train, batch_size=2, epochs=2, callbacks=callbacks)
```
### Vanilla Gradients

> Visualize the importance of the gradients with respect to the input image

```python
from tf_explain.callbacks.vanilla_gradients import VanillaGradientsCallback

model = [...]

callbacks = [
    VanillaGradientsCallback(
        validation_data=(x_val, y_val),
        class_index=0,
        output_dir=output_dir,
    ),
]

model.fit(x_train, y_train, batch_size=2, epochs=2, callbacks=callbacks)
```
### Gradients*Inputs

> Variant of [Vanilla Gradients](#vanilla-gradients) that weights the gradients by the input values

```python
from tf_explain.callbacks.gradients_inputs import GradientsInputsCallback

model = [...]

callbacks = [
    GradientsInputsCallback(
        validation_data=(x_val, y_val),
        class_index=0,
        output_dir=output_dir,
    ),
]

model.fit(x_train, y_train, batch_size=2, epochs=2, callbacks=callbacks)
```
### Occlusion Sensitivity

> Visualize how parts of the image affect the neural network's confidence by occluding them iteratively

```python
from tf_explain.callbacks.occlusion_sensitivity import OcclusionSensitivityCallback

model = [...]

callbacks = [
    OcclusionSensitivityCallback(
        validation_data=(x_val, y_val),
        class_index=0,
        patch_size=4,
        output_dir=output_dir,
    ),
]

model.fit(x_train, y_train, batch_size=2, epochs=2, callbacks=callbacks)
```

*Occlusion Sensitivity for the Tabby class (stripes differentiate the tabby cat from other ImageNet cat classes)*
### Grad CAM

> Visualize how parts of the image affect the neural network's output by looking into the activation maps

From [Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization](https://arxiv.org/abs/1610.02391)

```python
from tf_explain.callbacks.grad_cam import GradCAMCallback

model = [...]

callbacks = [
    GradCAMCallback(
        validation_data=(x_val, y_val),
        class_index=0,
        output_dir=output_dir,
    )
]

model.fit(x_train, y_train, batch_size=2, epochs=2, callbacks=callbacks)
```
### SmoothGrad

> Visualize stabilized gradients on the inputs towards the decision

From [SmoothGrad: removing noise by adding noise](https://arxiv.org/abs/1706.03825)

```python
from tf_explain.callbacks.smoothgrad import SmoothGradCallback

model = [...]

callbacks = [
    SmoothGradCallback(
        validation_data=(x_val, y_val),
        class_index=0,
        num_samples=20,
        noise=1.0,
        output_dir=output_dir,
    )
]

model.fit(x_train, y_train, batch_size=2, epochs=2, callbacks=callbacks)
```
### Integrated Gradients

> Visualize an average of the gradients along the construction of the input towards the decision

From [Axiomatic Attribution for Deep Networks](https://arxiv.org/pdf/1703.01365.pdf)

```python
from tf_explain.callbacks.integrated_gradients import IntegratedGradientsCallback

model = [...]

callbacks = [
    IntegratedGradientsCallback(
        validation_data=(x_val, y_val),
        class_index=0,
        n_steps=20,
        output_dir=output_dir,
    )
]

model.fit(x_train, y_train, batch_size=2, epochs=2, callbacks=callbacks)
```
## Roadmap

- [ ] Subclassing API Support
- [ ] Additional Methods
  - [ ] [GradCAM++](https://arxiv.org/abs/1710.11063)
  - [x] [Integrated Gradients](https://arxiv.org/abs/1703.01365)
  - [x] [Guided SmoothGrad](https://arxiv.org/abs/1706.03825)
  - [ ] [LRP](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0130140)
- [ ] Auto-generated API Documentation & Documentation Testing
## Contributing
To contribute to the project, please read the [dedicated section](./CONTRIBUTING.md).
## Citation
A [citation file](./CITATION.cff) is available for citing this work.
%prep
%autosetup -n tf-explain-0.3.1
%build
%py3_build
%install
%py3_install
install -d -m755 %{buildroot}/%{_pkgdocdir}
if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
pushd %{buildroot}
if [ -d usr/lib ]; then
find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/lib64 ]; then
find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/bin ]; then
find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/sbin ]; then
find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
fi
touch doclist.lst
if [ -d usr/share/man ]; then
find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
fi
popd
mv %{buildroot}/filelist.lst .
mv %{buildroot}/doclist.lst .
%files -n python3-tf-explain -f filelist.lst
%dir %{python3_sitelib}/*
%files help -f doclist.lst
%{_docdir}/*
%changelog
* Wed May 31 2023 Python_Bot - 0.3.1-1
- Package Spec generated