%global _empty_manifest_terminate_build 0
Name: python-alibi-detect
Version: 0.11.1
Release: 1
Summary: Algorithms for outlier detection, concept drift and metrics.
License: Apache-2.0
URL: https://github.com/SeldonIO/alibi-detect
Source0: https://mirrors.nju.edu.cn/pypi/web/packages/f8/b3/7f4cfb302b8f3aa605bdb63b28fe3b0e59825461e05af00061c595859457/alibi-detect-0.11.1.tar.gz
BuildArch: noarch

Requires: python3-matplotlib
Requires: python3-numpy
Requires: python3-pandas
Requires: python3-Pillow
Requires: python3-opencv-python
Requires: python3-scipy
Requires: python3-scikit-image
Requires: python3-scikit-learn
Requires: python3-transformers
Requires: python3-dill
Requires: python3-tqdm
Requires: python3-requests
Requires: python3-pydantic
Requires: python3-toml
Requires: python3-catalogue
Requires: python3-numba
Requires: python3-typing-extensions
Requires: python3-prophet
Requires: python3-tensorflow-probability
Requires: python3-tensorflow
Requires: python3-pykeops
Requires: python3-torch

%description
[Alibi Detect](https://github.com/SeldonIO/alibi-detect) is an open source Python library focused on **outlier**, **adversarial** and **drift** detection. The package aims to cover both online and offline detectors for tabular data, text, images and time series. Both **TensorFlow** and **PyTorch** backends are supported for drift detection.

* [Documentation](https://docs.seldon.io/projects/alibi-detect/en/stable/)

For more background on the importance of monitoring outliers and distributions in a production setting, check out [this talk](https://slideslive.com/38931758/monitoring-and-explainability-of-models-in-production?ref=speaker-37384-latest) from the *Challenges in Deploying and Monitoring Machine Learning Systems* ICML 2020 workshop, based on the paper [Monitoring and explainability of models in production](https://arxiv.org/abs/2007.06299) and referencing Alibi Detect.

For a thorough introduction to drift detection, check out [Protecting Your Machine Learning Against Drift: An Introduction](https://youtu.be/tL5sEaQha5o). The talk covers what drift is, why it pays to detect it and the different types of drift, explains how drift can be detected in a principled manner, and describes the anatomy of a drift detector.

## Table of Contents

- [Installation and Usage](#installation-and-usage)
  - [With pip](#with-pip)
  - [With conda](#with-conda)
  - [Usage](#usage)
- [Supported Algorithms](#supported-algorithms)
  - [Outlier Detection](#outlier-detection)
  - [Adversarial Detection](#adversarial-detection)
  - [Drift Detection](#drift-detection)
    - [TensorFlow and PyTorch support](#tensorflow-and-pytorch-support)
    - [Built-in preprocessing steps](#built-in-preprocessing-steps)
- [Reference List](#reference-list)
  - [Outlier Detection](#outlier-detection-1)
  - [Adversarial Detection](#adversarial-detection-1)
  - [Drift Detection](#drift-detection-1)
- [Datasets](#datasets)
  - [Sequential Data and Time Series](#sequential-data-and-time-series)
  - [Images](#images)
  - [Tabular](#tabular)
- [Models](#models)
- [Integrations](#integrations)
- [Citations](#citations)

## Installation and Usage

The package, `alibi-detect`, can be installed from:

- PyPI or GitHub source (with `pip`)
- Anaconda (with `conda`/`mamba`)

### With pip

- alibi-detect can be installed from [PyPI](https://pypi.org/project/alibi-detect):

  ```bash
  pip install alibi-detect
  ```

- Alternatively, the development version can be installed:

  ```bash
  pip install git+https://github.com/SeldonIO/alibi-detect.git
  ```

- To install with the TensorFlow backend:

  ```bash
  pip install alibi-detect[tensorflow]
  ```

- To install with the PyTorch backend:

  ```bash
  pip install alibi-detect[torch]
  ```

- To install with the KeOps backend:

  ```bash
  pip install alibi-detect[keops]
  ```

- To use the `Prophet` time series outlier detector:

  ```bash
  pip install alibi-detect[prophet]
  ```

### With conda

To install from [conda-forge](https://conda-forge.org/) it is recommended to use [mamba](https://mamba.readthedocs.io/en/stable/), which can be installed to the *base* conda environment with:

```bash
conda install mamba -n base -c conda-forge
```

To install alibi-detect:

```bash
mamba install -c conda-forge alibi-detect
```

### Usage

We will use the [VAE outlier detector](https://docs.seldon.io/projects/alibi-detect/en/stable/od/methods/vae.html) to illustrate the API.

```python
from alibi_detect.od import OutlierVAE
from alibi_detect.saving import save_detector, load_detector

# initialize and fit detector
# encoder_net and decoder_net are user-defined tf.keras models
od = OutlierVAE(threshold=0.1, encoder_net=encoder_net, decoder_net=decoder_net, latent_dim=1024)
od.fit(x_train)

# make predictions
preds = od.predict(x_test)

# save and load detectors
filepath = './my_detector/'
save_detector(od, filepath)
od = load_detector(filepath)
```

The predictions are returned in a dictionary with `meta` and `data` as keys. `meta` contains the detector's metadata, while `data` is itself a dictionary with the actual predictions: the outlier, adversarial or drift scores and thresholds, as well as the predictions of whether instances are, e.g., outliers or not. The exact details can vary slightly from method to method, so we encourage the reader to become familiar with the [types of algorithms supported](https://docs.seldon.io/projects/alibi-detect/en/stable/overview/algorithms.html).

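To make the structure of the returned dictionary concrete, a minimal sketch of inspecting the output of the VAE detector above follows. The key names under `data` (`is_outlier`, `instance_score`, `feature_score`) are taken from the VAE outlier detector documentation and may differ for other detectors; treat this as an illustration rather than a universal contract.

```python
# continuing the example above: `preds` is the dictionary returned by od.predict(x_test)
print(preds['meta'])                         # detector metadata (name, detector type, data type, ...)
print(preds['data']['is_outlier'][:10])      # binary outlier flag per instance
print(preds['data']['instance_score'][:10])  # outlier score per instance
# feature-level detectors such as the VAE also return per-feature scores
print(preds['data']['feature_score'])
```
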
## Supported Algorithms

The following tables show the advised use cases for each algorithm. The column *Feature Level* indicates whether the detection can be done at the feature level, e.g. per pixel for an image. Check the [algorithm reference list](#reference-list) for more information, with links to the documentation and original papers as well as examples for each of the detectors.

### Outlier Detection

| Detector             | Tabular | Image | Time Series | Text | Categorical Features | Online | Feature Level |
|:---------------------|:-------:|:-----:|:-----------:|:----:|:--------------------:|:------:|:-------------:|
| Isolation Forest     | ✔ |   |   |   | ✔ |   |   |
| Mahalanobis Distance | ✔ |   |   |   | ✔ | ✔ |   |
| AE                   | ✔ | ✔ |   |   |   |   | ✔ |
| VAE                  | ✔ | ✔ |   |   |   |   | ✔ |
| AEGMM                | ✔ | ✔ |   |   |   |   |   |
| VAEGMM               | ✔ | ✔ |   |   |   |   |   |
| Likelihood Ratios    | ✔ | ✔ | ✔ |   | ✔ |   | ✔ |
| Prophet              |   |   | ✔ |   |   |   |   |
| Spectral Residual    |   |   | ✔ |   |   | ✔ | ✔ |
| Seq2Seq              |   |   | ✔ |   |   |   | ✔ |

### Adversarial Detection

| Detector           | Tabular | Image | Time Series | Text | Categorical Features | Online | Feature Level |
|:-------------------|:-------:|:-----:|:-----------:|:----:|:--------------------:|:------:|:-------------:|
| Adversarial AE     | ✔ | ✔ |   |   |   |   |   |
| Model distillation | ✔ | ✔ | ✔ | ✔ | ✔ |   |   |

### Drift Detection

| Detector                         | Tabular | Image | Time Series | Text | Categorical Features | Online | Feature Level |
|:---------------------------------|:-------:|:-----:|:-----------:|:----:|:--------------------:|:------:|:-------------:|
| Kolmogorov-Smirnov               | ✔ | ✔ |   | ✔ | ✔ |   | ✔ |
| Cramér-von Mises                 | ✔ | ✔ |   |   |   | ✔ | ✔ |
| Fisher's Exact Test              | ✔ |   |   |   | ✔ | ✔ | ✔ |
| Maximum Mean Discrepancy (MMD)   | ✔ | ✔ |   | ✔ | ✔ | ✔ |   |
| Learned Kernel MMD               | ✔ | ✔ |   | ✔ | ✔ |   |   |
| Context-aware MMD                | ✔ | ✔ | ✔ | ✔ | ✔ |   |   |
| Least-Squares Density Difference | ✔ | ✔ |   | ✔ | ✔ | ✔ |   |
| Chi-Squared                      | ✔ |   |   |   | ✔ |   | ✔ |
| Mixed-type tabular data          | ✔ |   |   |   | ✔ |   | ✔ |
| Classifier                       | ✔ | ✔ | ✔ | ✔ | ✔ |   |   |
| Spot-the-diff                    | ✔ | ✔ | ✔ | ✔ | ✔ |   | ✔ |
| Classifier Uncertainty           | ✔ | ✔ | ✔ | ✔ | ✔ |   |   |
| Regressor Uncertainty            | ✔ | ✔ | ✔ | ✔ | ✔ |   |   |

#### TensorFlow and PyTorch support

The drift detectors support TensorFlow, PyTorch and (where applicable) [KeOps](https://www.kernel-operations.io/keops/index.html) backends. However, Alibi Detect does not install these by default. See the [installation options](#installation-and-usage) for more details.

```python
from alibi_detect.cd import MMDDrift

cd = MMDDrift(x_ref, backend='tensorflow', p_val=.05)
preds = cd.predict(x)
```

The same detector in PyTorch:

```python
cd = MMDDrift(x_ref, backend='pytorch', p_val=.05)
preds = cd.predict(x)
```

Or in KeOps:

```python
cd = MMDDrift(x_ref, backend='keops', p_val=.05)
preds = cd.predict(x)
```

#### Built-in preprocessing steps

Alibi Detect also comes with various preprocessing steps, such as randomly initialized encoders, pretrained text embeddings to detect drift on (using the [transformers](https://github.com/huggingface/transformers) library) and extraction of hidden layers from machine learning models. This makes it possible to detect different types of drift, such as **covariate and predicted distribution shift**. The preprocessing steps are again supported in TensorFlow and PyTorch.

```python
from functools import partial

from alibi_detect.cd import MMDDrift
from alibi_detect.cd.tensorflow import HiddenOutput, preprocess_drift

model = ...  # trained TensorFlow model: tf.keras.Model or tf.keras.Sequential
preprocess_fn = partial(preprocess_drift, model=HiddenOutput(model, layer=-1), batch_size=128)

cd = MMDDrift(x_ref, backend='tensorflow', p_val=.05, preprocess_fn=preprocess_fn)
preds = cd.predict(x)
```

Check the example notebooks (e.g. [CIFAR10](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_mmd_cifar10.html), [movie reviews](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_text_imdb.html)) for more details.

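Since the preprocessing utilities are also available for PyTorch, a rough sketch of the equivalent PyTorch setup is included below. It assumes a trained `torch.nn.Module` classifier and that `alibi_detect.cd.pytorch` mirrors the TensorFlow helpers above (including a `device` argument for `preprocess_drift`); consult the detector documentation for the exact signatures.

```python
from functools import partial

import torch

from alibi_detect.cd import MMDDrift
from alibi_detect.cd.pytorch import HiddenOutput, preprocess_drift

model = ...  # trained PyTorch model (torch.nn.Module); placeholder, supply your own
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# use the penultimate layer's activations as the representation for drift detection
preprocess_fn = partial(preprocess_drift, model=HiddenOutput(model, layer=-1),
                        device=device, batch_size=128)

cd = MMDDrift(x_ref, backend='pytorch', p_val=.05, preprocess_fn=preprocess_fn)
preds = cd.predict(x)
```
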
### Reference List

#### Outlier Detection

- [Isolation Forest](https://docs.seldon.io/projects/alibi-detect/en/stable/od/methods/iforest.html) ([FT Liu et al., 2008](https://cs.nju.edu.cn/zhouzh/zhouzh.files/publication/icdm08b.pdf))
  - Example: [Network Intrusion](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/od_if_kddcup.html)
- [Mahalanobis Distance](https://docs.seldon.io/projects/alibi-detect/en/stable/od/methods/mahalanobis.html) ([Mahalanobis, 1936](https://insa.nic.in/writereaddata/UpLoadedFiles/PINSA/Vol02_1936_1_Art05.pdf))
  - Example: [Network Intrusion](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/od_mahalanobis_kddcup.html)
- [Auto-Encoder (AE)](https://docs.seldon.io/projects/alibi-detect/en/stable/od/methods/ae.html)
  - Example: [CIFAR10](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/od_ae_cifar10.html)
- [Variational Auto-Encoder (VAE)](https://docs.seldon.io/projects/alibi-detect/en/stable/od/methods/vae.html) ([Kingma et al., 2013](https://arxiv.org/abs/1312.6114))
  - Examples: [Network Intrusion](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/od_vae_kddcup.html), [CIFAR10](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/od_vae_cifar10.html)
- [Auto-Encoding Gaussian Mixture Model (AEGMM)](https://docs.seldon.io/projects/alibi-detect/en/stable/od/methods/aegmm.html) ([Zong et al., 2018](https://openreview.net/forum?id=BJJLHbb0-))
  - Example: [Network Intrusion](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/od_aegmm_kddcup.html)
- [Variational Auto-Encoding Gaussian Mixture Model (VAEGMM)](https://docs.seldon.io/projects/alibi-detect/en/stable/od/methods/vaegmm.html)
  - Example: [Network Intrusion](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/od_aegmm_kddcup.html)
- [Likelihood Ratios](https://docs.seldon.io/projects/alibi-detect/en/stable/od/methods/llr.html) ([Ren et al., 2019](https://arxiv.org/abs/1906.02845))
  - Examples: [Genome](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/od_llr_genome.html), [Fashion-MNIST vs. MNIST](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/od_llr_mnist.html)
- [Prophet Time Series Outlier Detector](https://docs.seldon.io/projects/alibi-detect/en/stable/od/methods/prophet.html) ([Taylor et al., 2018](https://peerj.com/preprints/3190/))
  - Example: [Weather Forecast](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/od_prophet_weather.html)
- [Spectral Residual Time Series Outlier Detector](https://docs.seldon.io/projects/alibi-detect/en/stable/od/methods/sr.html) ([Ren et al., 2019](https://arxiv.org/abs/1906.03821))
  - Example: [Synthetic Dataset](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/od_sr_synth.html)
- [Sequence-to-Sequence (Seq2Seq) Outlier Detector](https://docs.seldon.io/projects/alibi-detect/en/stable/od/methods/seq2seq.html) ([Sutskever et al., 2014](https://papers.nips.cc/paper/5346-sequence-to-sequence-learning-with-neural-networks.pdf); [Park et al., 2017](https://arxiv.org/pdf/1711.00614.pdf))
  - Examples: [ECG](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/od_seq2seq_ecg.html), [Synthetic Dataset](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/od_seq2seq_synth.html)

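The outlier detectors listed above share the fit/predict API shown in the Usage section. As an illustration, here is a minimal sketch with the Isolation Forest detector; it assumes the `IForest` class in `alibi_detect.od` exposes an `n_estimators` argument and the same `fit`/`infer_threshold`/`predict` methods as the other outlier detectors, so treat it as a sketch rather than a verbatim recipe.

```python
import numpy as np

from alibi_detect.od import IForest

x_train = np.random.randn(1000, 10)  # placeholder reference data
x_test = np.random.randn(100, 10)    # placeholder test data

od = IForest(threshold=None, n_estimators=100)  # threshold inferred below
od.fit(x_train)
od.infer_threshold(x_train, threshold_perc=95)  # flag roughly the 5% most extreme scores
preds = od.predict(x_test)
print(preds['data']['is_outlier'])
```
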
#### Adversarial Detection

- [Adversarial Auto-Encoder](https://docs.seldon.io/projects/alibi-detect/en/stable/ad/methods/adversarialae.html) ([Vacanti and Van Looveren, 2020](https://arxiv.org/abs/2002.09364))
  - Example: [CIFAR10](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/ad_ae_cifar10.html)
- [Model distillation](https://docs.seldon.io/projects/alibi-detect/en/stable/ad/methods/modeldistillation.html)
  - Example: [CIFAR10](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_distillation_cifar10.html)

#### Drift Detection

- [Kolmogorov-Smirnov](https://docs.seldon.io/projects/alibi-detect/en/stable/cd/methods/ksdrift.html)
  - Example: [CIFAR10](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_ks_cifar10.html), [molecular graphs](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_mol.html), [movie reviews](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_text_imdb.html)
- [Cramér-von Mises](https://docs.seldon.io/projects/alibi-detect/en/stable/cd/methods/cvmdrift.html)
  - Example: [Penguins](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_supervised_penguins.html)
- [Fisher's Exact Test](https://docs.seldon.io/projects/alibi-detect/en/stable/cd/methods/fetdrift.html)
  - Example: [Penguins](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_supervised_penguins.html)
- [Least-Squares Density Difference](https://docs.seldon.io/projects/alibi-detect/en/stable/cd/methods/lsdddrift.html) ([Bu et al, 2016](https://alippi.faculty.polimi.it/articoli/A%20Pdf%20free%20Change%20Detection%20Test%20Based%20on%20Density%20Difference%20Estimation.pdf))
- [Maximum Mean Discrepancy](https://docs.seldon.io/projects/alibi-detect/en/stable/cd/methods/mmddrift.html) ([Gretton et al, 2012](http://jmlr.csail.mit.edu/papers/v13/gretton12a.html))
  - Example: [CIFAR10](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_mmd_cifar10.html), [molecular graphs](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_mol.html), [movie reviews](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_text_imdb.html), [Amazon reviews](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_text_amazon.html)
- [Learned Kernel MMD](https://docs.seldon.io/projects/alibi-detect/en/stable/cd/methods/learnedkerneldrift.html) ([Liu et al, 2020](https://arxiv.org/abs/2002.09116))
  - Example: [CIFAR10](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_clf_cifar10.html)
- [Context-aware MMD](https://docs.seldon.io/projects/alibi-detect/en/stable/cd/methods/contextmmddrift.html) ([Cobb and Van Looveren, 2022](https://arxiv.org/abs/2203.08644))
  - Example: [ECG](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_context_ecg.html), [news topics](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_context_20newsgroup.html)
- [Chi-Squared](https://docs.seldon.io/projects/alibi-detect/en/stable/cd/methods/chisquaredrift.html)
  - Example: [Income Prediction](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_chi2ks_adult.html)
- [Mixed-type tabular data](https://docs.seldon.io/projects/alibi-detect/en/stable/cd/methods/tabulardrift.html)
  - Example: [Income Prediction](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_chi2ks_adult.html)
- [Classifier](https://docs.seldon.io/projects/alibi-detect/en/stable/cd/methods/classifierdrift.html) ([Lopez-Paz and Oquab, 2017](https://openreview.net/forum?id=SJkXfE5xx))
  - Example: [CIFAR10](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_clf_cifar10.html), [Amazon reviews](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_text_amazon.html)
- [Spot-the-diff](https://docs.seldon.io/projects/alibi-detect/en/stable/cd/methods/spotthediffdrift.html) (adaptation of [Jitkrittum et al, 2016](https://arxiv.org/abs/1605.06796))
  - Example: [MNIST and Wine quality](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/spot_the_diff_mnist_win.html)
- [Classifier and Regressor Uncertainty](https://docs.seldon.io/projects/alibi-detect/en/stable/cd/methods/modeluncdrift.html)
  - Example: [CIFAR10 and Wine](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_model_unc_cifar10_wine.html), [molecular graphs](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_mol.html)
- [Online Maximum Mean Discrepancy](https://docs.seldon.io/projects/alibi-detect/en/stable/cd/methods/onlinemmddrift.html)
  - Example: [Wine Quality](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_online_wine.html), [Camelyon medical imaging](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_online_camelyon.html)
- [Online Least-Squares Density Difference](https://docs.seldon.io/projects/alibi-detect/en/stable/cd/methods/onlinemmddrift.html) ([Bu et al, 2017](https://ieeexplore.ieee.org/abstract/document/7890493))
  - Example: [Wine Quality](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_online_wine.html)

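Several of the detectors above also come in online variants that process instances one at a time. A rough sketch follows, assuming the `MMDDriftOnline` class with `ert` (expected run time between false alarms) and `window_size` arguments as described in the online MMD documentation linked above.

```python
import numpy as np

from alibi_detect.cd import MMDDriftOnline

x_ref = np.random.randn(1000, 10)  # placeholder reference data
stream = np.random.randn(200, 10)  # placeholder stream of incoming instances

# expected run time (in time steps) between false alarms, and size of the test window
cd = MMDDriftOnline(x_ref, ert=150, window_size=20, backend='tensorflow')

for x_t in stream:
    pred = cd.predict(x_t)
    if pred['data']['is_drift']:
        print('Drift detected')
        break
```
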
## Datasets

The package also contains functionality in `alibi_detect.datasets` to easily fetch a number of datasets for different modalities. For each dataset, either the data and labels or a *Bunch* object with the data, labels and optional metadata is returned. Example:

```python
from alibi_detect.datasets import fetch_ecg

(X_train, y_train), (X_test, y_test) = fetch_ecg(return_X_y=True)
```

### Sequential Data and Time Series

- **Genome Dataset**: `fetch_genome`
  - Bacteria genomics dataset for out-of-distribution detection, released as part of [Likelihood Ratios for Out-of-Distribution Detection](https://arxiv.org/abs/1906.02845). From the original *TL;DR*: *The dataset contains genomic sequences of 250 base pairs from 10 in-distribution bacteria classes for training, 60 OOD bacteria classes for validation, and another 60 different OOD bacteria classes for test*. The training, validation and test sets contain 1, 7 and 7 million sequences respectively. For detailed info on the dataset check the [README](https://storage.cloud.google.com/seldon-datasets/genome/readme.docx?organizationId=156002945562).

    ```python
    from alibi_detect.datasets import fetch_genome

    (X_train, y_train), (X_val, y_val), (X_test, y_test) = fetch_genome(return_X_y=True)
    ```

- **ECG 5000**: `fetch_ecg`
  - 5,000 ECGs, originally obtained from [Physionet](https://archive.physionet.org/cgi-bin/atm/ATM).

- **NAB**: `fetch_nab`
  - Any univariate time series in a DataFrame from the [Numenta Anomaly Benchmark](https://github.com/numenta/NAB). A list with the available time series can be retrieved using `alibi_detect.datasets.get_list_nab()`.

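As a small sketch of the NAB helper described above: the available series names come from `get_list_nab()` and one of them is passed to `fetch_nab`. The exact return type (DataFrame vs. *Bunch*) is assumed from the description above, so inspect the returned object before using it.

```python
from alibi_detect.datasets import fetch_nab, get_list_nab

names = get_list_nab()    # list the available NAB time series
print(names[:5])

ts = fetch_nab(names[0])  # fetch one univariate time series (a DataFrame, per the description above)
print(type(ts))
```
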
### Images

- **CIFAR-10-C**: `fetch_cifar10c`
  - CIFAR-10-C ([Hendrycks & Dietterich, 2019](https://arxiv.org/abs/1903.12261)) contains the test set of CIFAR-10, but corrupted and perturbed by various types of noise, blur, brightness etc. at different levels of severity, leading to a gradual decline in the performance of a classification model trained on CIFAR-10. `fetch_cifar10c` allows you to pick any severity level or corruption type. The list with available corruption types can be retrieved with `alibi_detect.datasets.corruption_types_cifar10c()`. The dataset can be used in research on robustness and drift. The original data can be found [here](https://zenodo.org/record/2535967#.XnAM2nX7RNw). Example:

    ```python
    from alibi_detect.datasets import fetch_cifar10c

    corruption = ['gaussian_noise', 'motion_blur', 'brightness', 'pixelate']
    X, y = fetch_cifar10c(corruption=corruption, severity=5, return_X_y=True)
    ```

- **Adversarial CIFAR-10**: `fetch_attack`
  - Load adversarial instances on a ResNet-56 classifier trained on CIFAR-10. Available attacks: [Carlini-Wagner](https://arxiv.org/abs/1608.04644) ('cw') and [SLIDE](https://arxiv.org/abs/1904.13000) ('slide'). Example:

    ```python
    from alibi_detect.datasets import fetch_attack

    (X_train, y_train), (X_test, y_test) = fetch_attack('cifar10', 'resnet56', 'cw', return_X_y=True)
    ```

### Tabular

- **KDD Cup '99**: `fetch_kdd`
  - Dataset with different types of computer network intrusions. `fetch_kdd` allows you to select a subset of network intrusions as targets or pick only specified features. The original data can be found [here](http://kdd.ics.uci.edu/databases/kddcup99/kddcup99.html).

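For completeness, a minimal sketch of fetching the KDD Cup '99 data follows; it assumes `fetch_kdd` follows the same `return_X_y` convention as the other fetch functions described at the start of this section, and leaves target and feature selection at their defaults.

```python
from alibi_detect.datasets import fetch_kdd

# assumes the same return_X_y convention as fetch_ecg / fetch_cifar10c above
X, y = fetch_kdd(return_X_y=True)
print(X.shape, y.shape)
```
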
## Models

Models and/or building blocks that can be useful outside of outlier, adversarial or drift detection can be found under `alibi_detect.models`. Main implementations:

- [PixelCNN++](https://arxiv.org/abs/1701.05517): `alibi_detect.models.pixelcnn.PixelCNN`
- Variational Autoencoder: `alibi_detect.models.autoencoder.VAE`
- Sequence-to-sequence model: `alibi_detect.models.autoencoder.Seq2Seq`
- ResNet: `alibi_detect.models.resnet`
  - Pre-trained ResNet-20/32/44 models on CIFAR-10 can be found on our [Google Cloud Bucket](https://console.cloud.google.com/storage/browser/seldon-models/alibi-detect/classifier/cifar10/?organizationId=156002945562&project=seldon-pub) and can be fetched as follows:

    ```python
    from alibi_detect.utils.fetching import fetch_tf_model

    model = fetch_tf_model('cifar10', 'resnet32')
    ```

## Integrations

Alibi Detect is integrated into the open source machine learning model deployment platform [Seldon Core](https://docs.seldon.io/projects/seldon-core/en/stable/index.html) and the model serving framework [KFServing](https://github.com/kubeflow/kfserving).

- **Seldon Core**: [outlier](https://docs.seldon.io/projects/seldon-core/en/stable/analytics/outlier_detection.html) and [drift](https://docs.seldon.io/projects/seldon-core/en/stable/analytics/drift_detection.html) detection worked examples.
- **KFServing**: [outlier](https://github.com/kubeflow/kfserving/tree/master/docs/samples/outlier-detection/alibi-detect/cifar10) and [drift](https://github.com/kubeflow/kfserving/tree/master/docs/samples/drift-detection/alibi-detect/cifar10) detection examples.

## Citations

If you use alibi-detect in your research, please consider citing it.

BibTeX entry:

```
@software{alibi-detect,
  title = {Alibi Detect: Algorithms for outlier, adversarial and drift detection},
  author = {Van Looveren, Arnaud and Klaise, Janis and Vacanti, Giovanni and Cobb, Oliver and Scillitoe, Ashley and Samoilescu, Robert and Athorne, Alex},
  url = {https://github.com/SeldonIO/alibi-detect},
  version = {0.11.1},
  date = {2023-03-03},
  year = {2019}
}
```

%package -n python3-alibi-detect
Summary: Algorithms for outlier detection, concept drift and metrics.
Provides: python-alibi-detect
BuildRequires: python3-devel
BuildRequires: python3-setuptools
BuildRequires: python3-pip
%description -n python3-alibi-detect
[Alibi Detect](https://github.com/SeldonIO/alibi-detect) is an open source Python library focused on **outlier**, **adversarial** and **drift** detection. The package aims to cover both online and offline detectors for tabular data, text, images and time series. Both **TensorFlow** and **PyTorch** backends are supported for drift detection.

* [Documentation](https://docs.seldon.io/projects/alibi-detect/en/stable/)

For more background on the importance of monitoring outliers and distributions in a production setting, check out [this talk](https://slideslive.com/38931758/monitoring-and-explainability-of-models-in-production?ref=speaker-37384-latest) from the *Challenges in Deploying and Monitoring Machine Learning Systems* ICML 2020 workshop, based on the paper [Monitoring and explainability of models in production](https://arxiv.org/abs/2007.06299) and referencing Alibi Detect.

For a thorough introduction to drift detection, check out [Protecting Your Machine Learning Against Drift: An Introduction](https://youtu.be/tL5sEaQha5o). The talk covers what drift is, why it pays to detect it and the different types of drift, explains how drift can be detected in a principled manner, and describes the anatomy of a drift detector.

## Table of Contents

- [Installation and Usage](#installation-and-usage)
  - [With pip](#with-pip)
  - [With conda](#with-conda)
  - [Usage](#usage)
- [Supported Algorithms](#supported-algorithms)
  - [Outlier Detection](#outlier-detection)
  - [Adversarial Detection](#adversarial-detection)
  - [Drift Detection](#drift-detection)
    - [TensorFlow and PyTorch support](#tensorflow-and-pytorch-support)
    - [Built-in preprocessing steps](#built-in-preprocessing-steps)
- [Reference List](#reference-list)
  - [Outlier Detection](#outlier-detection-1)
  - [Adversarial Detection](#adversarial-detection-1)
  - [Drift Detection](#drift-detection-1)
- [Datasets](#datasets)
  - [Sequential Data and Time Series](#sequential-data-and-time-series)
  - [Images](#images)
  - [Tabular](#tabular)
- [Models](#models)
- [Integrations](#integrations)
- [Citations](#citations)

## Installation and Usage

The package, `alibi-detect`, can be installed from:

- PyPI or GitHub source (with `pip`)
- Anaconda (with `conda`/`mamba`)

### With pip

- alibi-detect can be installed from [PyPI](https://pypi.org/project/alibi-detect):

  ```bash
  pip install alibi-detect
  ```

- Alternatively, the development version can be installed:

  ```bash
  pip install git+https://github.com/SeldonIO/alibi-detect.git
  ```

- To install with the TensorFlow backend:

  ```bash
  pip install alibi-detect[tensorflow]
  ```

- To install with the PyTorch backend:

  ```bash
  pip install alibi-detect[torch]
  ```

- To install with the KeOps backend:

  ```bash
  pip install alibi-detect[keops]
  ```

- To use the `Prophet` time series outlier detector:

  ```bash
  pip install alibi-detect[prophet]
  ```

### With conda

To install from [conda-forge](https://conda-forge.org/) it is recommended to use [mamba](https://mamba.readthedocs.io/en/stable/), which can be installed to the *base* conda environment with:

```bash
conda install mamba -n base -c conda-forge
```

To install alibi-detect:

```bash
mamba install -c conda-forge alibi-detect
```

### Usage

We will use the [VAE outlier detector](https://docs.seldon.io/projects/alibi-detect/en/stable/od/methods/vae.html) to illustrate the API.

```python
from alibi_detect.od import OutlierVAE
from alibi_detect.saving import save_detector, load_detector

# initialize and fit detector
# encoder_net and decoder_net are user-defined tf.keras models
od = OutlierVAE(threshold=0.1, encoder_net=encoder_net, decoder_net=decoder_net, latent_dim=1024)
od.fit(x_train)

# make predictions
preds = od.predict(x_test)

# save and load detectors
filepath = './my_detector/'
save_detector(od, filepath)
od = load_detector(filepath)
```

The predictions are returned in a dictionary with `meta` and `data` as keys. `meta` contains the detector's metadata, while `data` is itself a dictionary with the actual predictions: the outlier, adversarial or drift scores and thresholds, as well as the predictions of whether instances are, e.g., outliers or not. The exact details can vary slightly from method to method, so we encourage the reader to become familiar with the [types of algorithms supported](https://docs.seldon.io/projects/alibi-detect/en/stable/overview/algorithms.html).

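Rather than fixing `threshold` up front as in the snippet above, the threshold can also be inferred from reference data. A minimal sketch follows, assuming the `infer_threshold` method with a `threshold_perc` argument as used in the alibi-detect outlier detector documentation.

```python
# continuing the example above: infer the threshold so that ~5% of the
# (assumed mostly normal) reference data is flagged, instead of hard-coding threshold=0.1
od.infer_threshold(x_train, threshold_perc=95)
preds = od.predict(x_test)
```
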
## Supported Algorithms

The following tables show the advised use cases for each algorithm. The column *Feature Level* indicates whether the detection can be done at the feature level, e.g. per pixel for an image. Check the [algorithm reference list](#reference-list) for more information, with links to the documentation and original papers as well as examples for each of the detectors.

### Outlier Detection

| Detector             | Tabular | Image | Time Series | Text | Categorical Features | Online | Feature Level |
|:---------------------|:-------:|:-----:|:-----------:|:----:|:--------------------:|:------:|:-------------:|
| Isolation Forest     | ✔ |   |   |   | ✔ |   |   |
| Mahalanobis Distance | ✔ |   |   |   | ✔ | ✔ |   |
| AE                   | ✔ | ✔ |   |   |   |   | ✔ |
| VAE                  | ✔ | ✔ |   |   |   |   | ✔ |
| AEGMM                | ✔ | ✔ |   |   |   |   |   |
| VAEGMM               | ✔ | ✔ |   |   |   |   |   |
| Likelihood Ratios    | ✔ | ✔ | ✔ |   | ✔ |   | ✔ |
| Prophet              |   |   | ✔ |   |   |   |   |
| Spectral Residual    |   |   | ✔ |   |   | ✔ | ✔ |
| Seq2Seq              |   |   | ✔ |   |   |   | ✔ |

### Adversarial Detection

| Detector           | Tabular | Image | Time Series | Text | Categorical Features | Online | Feature Level |
|:-------------------|:-------:|:-----:|:-----------:|:----:|:--------------------:|:------:|:-------------:|
| Adversarial AE     | ✔ | ✔ |   |   |   |   |   |
| Model distillation | ✔ | ✔ | ✔ | ✔ | ✔ |   |   |

### Drift Detection

| Detector                         | Tabular | Image | Time Series | Text | Categorical Features | Online | Feature Level |
|:---------------------------------|:-------:|:-----:|:-----------:|:----:|:--------------------:|:------:|:-------------:|
| Kolmogorov-Smirnov               | ✔ | ✔ |   | ✔ | ✔ |   | ✔ |
| Cramér-von Mises                 | ✔ | ✔ |   |   |   | ✔ | ✔ |
| Fisher's Exact Test              | ✔ |   |   |   | ✔ | ✔ | ✔ |
| Maximum Mean Discrepancy (MMD)   | ✔ | ✔ |   | ✔ | ✔ | ✔ |   |
| Learned Kernel MMD               | ✔ | ✔ |   | ✔ | ✔ |   |   |
| Context-aware MMD                | ✔ | ✔ | ✔ | ✔ | ✔ |   |   |
| Least-Squares Density Difference | ✔ | ✔ |   | ✔ | ✔ | ✔ |   |
| Chi-Squared                      | ✔ |   |   |   | ✔ |   | ✔ |
| Mixed-type tabular data          | ✔ |   |   |   | ✔ |   | ✔ |
| Classifier                       | ✔ | ✔ | ✔ | ✔ | ✔ |   |   |
| Spot-the-diff                    | ✔ | ✔ | ✔ | ✔ | ✔ |   | ✔ |
| Classifier Uncertainty           | ✔ | ✔ | ✔ | ✔ | ✔ |   |   |
| Regressor Uncertainty            | ✔ | ✔ | ✔ | ✔ | ✔ |   |   |

#### TensorFlow and PyTorch support

The drift detectors support TensorFlow, PyTorch and (where applicable) [KeOps](https://www.kernel-operations.io/keops/index.html) backends. However, Alibi Detect does not install these by default. See the [installation options](#installation-and-usage) for more details.

```python
from alibi_detect.cd import MMDDrift

cd = MMDDrift(x_ref, backend='tensorflow', p_val=.05)
preds = cd.predict(x)
```

The same detector in PyTorch:

```python
cd = MMDDrift(x_ref, backend='pytorch', p_val=.05)
preds = cd.predict(x)
```

Or in KeOps:

```python
cd = MMDDrift(x_ref, backend='keops', p_val=.05)
preds = cd.predict(x)
```

#### Built-in preprocessing steps

Alibi Detect also comes with various preprocessing steps, such as randomly initialized encoders, pretrained text embeddings to detect drift on (using the [transformers](https://github.com/huggingface/transformers) library) and extraction of hidden layers from machine learning models. This makes it possible to detect different types of drift, such as **covariate and predicted distribution shift**. The preprocessing steps are again supported in TensorFlow and PyTorch.

```python
from functools import partial

from alibi_detect.cd import MMDDrift
from alibi_detect.cd.tensorflow import HiddenOutput, preprocess_drift

model = ...  # trained TensorFlow model: tf.keras.Model or tf.keras.Sequential
preprocess_fn = partial(preprocess_drift, model=HiddenOutput(model, layer=-1), batch_size=128)

cd = MMDDrift(x_ref, backend='tensorflow', p_val=.05, preprocess_fn=preprocess_fn)
preds = cd.predict(x)
```

Check the example notebooks (e.g. [CIFAR10](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_mmd_cifar10.html), [movie reviews](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_text_imdb.html)) for more details.

### Reference List

#### Outlier Detection

- [Isolation Forest](https://docs.seldon.io/projects/alibi-detect/en/stable/od/methods/iforest.html) ([FT Liu et al., 2008](https://cs.nju.edu.cn/zhouzh/zhouzh.files/publication/icdm08b.pdf))
  - Example: [Network Intrusion](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/od_if_kddcup.html)
- [Mahalanobis Distance](https://docs.seldon.io/projects/alibi-detect/en/stable/od/methods/mahalanobis.html) ([Mahalanobis, 1936](https://insa.nic.in/writereaddata/UpLoadedFiles/PINSA/Vol02_1936_1_Art05.pdf))
  - Example: [Network Intrusion](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/od_mahalanobis_kddcup.html)
- [Auto-Encoder (AE)](https://docs.seldon.io/projects/alibi-detect/en/stable/od/methods/ae.html)
  - Example: [CIFAR10](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/od_ae_cifar10.html)
- [Variational Auto-Encoder (VAE)](https://docs.seldon.io/projects/alibi-detect/en/stable/od/methods/vae.html) ([Kingma et al., 2013](https://arxiv.org/abs/1312.6114))
  - Examples: [Network Intrusion](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/od_vae_kddcup.html), [CIFAR10](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/od_vae_cifar10.html)
- [Auto-Encoding Gaussian Mixture Model (AEGMM)](https://docs.seldon.io/projects/alibi-detect/en/stable/od/methods/aegmm.html) ([Zong et al., 2018](https://openreview.net/forum?id=BJJLHbb0-))
  - Example: [Network Intrusion](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/od_aegmm_kddcup.html)
- [Variational Auto-Encoding Gaussian Mixture Model (VAEGMM)](https://docs.seldon.io/projects/alibi-detect/en/stable/od/methods/vaegmm.html)
  - Example: [Network Intrusion](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/od_aegmm_kddcup.html)
- [Likelihood Ratios](https://docs.seldon.io/projects/alibi-detect/en/stable/od/methods/llr.html) ([Ren et al., 2019](https://arxiv.org/abs/1906.02845))
  - Examples: [Genome](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/od_llr_genome.html), [Fashion-MNIST vs. MNIST](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/od_llr_mnist.html)
- [Prophet Time Series Outlier Detector](https://docs.seldon.io/projects/alibi-detect/en/stable/od/methods/prophet.html) ([Taylor et al., 2018](https://peerj.com/preprints/3190/))
  - Example: [Weather Forecast](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/od_prophet_weather.html)
- [Spectral Residual Time Series Outlier Detector](https://docs.seldon.io/projects/alibi-detect/en/stable/od/methods/sr.html) ([Ren et al., 2019](https://arxiv.org/abs/1906.03821))
  - Example: [Synthetic Dataset](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/od_sr_synth.html)
- [Sequence-to-Sequence (Seq2Seq) Outlier Detector](https://docs.seldon.io/projects/alibi-detect/en/stable/od/methods/seq2seq.html) ([Sutskever et al., 2014](https://papers.nips.cc/paper/5346-sequence-to-sequence-learning-with-neural-networks.pdf); [Park et al., 2017](https://arxiv.org/pdf/1711.00614.pdf))
  - Examples: [ECG](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/od_seq2seq_ecg.html), [Synthetic Dataset](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/od_seq2seq_synth.html)

#### Adversarial Detection

- [Adversarial Auto-Encoder](https://docs.seldon.io/projects/alibi-detect/en/stable/ad/methods/adversarialae.html) ([Vacanti and Van Looveren, 2020](https://arxiv.org/abs/2002.09364))
  - Example: [CIFAR10](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/ad_ae_cifar10.html)
- [Model distillation](https://docs.seldon.io/projects/alibi-detect/en/stable/ad/methods/modeldistillation.html)
  - Example: [CIFAR10](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_distillation_cifar10.html)

#### Drift Detection

- [Kolmogorov-Smirnov](https://docs.seldon.io/projects/alibi-detect/en/stable/cd/methods/ksdrift.html)
  - Example: [CIFAR10](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_ks_cifar10.html), [molecular graphs](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_mol.html), [movie reviews](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_text_imdb.html)
- [Cramér-von Mises](https://docs.seldon.io/projects/alibi-detect/en/stable/cd/methods/cvmdrift.html)
  - Example: [Penguins](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_supervised_penguins.html)
- [Fisher's Exact Test](https://docs.seldon.io/projects/alibi-detect/en/stable/cd/methods/fetdrift.html)
  - Example: [Penguins](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_supervised_penguins.html)
- [Least-Squares Density Difference](https://docs.seldon.io/projects/alibi-detect/en/stable/cd/methods/lsdddrift.html) ([Bu et al, 2016](https://alippi.faculty.polimi.it/articoli/A%20Pdf%20free%20Change%20Detection%20Test%20Based%20on%20Density%20Difference%20Estimation.pdf))
- [Maximum Mean Discrepancy](https://docs.seldon.io/projects/alibi-detect/en/stable/cd/methods/mmddrift.html) ([Gretton et al, 2012](http://jmlr.csail.mit.edu/papers/v13/gretton12a.html))
  - Example: [CIFAR10](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_mmd_cifar10.html), [molecular graphs](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_mol.html), [movie reviews](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_text_imdb.html), [Amazon reviews](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_text_amazon.html)
- [Learned Kernel MMD](https://docs.seldon.io/projects/alibi-detect/en/stable/cd/methods/learnedkerneldrift.html) ([Liu et al, 2020](https://arxiv.org/abs/2002.09116))
  - Example: [CIFAR10](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_clf_cifar10.html)
- [Context-aware MMD](https://docs.seldon.io/projects/alibi-detect/en/stable/cd/methods/contextmmddrift.html) ([Cobb and Van Looveren, 2022](https://arxiv.org/abs/2203.08644))
  - Example: [ECG](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_context_ecg.html), [news topics](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_context_20newsgroup.html)
- [Chi-Squared](https://docs.seldon.io/projects/alibi-detect/en/stable/cd/methods/chisquaredrift.html)
  - Example: [Income Prediction](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_chi2ks_adult.html)
- [Mixed-type tabular data](https://docs.seldon.io/projects/alibi-detect/en/stable/cd/methods/tabulardrift.html)
  - Example: [Income Prediction](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_chi2ks_adult.html)
- [Classifier](https://docs.seldon.io/projects/alibi-detect/en/stable/cd/methods/classifierdrift.html) ([Lopez-Paz and Oquab, 2017](https://openreview.net/forum?id=SJkXfE5xx))
  - Example: [CIFAR10](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_clf_cifar10.html), [Amazon reviews](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_text_amazon.html)
- [Spot-the-diff](https://docs.seldon.io/projects/alibi-detect/en/stable/cd/methods/spotthediffdrift.html) (adaptation of [Jitkrittum et al, 2016](https://arxiv.org/abs/1605.06796))
  - Example: [MNIST and Wine quality](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/spot_the_diff_mnist_win.html)
- [Classifier and Regressor Uncertainty](https://docs.seldon.io/projects/alibi-detect/en/stable/cd/methods/modeluncdrift.html)
  - Example: [CIFAR10 and Wine](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_model_unc_cifar10_wine.html), [molecular graphs](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_mol.html)
- [Online Maximum Mean Discrepancy](https://docs.seldon.io/projects/alibi-detect/en/stable/cd/methods/onlinemmddrift.html)
  - Example: [Wine Quality](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_online_wine.html), [Camelyon medical imaging](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_online_camelyon.html)
- [Online Least-Squares Density Difference](https://docs.seldon.io/projects/alibi-detect/en/stable/cd/methods/onlinemmddrift.html) ([Bu et al, 2017](https://ieeexplore.ieee.org/abstract/document/7890493))
  - Example: [Wine Quality](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_online_wine.html)

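The mixed-type tabular detector referenced above has no snippet elsewhere in this description, so a rough sketch follows. It assumes the `TabularDrift` class with a `categories_per_feature` argument mapping categorical column indices to their categories (with `None` meaning "infer from the reference data"), as described in the linked documentation.

```python
import numpy as np

from alibi_detect.cd import TabularDrift

x_ref = np.random.randn(500, 4)
x_ref[:, 0] = np.random.randint(0, 3, 500)  # make column 0 categorical
x = np.random.randn(200, 4)
x[:, 0] = np.random.randint(0, 3, 200)

# None lets the detector infer the categories of column 0 from x_ref
cd = TabularDrift(x_ref, p_val=.05, categories_per_feature={0: None})
preds = cd.predict(x)
print(preds['data']['is_drift'], preds['data']['p_val'])
```
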
## Datasets

The package also contains functionality in `alibi_detect.datasets` to easily fetch a number of datasets for different modalities. For each dataset, either the data and labels or a *Bunch* object with the data, labels and optional metadata is returned. Example:

```python
from alibi_detect.datasets import fetch_ecg

(X_train, y_train), (X_test, y_test) = fetch_ecg(return_X_y=True)
```

### Sequential Data and Time Series

- **Genome Dataset**: `fetch_genome`
  - Bacteria genomics dataset for out-of-distribution detection, released as part of [Likelihood Ratios for Out-of-Distribution Detection](https://arxiv.org/abs/1906.02845). From the original *TL;DR*: *The dataset contains genomic sequences of 250 base pairs from 10 in-distribution bacteria classes for training, 60 OOD bacteria classes for validation, and another 60 different OOD bacteria classes for test*. The training, validation and test sets contain 1, 7 and 7 million sequences respectively. For detailed info on the dataset check the [README](https://storage.cloud.google.com/seldon-datasets/genome/readme.docx?organizationId=156002945562).

    ```python
    from alibi_detect.datasets import fetch_genome

    (X_train, y_train), (X_val, y_val), (X_test, y_test) = fetch_genome(return_X_y=True)
    ```

- **ECG 5000**: `fetch_ecg`
  - 5,000 ECGs, originally obtained from [Physionet](https://archive.physionet.org/cgi-bin/atm/ATM).

- **NAB**: `fetch_nab`
  - Any univariate time series in a DataFrame from the [Numenta Anomaly Benchmark](https://github.com/numenta/NAB). A list with the available time series can be retrieved using `alibi_detect.datasets.get_list_nab()`.

### Images

- **CIFAR-10-C**: `fetch_cifar10c`
  - CIFAR-10-C ([Hendrycks & Dietterich, 2019](https://arxiv.org/abs/1903.12261)) contains the test set of CIFAR-10, but corrupted and perturbed by various types of noise, blur, brightness etc. at different levels of severity, leading to a gradual decline in the performance of a classification model trained on CIFAR-10. `fetch_cifar10c` allows you to pick any severity level or corruption type. The list with available corruption types can be retrieved with `alibi_detect.datasets.corruption_types_cifar10c()`. The dataset can be used in research on robustness and drift. The original data can be found [here](https://zenodo.org/record/2535967#.XnAM2nX7RNw). Example:

    ```python
    from alibi_detect.datasets import fetch_cifar10c

    corruption = ['gaussian_noise', 'motion_blur', 'brightness', 'pixelate']
    X, y = fetch_cifar10c(corruption=corruption, severity=5, return_X_y=True)
    ```

- **Adversarial CIFAR-10**: `fetch_attack`
  - Load adversarial instances on a ResNet-56 classifier trained on CIFAR-10. Available attacks: [Carlini-Wagner](https://arxiv.org/abs/1608.04644) ('cw') and [SLIDE](https://arxiv.org/abs/1904.13000) ('slide'). Example:

    ```python
    from alibi_detect.datasets import fetch_attack

    (X_train, y_train), (X_test, y_test) = fetch_attack('cifar10', 'resnet56', 'cw', return_X_y=True)
    ```

### Tabular

- **KDD Cup '99**: `fetch_kdd`
  - Dataset with different types of computer network intrusions. `fetch_kdd` allows you to select a subset of network intrusions as targets or pick only specified features. The original data can be found [here](http://kdd.ics.uci.edu/databases/kddcup99/kddcup99.html).

## Models

Models and/or building blocks that can be useful outside of outlier, adversarial or drift detection can be found under `alibi_detect.models`. Main implementations:

- [PixelCNN++](https://arxiv.org/abs/1701.05517): `alibi_detect.models.pixelcnn.PixelCNN`
- Variational Autoencoder: `alibi_detect.models.autoencoder.VAE`
- Sequence-to-sequence model: `alibi_detect.models.autoencoder.Seq2Seq`
- ResNet: `alibi_detect.models.resnet`
  - Pre-trained ResNet-20/32/44 models on CIFAR-10 can be found on our [Google Cloud Bucket](https://console.cloud.google.com/storage/browser/seldon-models/alibi-detect/classifier/cifar10/?organizationId=156002945562&project=seldon-pub) and can be fetched as follows:

    ```python
    from alibi_detect.utils.fetching import fetch_tf_model

    model = fetch_tf_model('cifar10', 'resnet32')
    ```

## Integrations

Alibi Detect is integrated into the open source machine learning model deployment platform [Seldon Core](https://docs.seldon.io/projects/seldon-core/en/stable/index.html) and the model serving framework [KFServing](https://github.com/kubeflow/kfserving).

- **Seldon Core**: [outlier](https://docs.seldon.io/projects/seldon-core/en/stable/analytics/outlier_detection.html) and [drift](https://docs.seldon.io/projects/seldon-core/en/stable/analytics/drift_detection.html) detection worked examples.
- **KFServing**: [outlier](https://github.com/kubeflow/kfserving/tree/master/docs/samples/outlier-detection/alibi-detect/cifar10) and [drift](https://github.com/kubeflow/kfserving/tree/master/docs/samples/drift-detection/alibi-detect/cifar10) detection examples.

## Citations

If you use alibi-detect in your research, please consider citing it.

BibTeX entry:

```
@software{alibi-detect,
  title = {Alibi Detect: Algorithms for outlier, adversarial and drift detection},
  author = {Van Looveren, Arnaud and Klaise, Janis and Vacanti, Giovanni and Cobb, Oliver and Scillitoe, Ashley and Samoilescu, Robert and Athorne, Alex},
  url = {https://github.com/SeldonIO/alibi-detect},
  version = {0.11.1},
  date = {2023-03-03},
  year = {2019}
}
```

%package help
Summary: Development documents and examples for alibi-detect
Provides: python3-alibi-detect-doc
%description help
[Alibi Detect](https://github.com/SeldonIO/alibi-detect) is an open source Python library focused on **outlier**, **adversarial** and **drift** detection. The package aims to cover both online and offline detectors for tabular data, text, images and time series. Both **TensorFlow** and **PyTorch** backends are supported for drift detection.

* [Documentation](https://docs.seldon.io/projects/alibi-detect/en/stable/)

For more background on the importance of monitoring outliers and distributions in a production setting, check out [this talk](https://slideslive.com/38931758/monitoring-and-explainability-of-models-in-production?ref=speaker-37384-latest) from the *Challenges in Deploying and Monitoring Machine Learning Systems* ICML 2020 workshop, based on the paper [Monitoring and explainability of models in production](https://arxiv.org/abs/2007.06299) and referencing Alibi Detect.

For a thorough introduction to drift detection, check out [Protecting Your Machine Learning Against Drift: An Introduction](https://youtu.be/tL5sEaQha5o). The talk covers what drift is, why it pays to detect it and the different types of drift, explains how drift can be detected in a principled manner, and describes the anatomy of a drift detector.

## Table of Contents

- [Installation and Usage](#installation-and-usage)
  - [With pip](#with-pip)
  - [With conda](#with-conda)
  - [Usage](#usage)
- [Supported Algorithms](#supported-algorithms)
  - [Outlier Detection](#outlier-detection)
  - [Adversarial Detection](#adversarial-detection)
  - [Drift Detection](#drift-detection)
    - [TensorFlow and PyTorch support](#tensorflow-and-pytorch-support)
    - [Built-in preprocessing steps](#built-in-preprocessing-steps)
- [Reference List](#reference-list)
  - [Outlier Detection](#outlier-detection-1)
  - [Adversarial Detection](#adversarial-detection-1)
  - [Drift Detection](#drift-detection-1)
- [Datasets](#datasets)
  - [Sequential Data and Time Series](#sequential-data-and-time-series)
  - [Images](#images)
  - [Tabular](#tabular)
- [Models](#models)
- [Integrations](#integrations)
- [Citations](#citations)

## Installation and Usage

The package, `alibi-detect`, can be installed from:

- PyPI or GitHub source (with `pip`)
- Anaconda (with `conda`/`mamba`)

### With pip

- alibi-detect can be installed from [PyPI](https://pypi.org/project/alibi-detect):

  ```bash
  pip install alibi-detect
  ```

- Alternatively, the development version can be installed:

  ```bash
  pip install git+https://github.com/SeldonIO/alibi-detect.git
  ```

- To install with the TensorFlow backend:

  ```bash
  pip install alibi-detect[tensorflow]
  ```

- To install with the PyTorch backend:

  ```bash
  pip install alibi-detect[torch]
  ```

- To install with the KeOps backend:

  ```bash
  pip install alibi-detect[keops]
  ```

- To use the `Prophet` time series outlier detector:

  ```bash
  pip install alibi-detect[prophet]
  ```

### With conda

To install from [conda-forge](https://conda-forge.org/) it is recommended to use [mamba](https://mamba.readthedocs.io/en/stable/), which can be installed to the *base* conda environment with:

```bash
conda install mamba -n base -c conda-forge
```

To install alibi-detect:

```bash
mamba install -c conda-forge alibi-detect
```

### Usage

We will use the [VAE outlier detector](https://docs.seldon.io/projects/alibi-detect/en/stable/od/methods/vae.html) to illustrate the API.

```python
from alibi_detect.od import OutlierVAE
from alibi_detect.saving import save_detector, load_detector

# initialize and fit detector
# encoder_net and decoder_net are user-defined tf.keras models
od = OutlierVAE(threshold=0.1, encoder_net=encoder_net, decoder_net=decoder_net, latent_dim=1024)
od.fit(x_train)

# make predictions
preds = od.predict(x_test)

# save and load detectors
filepath = './my_detector/'
save_detector(od, filepath)
od = load_detector(filepath)
```

The predictions are returned in a dictionary with `meta` and `data` as keys. `meta` contains the detector's metadata, while `data` is itself a dictionary with the actual predictions: the outlier, adversarial or drift scores and thresholds, as well as the predictions of whether instances are, e.g., outliers or not. The exact details can vary slightly from method to method, so we encourage the reader to become familiar with the [types of algorithms supported](https://docs.seldon.io/projects/alibi-detect/en/stable/overview/algorithms.html).

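For drift detectors the returned dictionary follows the same pattern. A brief, self-contained sketch with the MMD drift detector (which also appears further below) shows the typical keys; the data here is random placeholder data, so no drift should be flagged.

```python
import numpy as np

from alibi_detect.cd import MMDDrift

x_ref = np.random.randn(500, 10)  # placeholder reference data
x = np.random.randn(500, 10)      # placeholder test data

cd = MMDDrift(x_ref, backend='tensorflow', p_val=.05)
preds = cd.predict(x)
print(preds['meta'])              # detector metadata
print(preds['data']['is_drift'])  # 1 if drift was detected, else 0
print(preds['data']['p_val'])     # p-value returned by the permutation test
```
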
## Supported Algorithms

The following tables show the advised use cases for each algorithm. The column *Feature Level* indicates whether the detection can be done at the feature level, e.g. per pixel for an image. Check the [algorithm reference list](#reference-list) for more information, with links to the documentation and original papers as well as examples for each of the detectors.

### Outlier Detection

| Detector             | Tabular | Image | Time Series | Text | Categorical Features | Online | Feature Level |
|:---------------------|:-------:|:-----:|:-----------:|:----:|:--------------------:|:------:|:-------------:|
| Isolation Forest     | ✔ |   |   |   | ✔ |   |   |
| Mahalanobis Distance | ✔ |   |   |   | ✔ | ✔ |   |
| AE                   | ✔ | ✔ |   |   |   |   | ✔ |
| VAE                  | ✔ | ✔ |   |   |   |   | ✔ |
| AEGMM                | ✔ | ✔ |   |   |   |   |   |
| VAEGMM               | ✔ | ✔ |   |   |   |   |   |
| Likelihood Ratios    | ✔ | ✔ | ✔ |   | ✔ |   | ✔ |
| Prophet              |   |   | ✔ |   |   |   |   |
| Spectral Residual    |   |   | ✔ |   |   | ✔ | ✔ |
| Seq2Seq              |   |   | ✔ |   |   |   | ✔ |

### Adversarial Detection

| Detector           | Tabular | Image | Time Series | Text | Categorical Features | Online | Feature Level |
|:-------------------|:-------:|:-----:|:-----------:|:----:|:--------------------:|:------:|:-------------:|
| Adversarial AE     | ✔ | ✔ |   |   |   |   |   |
| Model distillation | ✔ | ✔ | ✔ | ✔ | ✔ |   |   |

### Drift Detection

| Detector                         | Tabular | Image | Time Series | Text | Categorical Features | Online | Feature Level |
|:---------------------------------|:-------:|:-----:|:-----------:|:----:|:--------------------:|:------:|:-------------:|
| Kolmogorov-Smirnov               | ✔ | ✔ |   | ✔ | ✔ |   | ✔ |
| Cramér-von Mises                 | ✔ | ✔ |   |   |   | ✔ | ✔ |
| Fisher's Exact Test              | ✔ |   |   |   | ✔ | ✔ | ✔ |
| Maximum Mean Discrepancy (MMD)   | ✔ | ✔ |   | ✔ | ✔ | ✔ |   |
| Learned Kernel MMD               | ✔ | ✔ |   | ✔ | ✔ |   |   |
| Context-aware MMD                | ✔ | ✔ | ✔ | ✔ | ✔ |   |   |
| Least-Squares Density Difference | ✔ | ✔ |   | ✔ | ✔ | ✔ |   |
| Chi-Squared                      | ✔ |   |   |   | ✔ |   | ✔ |
| Mixed-type tabular data          | ✔ |   |   |   | ✔ |   | ✔ |
| Classifier                       | ✔ | ✔ | ✔ | ✔ | ✔ |   |   |
| Spot-the-diff                    | ✔ | ✔ | ✔ | ✔ | ✔ |   | ✔ |
| Classifier Uncertainty           | ✔ | ✔ | ✔ | ✔ | ✔ |   |   |
| Regressor Uncertainty            | ✔ | ✔ | ✔ | ✔ | ✔ |   |   |

#### TensorFlow and PyTorch support

The drift detectors support TensorFlow, PyTorch and (where applicable) [KeOps](https://www.kernel-operations.io/keops/index.html) backends. However, Alibi Detect does not install these by default. See the [installation options](#installation-and-usage) for more details.

```python
from alibi_detect.cd import MMDDrift

cd = MMDDrift(x_ref, backend='tensorflow', p_val=.05)
preds = cd.predict(x)
```

The same detector in PyTorch:

```python
cd = MMDDrift(x_ref, backend='pytorch', p_val=.05)
preds = cd.predict(x)
```

Or in KeOps:

```python
cd = MMDDrift(x_ref, backend='keops', p_val=.05)
preds = cd.predict(x)
```

#### Built-in preprocessing steps

Alibi Detect also comes with various preprocessing steps, such as randomly initialized encoders, pretrained text embeddings to detect drift on (using the [transformers](https://github.com/huggingface/transformers) library) and extraction of hidden layers from machine learning models. This makes it possible to detect different types of drift, such as **covariate and predicted distribution shift**. The preprocessing steps are again supported in TensorFlow and PyTorch.

```python
from functools import partial

from alibi_detect.cd import MMDDrift
from alibi_detect.cd.tensorflow import HiddenOutput, preprocess_drift

model = ...  # trained TensorFlow model: tf.keras.Model or tf.keras.Sequential
preprocess_fn = partial(preprocess_drift, model=HiddenOutput(model, layer=-1), batch_size=128)

cd = MMDDrift(x_ref, backend='tensorflow', p_val=.05, preprocess_fn=preprocess_fn)
preds = cd.predict(x)
```

Check the example notebooks (e.g. [CIFAR10](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_mmd_cifar10.html), [movie reviews](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_text_imdb.html)) for more details.

### Reference List

#### Outlier Detection

- [Isolation Forest](https://docs.seldon.io/projects/alibi-detect/en/stable/od/methods/iforest.html) ([FT Liu et al., 2008](https://cs.nju.edu.cn/zhouzh/zhouzh.files/publication/icdm08b.pdf))
  - Example: [Network Intrusion](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/od_if_kddcup.html)
- [Mahalanobis Distance](https://docs.seldon.io/projects/alibi-detect/en/stable/od/methods/mahalanobis.html) ([Mahalanobis, 1936](https://insa.nic.in/writereaddata/UpLoadedFiles/PINSA/Vol02_1936_1_Art05.pdf))
  - Example: [Network Intrusion](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/od_mahalanobis_kddcup.html)
- [Auto-Encoder (AE)](https://docs.seldon.io/projects/alibi-detect/en/stable/od/methods/ae.html)
  - Example: [CIFAR10](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/od_ae_cifar10.html)
- [Variational Auto-Encoder (VAE)](https://docs.seldon.io/projects/alibi-detect/en/stable/od/methods/vae.html) ([Kingma et al., 2013](https://arxiv.org/abs/1312.6114))
  - Examples: [Network Intrusion](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/od_vae_kddcup.html), [CIFAR10](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/od_vae_cifar10.html)
- [Auto-Encoding Gaussian Mixture Model (AEGMM)](https://docs.seldon.io/projects/alibi-detect/en/stable/od/methods/aegmm.html) ([Zong et al., 2018](https://openreview.net/forum?id=BJJLHbb0-))
  - Example: [Network Intrusion](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/od_aegmm_kddcup.html)
- [Variational Auto-Encoding Gaussian Mixture Model (VAEGMM)](https://docs.seldon.io/projects/alibi-detect/en/stable/od/methods/vaegmm.html)
  - Example: [Network Intrusion](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/od_aegmm_kddcup.html)
- [Likelihood Ratios](https://docs.seldon.io/projects/alibi-detect/en/stable/od/methods/llr.html) ([Ren et al., 2019](https://arxiv.org/abs/1906.02845))
  - Examples: [Genome](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/od_llr_genome.html), [Fashion-MNIST vs. MNIST](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/od_llr_mnist.html)
- [Prophet Time Series Outlier Detector](https://docs.seldon.io/projects/alibi-detect/en/stable/od/methods/prophet.html) ([Taylor et al., 2018](https://peerj.com/preprints/3190/))
  - Example: [Weather Forecast](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/od_prophet_weather.html)
- [Spectral Residual Time Series Outlier Detector](https://docs.seldon.io/projects/alibi-detect/en/stable/od/methods/sr.html) ([Ren et al., 2019](https://arxiv.org/abs/1906.03821))
  - Example: [Synthetic Dataset](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/od_sr_synth.html)
- [Sequence-to-Sequence (Seq2Seq) Outlier Detector](https://docs.seldon.io/projects/alibi-detect/en/stable/od/methods/seq2seq.html) ([Sutskever et al., 2014](https://papers.nips.cc/paper/5346-sequence-to-sequence-learning-with-neural-networks.pdf); [Park et al., 2017](https://arxiv.org/pdf/1711.00614.pdf))
  - Examples: [ECG](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/od_seq2seq_ecg.html), [Synthetic Dataset](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/od_seq2seq_synth.html)

#### Adversarial Detection

- [Adversarial Auto-Encoder](https://docs.seldon.io/projects/alibi-detect/en/stable/ad/methods/adversarialae.html) ([Vacanti and Van Looveren, 2020](https://arxiv.org/abs/2002.09364))
  - Example: [CIFAR10](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/ad_ae_cifar10.html)
- [Model distillation](https://docs.seldon.io/projects/alibi-detect/en/stable/ad/methods/modeldistillation.html)
  - Example: [CIFAR10](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_distillation_cifar10.html)

#### Drift Detection

- [Kolmogorov-Smirnov](https://docs.seldon.io/projects/alibi-detect/en/stable/cd/methods/ksdrift.html)
  - Example: [CIFAR10](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_ks_cifar10.html), [molecular graphs](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_mol.html), [movie reviews](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_text_imdb.html)
- [Cramér-von Mises](https://docs.seldon.io/projects/alibi-detect/en/stable/cd/methods/cvmdrift.html)
  - Example: [Penguins](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_supervised_penguins.html)
- [Fisher's Exact Test](https://docs.seldon.io/projects/alibi-detect/en/stable/cd/methods/fetdrift.html)
  - Example: [Penguins](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_supervised_penguins.html)
- [Least-Squares Density Difference](https://docs.seldon.io/projects/alibi-detect/en/stable/cd/methods/lsdddrift.html) ([Bu et al, 2016](https://alippi.faculty.polimi.it/articoli/A%20Pdf%20free%20Change%20Detection%20Test%20Based%20on%20Density%20Difference%20Estimation.pdf))
- [Maximum Mean Discrepancy](https://docs.seldon.io/projects/alibi-detect/en/stable/cd/methods/mmddrift.html) ([Gretton et al, 2012](http://jmlr.csail.mit.edu/papers/v13/gretton12a.html))
  - Example: [CIFAR10](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_mmd_cifar10.html), [molecular graphs](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_mol.html), [movie reviews](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_text_imdb.html), [Amazon reviews](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_text_amazon.html)
- [Learned Kernel MMD](https://docs.seldon.io/projects/alibi-detect/en/stable/cd/methods/learnedkerneldrift.html) ([Liu et al, 2020](https://arxiv.org/abs/2002.09116))
  - Example: [CIFAR10](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_clf_cifar10.html)
- [Context-aware MMD](https://docs.seldon.io/projects/alibi-detect/en/stable/cd/methods/contextmmddrift.html) ([Cobb and Van Looveren, 2022](https://arxiv.org/abs/2203.08644))
  - Example: [ECG](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_context_ecg.html), [news topics](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_context_20newsgroup.html)
- [Chi-Squared](https://docs.seldon.io/projects/alibi-detect/en/stable/cd/methods/chisquaredrift.html)
  - Example: [Income Prediction](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_chi2ks_adult.html)
- [Mixed-type tabular data](https://docs.seldon.io/projects/alibi-detect/en/stable/cd/methods/tabulardrift.html)
  - Example: [Income Prediction](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_chi2ks_adult.html)
- [Classifier](https://docs.seldon.io/projects/alibi-detect/en/stable/cd/methods/classifierdrift.html) ([Lopez-Paz and Oquab, 2017](https://openreview.net/forum?id=SJkXfE5xx))
  - Example: [CIFAR10](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_clf_cifar10.html), [Amazon reviews](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_text_amazon.html)
- [Spot-the-diff](https://docs.seldon.io/projects/alibi-detect/en/stable/cd/methods/spotthediffdrift.html) (adaptation of [Jitkrittum et al, 2016](https://arxiv.org/abs/1605.06796))
  - Example: [MNIST and Wine quality](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/spot_the_diff_mnist_win.html)
- [Classifier and Regressor Uncertainty](https://docs.seldon.io/projects/alibi-detect/en/stable/cd/methods/modeluncdrift.html)
  - Example: [CIFAR10 and Wine](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_model_unc_cifar10_wine.html), [molecular graphs](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_mol.html)
- [Online Maximum Mean Discrepancy](https://docs.seldon.io/projects/alibi-detect/en/stable/cd/methods/onlinemmddrift.html)
  - Example: [Wine Quality](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_online_wine.html), [Camelyon medical imaging](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_online_camelyon.html)
- [Online Least-Squares Density Difference](https://docs.seldon.io/projects/alibi-detect/en/stable/cd/methods/onlinemmddrift.html) ([Bu et al, 2017](https://ieeexplore.ieee.org/abstract/document/7890493))
  - Example: [Wine Quality](https://docs.seldon.io/projects/alibi-detect/en/stable/examples/cd_online_wine.html)

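The classifier-based detector referenced above trains a model to distinguish reference from test instances and tests whether it performs better than chance. A rough sketch follows, assuming `ClassifierDrift` accepts a classification model, a backend and an `epochs` argument as in the linked documentation; the tiny Keras model here is only a placeholder.

```python
import numpy as np
import tensorflow as tf

from alibi_detect.cd import ClassifierDrift

x_ref = np.random.randn(500, 10).astype(np.float32)  # placeholder reference data
x = np.random.randn(500, 10).astype(np.float32)      # placeholder test data

# simple binary classifier distinguishing reference from test instances
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation='relu', input_shape=(10,)),
    tf.keras.layers.Dense(2, activation='softmax')
])

cd = ClassifierDrift(x_ref, model, backend='tensorflow', p_val=.05, epochs=2)
preds = cd.predict(x)
print(preds['data']['is_drift'])
```
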
## Datasets

The package also contains functionality in `alibi_detect.datasets` to easily fetch a number of datasets for different modalities. For each dataset either the data and labels or a *Bunch* object with the data, labels and optional metadata are returned. Example:

```python
from alibi_detect.datasets import fetch_ecg

(X_train, y_train), (X_test, y_test) = fetch_ecg(return_X_y=True)
```

### Sequential Data and Time Series

- **Genome Dataset**: `fetch_genome`
  - Bacteria genomics dataset for out-of-distribution detection, released as part of [Likelihood Ratios for Out-of-Distribution Detection](https://arxiv.org/abs/1906.02845). From the original *TL;DR*: *The dataset contains genomic sequences of 250 base pairs from 10 in-distribution bacteria classes for training, 60 OOD bacteria classes for validation, and another 60 different OOD bacteria classes for test*. The training, validation and test sets contain 1, 7 and 7 million sequences respectively. For detailed information on the dataset, check the [README](https://storage.cloud.google.com/seldon-datasets/genome/readme.docx?organizationId=156002945562).

  ```python
  from alibi_detect.datasets import fetch_genome

  (X_train, y_train), (X_val, y_val), (X_test, y_test) = fetch_genome(return_X_y=True)
  ```

- **ECG 5000**: `fetch_ecg`
  - 5000 ECGs, originally obtained from [Physionet](https://archive.physionet.org/cgi-bin/atm/ATM).
- **NAB**: `fetch_nab`
  - Any univariate time series in a DataFrame from the [Numenta Anomaly Benchmark](https://github.com/numenta/NAB). A list with the available time series can be retrieved using `alibi_detect.datasets.get_list_nab()` (see the sketch below).
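As a hedged sketch of the NAB fetcher above: the series name passed to `fetch_nab` is illustrative and should be one of the entries returned by `get_list_nab()`, and the `return_X_y` flag is assumed to follow the same convention as the other dataset fetchers:

```python
from alibi_detect.datasets import fetch_nab, get_list_nab

# list the available Numenta Anomaly Benchmark time series
print(get_list_nab())

# fetch a single series as data and outlier labels;
# the series name below is illustrative and must appear in the list above
X, y = fetch_nab('realKnownCause/nyc_taxi', return_X_y=True)
```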
### Images

- **CIFAR-10-C**: `fetch_cifar10c`
  - CIFAR-10-C ([Hendrycks & Dietterich, 2019](https://arxiv.org/abs/1903.12261)) contains the test set of CIFAR-10, but corrupted and perturbed by various types of noise, blur, brightness, etc. at different levels of severity, leading to a gradual decline in the performance of a classification model trained on CIFAR-10. `fetch_cifar10c` allows you to pick any severity level or corruption type. The list with available corruption types can be retrieved with `alibi_detect.datasets.corruption_types_cifar10c()`. The dataset can be used in research on robustness and drift. The original data can be found [here](https://zenodo.org/record/2535967#.XnAM2nX7RNw). Example:

  ```python
  from alibi_detect.datasets import fetch_cifar10c

  corruption = ['gaussian_noise', 'motion_blur', 'brightness', 'pixelate']
  X, y = fetch_cifar10c(corruption=corruption, severity=5, return_X_y=True)
  ```

- **Adversarial CIFAR-10**: `fetch_attack`
  - Load adversarial instances on a ResNet-56 classifier trained on CIFAR-10. Available attacks: [Carlini-Wagner](https://arxiv.org/abs/1608.04644) ('cw') and [SLIDE](https://arxiv.org/abs/1904.13000) ('slide'). Example:

  ```python
  from alibi_detect.datasets import fetch_attack

  (X_train, y_train), (X_test, y_test) = fetch_attack('cifar10', 'resnet56', 'cw', return_X_y=True)
  ```

### Tabular

- **KDD Cup '99**: `fetch_kdd`
  - Dataset with different types of computer network intrusions. `fetch_kdd` allows you to select a subset of network intrusions as targets or pick only specified features (see the example sketch below). The original data can be found [here](http://kdd.ics.uci.edu/databases/kddcup99/kddcup99.html).
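A hedged sketch of the KDD Cup '99 fetcher described above; the `target` and `keep_cols` argument names, intrusion categories and feature names are assumptions used for illustration only:

```python
from alibi_detect.datasets import fetch_kdd

# select two intrusion categories as targets and keep two features;
# argument names and values are illustrative assumptions
X, y = fetch_kdd(
    target=['dos', 'r2l'],
    keep_cols=['srv_count', 'serror_rate'],
    return_X_y=True,
)
print(X.shape, y.shape)
```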
## Models

Models and/or building blocks that can be useful outside of outlier, adversarial or drift detection can be found under `alibi_detect.models`. Main implementations:

- [PixelCNN++](https://arxiv.org/abs/1701.05517): `alibi_detect.models.pixelcnn.PixelCNN`
- Variational Autoencoder: `alibi_detect.models.autoencoder.VAE`
- Sequence-to-sequence model: `alibi_detect.models.autoencoder.Seq2Seq`
- ResNet: `alibi_detect.models.resnet`

Pre-trained ResNet-20/32/44 models on CIFAR-10 can be found on our [Google Cloud Bucket](https://console.cloud.google.com/storage/browser/seldon-models/alibi-detect/classifier/cifar10/?organizationId=156002945562&project=seldon-pub) and can be fetched as follows:

```python
from alibi_detect.utils.fetching import fetch_tf_model

model = fetch_tf_model('cifar10', 'resnet32')
```

## Integrations

Alibi Detect is integrated into the open source machine learning model deployment platform [Seldon Core](https://docs.seldon.io/projects/seldon-core/en/stable/index.html) and the model serving framework [KFServing](https://github.com/kubeflow/kfserving).

- **Seldon Core**: [outlier](https://docs.seldon.io/projects/seldon-core/en/stable/analytics/outlier_detection.html) and [drift](https://docs.seldon.io/projects/seldon-core/en/stable/analytics/drift_detection.html) detection worked examples.
- **KFServing**: [outlier](https://github.com/kubeflow/kfserving/tree/master/docs/samples/outlier-detection/alibi-detect/cifar10) and [drift](https://github.com/kubeflow/kfserving/tree/master/docs/samples/drift-detection/alibi-detect/cifar10) detection examples.

## Citations

If you use alibi-detect in your research, please consider citing it. BibTeX entry:

```
@software{alibi-detect,
  title = {Alibi Detect: Algorithms for outlier, adversarial and drift detection},
  author = {Van Looveren, Arnaud and Klaise, Janis and Vacanti, Giovanni and Cobb, Oliver and Scillitoe, Ashley and Samoilescu, Robert and Athorne, Alex},
  url = {https://github.com/SeldonIO/alibi-detect},
  version = {0.11.1},
  date = {2023-03-03},
  year = {2019}
}
```

%prep
%autosetup -n alibi-detect-0.11.1

%build
%py3_build

%install
%py3_install
install -d -m755 %{buildroot}/%{_pkgdocdir}
if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
pushd %{buildroot}
if [ -d usr/lib ]; then
    find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/lib64 ]; then
    find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/bin ]; then
    find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/sbin ]; then
    find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
fi
touch doclist.lst
if [ -d usr/share/man ]; then
    find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
fi
popd
mv %{buildroot}/filelist.lst .
mv %{buildroot}/doclist.lst .

%files -n python3-alibi-detect -f filelist.lst
%dir %{python3_sitelib}/*

%files help -f doclist.lst
%{_docdir}/*

%changelog
* Tue Apr 11 2023 Python_Bot - 0.11.1-1
- Package Spec generated