From b37aa26f2169a66a8a883b2c7689d40420034e59 Mon Sep 17 00:00:00 2001 From: CoprDistGit Date: Tue, 11 Apr 2023 20:17:06 +0000 Subject: automatic import of python-shapash --- .gitignore | 1 + python-shapash.spec | 1016 +++++++++++++++++++++++++++++++++++++++++++++++++++ sources | 1 + 3 files changed, 1018 insertions(+) create mode 100644 python-shapash.spec create mode 100644 sources diff --git a/.gitignore b/.gitignore index e69de29..b9e8014 100644 --- a/.gitignore +++ b/.gitignore @@ -0,0 +1 @@ +/shapash-2.3.0.tar.gz diff --git a/python-shapash.spec b/python-shapash.spec new file mode 100644 index 0000000..0f9637f --- /dev/null +++ b/python-shapash.spec @@ -0,0 +1,1016 @@ +%global _empty_manifest_terminate_build 0 +Name: python-shapash +Version: 2.3.0 +Release: 1 +Summary: Shapash is a Python library which aims to make machine learning interpretable and understandable by everyone. +License: Apache Software License 2.0 +URL: https://github.com/MAIF/shapash +Source0: https://mirrors.nju.edu.cn/pypi/web/packages/d0/32/7258852e772286c39e9f610833f869a5a1acafe457fffff834a9048dea12/shapash-2.3.0.tar.gz +BuildArch: noarch + +Requires: python3-plotly +Requires: python3-matplotlib +Requires: python3-numpy +Requires: python3-pandas +Requires: python3-shap +Requires: python3-dash +Requires: python3-dash-bootstrap-components +Requires: python3-dash-core-components +Requires: python3-dash-daq +Requires: python3-dash-html-components +Requires: python3-dash-renderer +Requires: python3-dash-table +Requires: python3-nbformat +Requires: python3-numba +Requires: python3-scikit-learn +Requires: python3-category-encoders +Requires: python3-scipy +Requires: python3-acv-exp +Requires: python3-catboost +Requires: python3-lightgbm +Requires: python3-lime +Requires: python3-nbconvert +Requires: python3-papermill +Requires: python3-jupyter-client +Requires: python3-seaborn +Requires: python3-notebook +Requires: python3-Jinja2 +Requires: python3-phik +Requires: python3-xgboost + 
+%description +

+ +

+ + +

+

+ +## ๐ŸŽ‰ What's new ? + + +| Version | New Feature | Description | Tutorial | +|:-------------:|:-------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------------------------------------:|:--------:| +| 2.3.x | Additional dataset columns
(Demo coming soon) | In Webapp: Target and error columns added to dataset and possibility to add features outside the model for more filtering options | [](https://github.com/MAIF/shapash/blob/master/tutorial/tutorial01-Shapash-Overview-Launch-WebApp.ipynb) +| 2.3.x | Identity card
(Demo coming soon) | In Webapp: New identity card to summarize the information of the selected sample | [](https://github.com/MAIF/shapash/blob/master/tutorial/tutorial01-Shapash-Overview-Launch-WebApp.ipynb) +| 2.2.x | Picking samples
[New demo](https://shapash-demo.ossbymaif.fr/) | New tab in the webapp for picking samples. The graph represents the "True Values Vs Predicted Values" | [](https://github.com/MAIF/shapash/blob/master/tutorial/plot/tuto-plot06-prediction_plot.ipynb) +| 2.2.x | Dataset Filter
[New demo](https://shapash-demo.ossbymaif.fr/) | New tab in the webapp to filter data. And several improvements in the webapp: subtitles, labels, screen adjustments | [](https://github.com/MAIF/shapash/blob/master/tutorial/tutorial01-Shapash-Overview-Launch-WebApp.ipynb) +| 2.0.x | Refactoring Shapash
| Refactoring attributes of compile methods and init. Refactoring implementation for new backends | [](https://github.com/MAIF/shapash/blob/master/tutorial/backend/tuto-backend-01.ipynb) +| 1.7.x | Variabilize Colors
| Giving possibility to have your own colour palette for outputs adapted to your design | [](https://github.com/MAIF/shapash/blob/master/tutorial/common/tuto-common02-colors.ipynb) +| 1.6.x | Explainability Quality Metrics
[article](https://towardsdatascience.com/building-confidence-on-explainability-methods-66b9ee575514) | To help increase confidence in explainability methods, you can evaluate the relevance of your explainability using 3 metrics: **Stability**, **Consistency** and **Compacity** | [](https://github.com/MAIF/shapash/blob/master/tutorial/explainability_quality/tuto-quality01-Builing-confidence-explainability.ipynb) +| 1.5.x | ACV Backend
| A new way of estimating Shapley values using ACV. [More info about ACV here](https://towardsdatascience.com/the-right-way-to-compute-your-shapley-values-cfea30509254). | [](tutorial/explainer/tuto-expl03-Shapash-acv-backend.ipynb) | +| 1.4.x | Groups of features
[demo](https://shapash-demo2.ossbymaif.fr/) | You can now regroup features that share common properties together.
This option can be useful if your model has a lot of features. | [](https://github.com/MAIF/shapash/blob/master/tutorial/common/tuto-common01-groups_of_features.ipynb) | +| 1.3.x | Shapash Report
[demo](https://shapash.readthedocs.io/en/latest/report.html) | A standalone HTML report that constitutes a basis of an audit document. | [](https://github.com/MAIF/shapash/blob/master/tutorial/report/tuto-shapash-report01.ipynb) | + + +## ๐Ÿ” Overview + +**Shapash** is a Python library which aims to make machine learning interpretable and understandable by everyone. +It provides several types of visualization that display explicit labels that everyone can understand. + +Data Scientists can understand their models easily and share their results. End users can understand the decision proposed by a model using a summary of the most influential criteria. + +Shapash also contributes to data science auditing by displaying usefull information about any model and data in a unique report. + +- Readthedocs: [![documentation badge](https://readthedocs.org/projects/shapash/badge/?version=latest)](https://shapash.readthedocs.io/en/latest/) +- [Presentation video for french speakers](https://www.youtube.com/watch?v=r1R_A9B9apk) +- Medium: + - [Understand your model with Shapash - Towards AI](https://pub.towardsai.net/shapash-making-ml-models-understandable-by-everyone-8f96ad469eb3) + - [Model auditability - Towards DS](https://towardsdatascience.com/shapash-1-3-2-announcing-new-features-for-more-auditable-ai-64a6db71c919) + - [Group of features - Towards AI](https://pub.towardsai.net/machine-learning-6011d5d9a444) + - [Building confidence on explainability - Towards DS](https://towardsdatascience.com/building-confidence-on-explainability-methods-66b9ee575514) + - [Picking Examples to Understand Machine Learning Model](https://www.kdnuggets.com/2022/11/picking-examples-understand-machine-learning-model.html) + + +

+ +

+ +## ๐Ÿค Contributors + +
+
+
+
+ + +## ๐Ÿ† Awards + + + + + + + + + + +## ๐Ÿ”ฅ Features + +- Display clear and understandable results: plots and outputs use **explicit labels** for each feature and its values + +

+

+
+- Allow Data Scientists to quickly understand their models by using a **webapp** to easily navigate between global and local explainability, and understand how the different features contribute: [Live Demo Shapash-Monitor](https://shapash-demo.ossbymaif.fr/)
+
+- **Summarize and export** the local explanation
+> **Shapash** proposes a short and clear local explanation. It allows each user, whatever their data background, to understand a local prediction of a supervised model thanks to a summarized and explicit explanation
+
+- **Evaluate** the quality of your explainability using different metrics
+
+- Easily share and discuss results with non-data users
+
+- Select subsets for further analysis of explainability by filtering on explanatory and additional features, or on correct and wrong predictions: [Picking Examples to Understand Machine Learning Model](https://www.kdnuggets.com/2022/11/picking-examples-understand-machine-learning-model.html)
+
+- Deploy the interpretability part of your project: from model training to deployment (API or batch mode)
+
+- Contribute to the **auditability of your model** by generating a **standalone HTML report** of your projects: [Report Example](https://shapash.readthedocs.io/en/latest/report.html)
+> We hope this report will provide valuable support for auditing models and data, contributing to better AI governance.
+Data Scientists can now deliver to anyone interested in their project **a document that freezes different aspects of their work as a basis of an audit report**.
+This document can be easily shared across teams (internal audit, DPO, risk, compliance...).
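The summarized local explanation described above boils down to ranking features by the magnitude of their contributions and keeping only the most influential ones, with readable labels. A minimal, library-free sketch of that idea (the `summarize_local` helper and its inputs are hypothetical illustrations, not Shapash's API):

```python
# Hypothetical sketch of a summarized local explanation: keep only the
# top-k features by absolute contribution, shown with explicit labels.
def summarize_local(contributions, labels, k=3):
    """contributions: {feature: contribution}; labels: {feature: readable label}."""
    top = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)[:k]
    return [(labels.get(name, name), round(value, 3)) for name, value in top]

# Illustrative contributions for one prediction of a house-price model.
contribs = {"OverallQual": 0.42, "GrLivArea": -0.15, "YearBuilt": 0.07, "LotArea": 0.01}
labels = {"OverallQual": "Overall quality", "GrLivArea": "Living area (sq ft)"}
print(summarize_local(contribs, labels))
# → [('Overall quality', 0.42), ('Living area (sq ft)', -0.15), ('YearBuilt', 0.07)]
```

Shapash itself produces this kind of summary with explicit wording; the sketch only shows why a short top-k list is readable by non-experts.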

+ +
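One intuitive reading of the **Compacity** metric mentioned among the explainability quality metrics: how few features are needed to cover a given share of the total absolute contribution. A stdlib-only sketch under that assumed definition (not Shapash's actual implementation):

```python
# Assumed reading of compacity: the smallest number of features whose
# absolute contributions cover `share` of the total absolute contribution.
def compacity(contributions, share=0.9):
    total = sum(abs(c) for c in contributions)
    if total == 0:
        return 0
    covered, count = 0.0, 0
    for c in sorted(contributions, key=abs, reverse=True):
        covered += abs(c)
        count += 1
        if covered >= share * total:
            return count
    return count

# One dominant feature -> a compact (easy to summarize) explanation.
print(compacity([8, -1, 0.5, 0.5], share=0.75))  # → 1
```

A low value suggests the explanation can be safely summarized to a few features; a high value warns that a short summary would hide relevant contributions.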

+ +## โš™๏ธ How Shapash works +**Shapash** is an overlay package for libraries dedicated to the interpretability of models. It uses Shap or Lime backend +to compute contributions. +**Shapash** builds on the different steps necessary to build a machine learning model to make the results understandable + +

+ +
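For intuition about the contributions a Shap backend computes, here is a stdlib-only exact Shapley value computation on a toy two-feature "model" (purely illustrative; real backends use efficient approximations, and the `price` payoff function below is a made-up example):

```python
from itertools import permutations
from math import factorial

def shapley_values(features, value):
    """Exact Shapley values: average marginal contribution of each feature
    over all feature orderings. `value` maps a frozenset of features to a
    model output for that coalition."""
    phi = {f: 0.0 for f in features}
    for order in permutations(features):
        coalition = frozenset()
        for f in order:
            phi[f] += value(coalition | {f}) - value(coalition)
            coalition |= {f}
    n_perm = factorial(len(features))
    return {f: p / n_perm for f, p in phi.items()}

# Toy payoff: base price 100, +50 for area, +20 for quality,
# +10 extra only when both are present (an interaction term).
def price(coalition):
    out = 100.0
    if "area" in coalition:
        out += 50
    if "quality" in coalition:
        out += 20
    if {"area", "quality"} <= coalition:
        out += 10
    return out

print(shapley_values(["area", "quality"], price))
# → {'area': 55.0, 'quality': 25.0}  (the +10 interaction is split evenly)
```

Note the efficiency property: the contributions sum to the difference between the full-model output and the baseline (180 − 100 = 80), which is what makes such values readable as an additive explanation.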

+
+**Shapash** works for regression, binary classification and multiclass problems.
+It is compatible with many models: *Catboost*, *Xgboost*, *LightGBM*, *Sklearn Ensemble*, *Linear models* and *SVM*.
+Shapash can use category_encoders objects, sklearn ColumnTransformer, or simply a features dictionary.
+- Category_encoders: *OneHotEncoder*, *OrdinalEncoder*, *BaseNEncoder*, *BinaryEncoder*, *TargetEncoder*
+- Sklearn ColumnTransformer: *OneHotEncoder*, *OrdinalEncoder*, *StandardScaler*, *QuantileTransformer*, *PowerTransformer*
+
+## 🛠 Installation
+
+Shapash is intended to work with Python versions 3.8 to 3.10. Installation can be done with pip:
+
+```
+pip install shapash
+```
+
+In order to generate the Shapash Report, some extra requirements are needed.
+You can install these using the following command:
+```
+pip install shapash[report]
+```
+
+If you encounter **compatibility issues**, you may check the corresponding section of the Shapash documentation [here](https://shapash.readthedocs.io/en/latest/installation-instructions/index.html).
+
+## 🕐 Quickstart
+
+The steps to display results:
+
+- Step 1: Declare a SmartExplainer object
+  > There is one mandatory parameter: the model.
+  > You can declare a features dict here to specify the labels to display.
+
+```
+from shapash import SmartExplainer
+xpl = SmartExplainer(
+    model=regressor,
+    features_dict=house_dict,   # Optional parameter
+    preprocessing=encoder,      # Optional: compile step can use inverse_transform method
+    postprocessing=postprocess, # Optional: see postprocessing tutorial
+)
+```
+
+- Step 2: Compile the dataset
+  > There is one mandatory parameter in the compile method: the dataset.
+
+```
+xpl.compile(
+    x=Xtest,
+    y_pred=y_pred,  # Optional: use your own prediction (by default: model.predict)
+    y_target=yTest, # Optional: displays True Values vs Predicted Values
+    additional_data=X_additional,  # Optional: additional dataset of features for the webapp
+    additional_features_dict=features_dict_additional,  # Optional: dict for additional data
+)
+```
+
+- Step 3: Display output
+  > There are several outputs and plots available. For example, you can launch the web app:
+
+```
+app = xpl.run_app()
+```
+
+[Live Demo Shapash-Monitor](https://shapash-demo.ossbymaif.fr/)
+
+- Step 4: Generate the Shapash Report
+  > This step generates a standalone HTML report of your project using the different splits
+  of your dataset and the metrics you used:
+
+```
+xpl.generate_report(
+    output_file='path/to/output/report.html',
+    project_info_file='path/to/project_info.yml',
+    x_train=Xtrain,
+    y_train=ytrain,
+    y_test=ytest,
+    title_story="House prices report",
+    title_description="""This document is a data science report of the kaggle house prices tutorial project.
+        It was generated using the Shapash library.""",
+    metrics=[{'name': 'MSE', 'path': 'sklearn.metrics.mean_squared_error'}]
+)
+```
+
+[Report Example](https://shapash.readthedocs.io/en/latest/report.html)
+
+- Step 5: From training to deployment: the SmartPredictor object
+  > Shapash provides a SmartPredictor object to deploy the summary of local explanations for operational needs.
+  It is an object dedicated to deployment, lighter than SmartExplainer, with additional consistency checks.
+  SmartPredictor can be used with an API or in batch mode. It provides predictions and detailed or summarized local
+  explainability using appropriate wording.
+
+```
+predictor = xpl.to_smartpredictor()
+```
+See the tutorial section to learn how to use the SmartPredictor object.
+
+## 📖 Tutorials
+This GitHub repository offers many tutorials to help you get started with Shapash easily.
+
Overview + +- [Launch the webapp with a concrete use case](tutorial/tutorial01-Shapash-Overview-Launch-WebApp.ipynb) +- [Jupyter Overviews - The main outputs and methods available with the SmartExplainer object](tutorial/tutorial02-Shapash-overview-in-Jupyter.ipynb) +- [Shapash in production: From model training to deployment (API or Batch Mode)](tutorial/tutorial03-Shapash-overview-model-in-production.ipynb) +- [Use groups of features](tutorial/common/tuto-common01-groups_of_features.ipynb) +- [Deploy local explainability in production with SmartPredictor](tutorial/predictor/tuto-smartpredictor-introduction-to-SmartPredictor.ipynb) + +
+ +
Charts and plots
+
+- [**Shapash** Features Importance](tutorial/plot/tuto-plot03-features-importance.ipynb)
+- [Contribution plot to understand how one feature affects a prediction](tutorial/plot/tuto-plot02-contribution_plot.ipynb)
+- [Summarize, display and export local contributions using the filter and local_plot methods](tutorial/plot/tuto-plot01-local_plot-and-to_pandas.ipynb)
+- [Compare plot to understand why predictions for several individuals differ](tutorial/plot/tuto-plot04-compare_plot.ipynb)
+- [Visualize interactions between pairs of variables](tutorial/plot/tuto-plot05-interactions-plot.ipynb)
+- [Customize colors in the webapp, plots and report](tutorial/common/tuto-common02-colors.ipynb)
+
+ +
Different ways to use Encoders and Dictionaries
+
+- [Use Category_Encoder & inverse transformation](tutorial/encoder/tuto-encoder01-using-category_encoder.ipynb)
+- [Use ColumnTransformers](tutorial/encoder/tuto-encoder02-using-columntransformer.ipynb)
+- [Use Simple Python Dictionaries](tutorial/encoder/tuto-encoder03-using-dict.ipynb)
+
+ +
Displaying data with postprocessing + +[Using postprocessing parameter in compile method](tutorial/postprocess/tuto-postprocess01.ipynb) + +
+ +
Using different backends
+
+- [Compute Shapley Contributions using **Shap**](tutorial/explainer/tuto-expl01-Shapash-Viz-using-Shap-contributions.ipynb)
+- [Use **Lime** to compute local explanations and summarize them with **Shapash**](tutorial/explainer/tuto-expl02-Shapash-Viz-using-Lime-contributions.ipynb)
+- [Use the **ACV backend** to compute Active Shapley Values and SDP global importance](tutorial/explainer/tuto-expl03-Shapash-acv-backend.ipynb)
+- [Compile Lime faster and check consistency of contributions](tutorial/explainer/tuto-expl04-Shapash-compute-Lime-faster.ipynb)
+
+ +
Evaluating the quality of your explainability + +- [Building confidence on explainability methods using **Stability**, **Consistency** and **Compacity** metrics](tutorial/explainability_quality/tuto-quality01-Builing-confidence-explainability.ipynb) + +
+ +
Generate a report of your project + +- [Generate a standalone HTML report of your project with generate_report](tutorial/report/tuto-shapash-report01.ipynb) + +
+ + + + +%package -n python3-shapash +Summary: Shapash is a Python library which aims to make machine learning interpretable and understandable by everyone. +Provides: python-shapash +BuildRequires: python3-devel +BuildRequires: python3-setuptools +BuildRequires: python3-pip +%description -n python3-shapash +

+Shapash is a Python library which aims to make machine learning interpretable and
+understandable by everyone. It provides several types of visualization with explicit
+labels, a webapp to navigate between global and local explainability, and a standalone
+HTML report that supports model auditing. See the source package description above for
+the full feature overview and tutorials.
+ + + + +%package help +Summary: Development documents and examples for shapash +Provides: python3-shapash-doc +%description help +

+Development documents and examples for shapash, a Python library which aims to make
+machine learning interpretable and understandable by everyone. See the package
+homepage for tutorials and the full documentation.

+ +

+ +## ๐Ÿค Contributors + +
+
+ + + + + +
+
+ + +## ๐Ÿ† Awards + + + + + + + + + + +## ๐Ÿ”ฅ Features + +- Display clear and understandable results: plots and outputs use **explicit labels** for each feature and its values + +

+ + + +
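The explicit labels above come from a plain features dictionary mapping technical column names to human-readable labels. A minimal sketch (the column names below are hypothetical, chosen only for illustration):

```python
# Hypothetical column names: the features dict maps technical column
# names to the explicit labels Shapash displays in plots, the webapp
# and the report.
features_dict = {
    "GrLivArea": "Above-ground living area (square feet)",
    "YearBuilt": "Original construction year",
    "OverallQual": "Overall material and finish quality",
}

# Later handed to the explainer, e.g.
# xpl = SmartExplainer(model=regressor, features_dict=features_dict)
```

Any column missing from the dict simply keeps its technical name in the outputs.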

+ +

+ + + +

+ +

+ + + +

+ + +- Allow Data Scientists to quickly understand their models by using a **webapp** to easily navigate between global and local explainability, and understand how the different features contribute: [Live Demo Shapash-Monitor](https://shapash-demo.ossbymaif.fr/) + +- **Summarize and export** the local explanation +> **Shapash** proposes a short and clear local explanation. It allows each user, whatever their Data background, to understand a local prediction of a supervised model thanks to a summarized and explicit explanation + + +- **Evaluate** the quality of your explainability using different metrics + +- Easily share and discuss results with non-Data users + +- Select subsets for further analysis of explainability by filtering on explanatory and additional features, correct or wrong predictions. [Picking Examples to Understand Machine Learning Model](https://www.kdnuggets.com/2022/11/picking-examples-understand-machine-learning-model.html) + +- Deploy interpretability part of your project: From model training to deployment (API or Batch Mode) + +- Contribute to the **auditability of your model** by generating a **standalone HTML report** of your projects. [Report Example](https://shapash.readthedocs.io/en/latest/report.html) +>We hope that this report will bring a valuable support to auditing models and data related to a better AI governance. +Data Scientists can now deliver to anyone who is interested in their project **a document that freezes different aspects of their work as a basis of an audit report**. +This document can be easily shared across teams (internal audit, DPO, risk, compliance...). + +

+ +
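The report also consumes a small project metadata file (the `project_info_file` argument of `generate_report`). The structure below is an assumption based on typical Shapash report examples, with free-form sections rendered at the top of the report; check the report tutorial for the exact schema:

```yaml
# Hypothetical project_info.yml: free-form sections of key/value pairs
# displayed in the "Project information" part of the generated report.
General information:
  Project name: House Prices Regression
  Business owner: Data science team
  Contact: data-team@example.com
Dataset description:
  Source: Kaggle house prices tutorial dataset
  Training period: 2006-2010
```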

+ +## โš™๏ธ How Shapash works +**Shapash** is an overlay package for libraries dedicated to the interpretability of models. It uses Shap or Lime backend +to compute contributions. +**Shapash** builds on the different steps necessary to build a machine learning model to make the results understandable + +

+ +

+ +**Shapash** works for Regression, Binary Classification or Multiclass problem.
> For example, you can launch the web app:
### Overview
+ +
### Charts and plots
+ +
### Different ways to use Encoders and Dictionaries
+ +
### Displaying data with postprocessing
+ +
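For reference, the postprocessing parameter is a plain dict associating a display rule with a feature. The rule types (`prefix`, `suffix`) and column names below are assumptions taken from the postprocessing tutorial, shown only as a sketch:

```python
# Hypothetical postprocessing rules: each entry rewrites how a feature's
# values are displayed, without changing the underlying data or the model.
postprocess = {
    "price": {"type": "prefix", "rule": "$"},         # show values as "$125000"
    "surface": {"type": "suffix", "rule": " sq ft"},  # show values as "1200 sq ft"
}

# Later handed to the explainer, e.g.
# xpl = SmartExplainer(model=regressor, postprocessing=postprocess)
```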
### Using different backends
+ +
### Evaluating the quality of your explainability
+ +
### Generate a report of your project
+ + + + +%prep +%autosetup -n shapash-2.3.0 + +%build +%py3_build + +%install +%py3_install +install -d -m755 %{buildroot}/%{_pkgdocdir} +if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi +if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi +if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi +if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi +pushd %{buildroot} +if [ -d usr/lib ]; then + find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst +fi +if [ -d usr/lib64 ]; then + find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst +fi +if [ -d usr/bin ]; then + find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst +fi +if [ -d usr/sbin ]; then + find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst +fi +touch doclist.lst +if [ -d usr/share/man ]; then + find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst +fi +popd +mv %{buildroot}/filelist.lst . +mv %{buildroot}/doclist.lst . + +%files -n python3-shapash -f filelist.lst +%dir %{python3_sitelib}/* + +%files help -f doclist.lst +%{_docdir}/* + +%changelog +* Tue Apr 11 2023 Python_Bot - 2.3.0-1 +- Package Spec generated diff --git a/sources b/sources new file mode 100644 index 0000000..0d901d8 --- /dev/null +++ b/sources @@ -0,0 +1 @@ +cc3ca82be59056d0bdd811bdb172478b shapash-2.3.0.tar.gz -- cgit v1.2.3