%global _empty_manifest_terminate_build 0
Name: python-dice-ml
Version: 0.9
Release: 1
Summary: Generate Diverse Counterfactual Explanations for any machine learning model
License: MIT
URL: https://github.com/interpretml/DiCE
Source0: https://mirrors.nju.edu.cn/pypi/web/packages/64/68/50a1b8ed3e1a567b7870930245706a08493f483035625cc79cf48aac752a/dice_ml-0.9.tar.gz
BuildArch: noarch
Requires: python3-jsonschema
Requires: python3-numpy
Requires: python3-pandas
Requires: python3-scikit-learn
Requires: python3-h5py
Requires: python3-tqdm
Requires: python3-tensorflow
Requires: python3-torch
%description
*How to explain a machine learning model such that the explanation is truthful to the model and yet interpretable to people?*
Ramaravind K. Mothilal, Amit Sharma, Chenhao Tan
FAT* '20 paper | Docs | Example Notebooks | Live Jupyter notebook (Binder)
**Blog Post**: Explanation for ML using diverse counterfactuals
**Case Studies**: Towards Data Science (Hotel Bookings) | Analytics Vidhya (Titanic Dataset)
Explanations are critical for machine learning, especially as machine learning-based systems are being used to inform decisions in societally critical domains such as finance, healthcare, education, and criminal justice.
However, most explanation methods depend on an approximation of the ML model to create an interpretable explanation.
For example, consider a person who applied for a loan and was rejected by the loan distribution algorithm of a financial company.
Typically, the company may provide an explanation of why the loan was rejected, for example, due to "poor credit history".
However, such an explanation does not help the person decide *what they should do next* to improve their chances of being approved in the future.
Critically, the most important feature may not be enough to flip the decision of the algorithm, and in practice may not even be changeable, such as gender or race.
DiCE implements counterfactual (CF) explanations that provide this information by showing feature-perturbed versions of the same person who would have received the loan, e.g., ``you would have received the loan if your income was higher by $10,000``. In other words, it provides "what-if" explanations for model output and can be a useful complement to other explanation methods, both for end-users and model developers.
Barring simple linear models, however, it is difficult to generate CF examples that work for any machine learning model. DiCE is based on recent research that generates CF explanations for any ML model. The core idea is to set up the search for such explanations as an optimization problem, similar to finding adversarial examples. The critical difference is that for explanations, we need perturbations that change the output of the ML model but are also diverse and feasible to change. Therefore, DiCE supports generating a set of counterfactual explanations and has tunable parameters for the diversity and proximity of the explanations to the original input. It also supports simple constraints on features to ensure feasibility of the generated counterfactual examples.
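
The snippet below is a minimal sketch of the typical workflow, assuming the small adult-income sample dataset bundled with dice_ml and a scikit-learn pipeline chosen here only for illustration; consult the project docs for the exact API of this release::

    import dice_ml
    from dice_ml.utils import helpers                 # ships a small adult-income sample dataset
    from sklearn.compose import ColumnTransformer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import OneHotEncoder

    df = helpers.load_adult_income_dataset()
    X, y = df.drop(columns=['income']), df['income']
    numeric = ['age', 'hours_per_week']
    categorical = [c for c in X.columns if c not in numeric]

    # Any model exposing predict()/predict_proba() works; DiCE treats it as a black box.
    model = Pipeline([
        ('prep', ColumnTransformer([('cat', OneHotEncoder(handle_unknown='ignore'), categorical)],
                                    remainder='passthrough')),
        ('clf', RandomForestClassifier(n_estimators=50)),
    ]).fit(X, y)

    # Wrap the data and model, then request a diverse set of counterfactuals
    # for the first query instance.
    d = dice_ml.Data(dataframe=df, continuous_features=numeric, outcome_name='income')
    m = dice_ml.Model(model=model, backend='sklearn')
    explainer = dice_ml.Dice(d, m, method='random')
    cf = explainer.generate_counterfactuals(X[0:1], total_CFs=4, desired_class='opposite')
    cf.visualize_as_dataframe(show_only_changes=True)

For TensorFlow or PyTorch models the ``backend`` argument changes accordingly (e.g., ``TF2`` or ``PYT``), which is why those frameworks appear in the runtime requirements above.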
%package -n python3-dice-ml
Summary: Generate Diverse Counterfactual Explanations for any machine learning model
Provides: python-dice-ml
BuildRequires: python3-devel
BuildRequires: python3-setuptools
BuildRequires: python3-pip
%description -n python3-dice-ml
*How to explain a machine learning model such that the explanation is truthful to the model and yet interpretable to people?*
Ramaravind K. Mothilal, Amit Sharma, Chenhao Tan
FAT* '20 paper | Docs | Example Notebooks | Live Jupyter notebook (Binder)
**Blog Post**: Explanation for ML using diverse counterfactuals
**Case Studies**: Towards Data Science (Hotel Bookings) | Analytics Vidhya (Titanic Dataset)
Explanations are critical for machine learning, especially as machine learning-based systems are being used to inform decisions in societally critical domains such as finance, healthcare, education, and criminal justice.
However, most explanation methods depend on an approximation of the ML model to create an interpretable explanation.
For example, consider a person who applied for a loan and was rejected by the loan distribution algorithm of a financial company.
Typically, the company may provide an explanation of why the loan was rejected, for example, due to "poor credit history".
However, such an explanation does not help the person decide *what they should do next* to improve their chances of being approved in the future.
Critically, the most important feature may not be enough to flip the decision of the algorithm, and in practice may not even be changeable, such as gender or race.
DiCE implements counterfactual (CF) explanations that provide this information by showing feature-perturbed versions of the same person who would have received the loan, e.g., ``you would have received the loan if your income was higher by $10,000``. In other words, it provides "what-if" explanations for model output and can be a useful complement to other explanation methods, both for end-users and model developers.
Barring simple linear models, however, it is difficult to generate CF examples that work for any machine learning model. DiCE is based on recent research that generates CF explanations for any ML model. The core idea is to set up the search for such explanations as an optimization problem, similar to finding adversarial examples. The critical difference is that for explanations, we need perturbations that change the output of the ML model but are also diverse and feasible to change. Therefore, DiCE supports generating a set of counterfactual explanations and has tunable parameters for the diversity and proximity of the explanations to the original input. It also supports simple constraints on features to ensure feasibility of the generated counterfactual examples.
%package help
Summary: Development documents and examples for dice-ml
Provides: python3-dice-ml-doc
%description help
*How to explain a machine learning model such that the explanation is truthful to the model and yet interpretable to people?*
Ramaravind K. Mothilal, Amit Sharma, Chenhao Tan
FAT* '20 paper | Docs | Example Notebooks | Live Jupyter notebook (Binder)
**Blog Post**: Explanation for ML using diverse counterfactuals
**Case Studies**: Towards Data Science (Hotel Bookings) | Analytics Vidhya (Titanic Dataset)
Explanations are critical for machine learning, especially as machine learning-based systems are being used to inform decisions in societally critical domains such as finance, healthcare, education, and criminal justice.
However, most explanation methods depend on an approximation of the ML model to create an interpretable explanation.
For example, consider a person who applied for a loan and was rejected by the loan distribution algorithm of a financial company.
Typically, the company may provide an explanation of why the loan was rejected, for example, due to "poor credit history".
However, such an explanation does not help the person decide *what they should do next* to improve their chances of being approved in the future.
Critically, the most important feature may not be enough to flip the decision of the algorithm, and in practice may not even be changeable, such as gender or race.
DiCE implements counterfactual (CF) explanations that provide this information by showing feature-perturbed versions of the same person who would have received the loan, e.g., ``you would have received the loan if your income was higher by $10,000``. In other words, it provides "what-if" explanations for model output and can be a useful complement to other explanation methods, both for end-users and model developers.
Barring simple linear models, however, it is difficult to generate CF examples that work for any machine learning model. DiCE is based on recent research that generates CF explanations for any ML model. The core idea is to set up the search for such explanations as an optimization problem, similar to finding adversarial examples. The critical difference is that for explanations, we need perturbations that change the output of the ML model but are also diverse and feasible to change. Therefore, DiCE supports generating a set of counterfactual explanations and has tunable parameters for the diversity and proximity of the explanations to the original input. It also supports simple constraints on features to ensure feasibility of the generated counterfactual examples.
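
As a further hedged sketch under the same assumptions (the bundled adult-income sample data, an illustrative scikit-learn pipeline, and feature ranges picked only for demonstration), the feasibility constraints mentioned above are passed directly to ``generate_counterfactuals``::

    import dice_ml
    from dice_ml.utils import helpers
    from sklearn.compose import make_column_transformer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import OneHotEncoder

    df = helpers.load_adult_income_dataset()
    X, y = df.drop(columns=['income']), df['income']
    categorical = [c for c in X.columns if c not in ('age', 'hours_per_week')]

    # A simple stand-in classifier; DiCE only needs its predict()/predict_proba().
    model = make_pipeline(
        make_column_transformer((OneHotEncoder(handle_unknown='ignore'), categorical),
                                remainder='passthrough'),
        LogisticRegression(max_iter=1000),
    ).fit(X, y)

    d = dice_ml.Data(dataframe=df, continuous_features=['age', 'hours_per_week'], outcome_name='income')
    explainer = dice_ml.Dice(d, dice_ml.Model(model=model, backend='sklearn'), method='random')

    # Only allow changes to features the applicant could realistically act on,
    # and bound how far hours_per_week may move, so counterfactuals stay actionable.
    cf = explainer.generate_counterfactuals(
        X[0:1], total_CFs=4, desired_class='opposite',
        features_to_vary=['education', 'occupation', 'hours_per_week'],
        permitted_range={'hours_per_week': [30, 60]},
    )
    cf.visualize_as_dataframe(show_only_changes=True)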
%prep
%autosetup -n dice_ml-0.9
%build
%py3_build
%install
%py3_install
install -d -m755 %{buildroot}/%{_pkgdocdir}
if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
pushd %{buildroot}
if [ -d usr/lib ]; then
find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/lib64 ]; then
find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/bin ]; then
find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/sbin ]; then
find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
fi
touch doclist.lst
if [ -d usr/share/man ]; then
find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
fi
popd
mv %{buildroot}/filelist.lst .
mv %{buildroot}/doclist.lst .
%files -n python3-dice-ml -f filelist.lst
%dir %{python3_sitelib}/*
%files help -f doclist.lst
%{_docdir}/*
%changelog
* Tue Apr 11 2023 Python_Bot - 0.9-1
- Package Spec generated