%global _empty_manifest_terminate_build 0
Name: python-pycausalimpact
Version: 0.1.1
Release: 1
Summary: Python version of Google's Causal Impact model
License: MIT
URL: https://github.com/dafiti/causalimpact
Source0: https://mirrors.nju.edu.cn/pypi/web/packages/e3/62/4b471c8ceb8f9a2115892bf80a438b7e2567a8a4fe0d9f95544a1fc53918/pycausalimpact-0.1.1.tar.gz
BuildArch: noarch

Requires: python3-numpy
Requires: python3-scipy
Requires: python3-statsmodels
Requires: python3-matplotlib
Requires: python3-jinja2
Requires: python3-ipython
Requires: python3-jupyter
Requires: python3-pytest
Requires: python3-pytest-cov
Requires: python3-mock
Requires: python3-tox

%description
# Causal Impact

[![Build Status](https://travis-ci.com/dafiti/causalimpact.svg?branch=master)](https://travis-ci.com/dafiti/causalimpact)
[![Coverage Status](https://coveralls.io/repos/github/dafiti/causalimpact/badge.svg?branch=master)](https://coveralls.io/github/dafiti/causalimpact?branch=master)
[![PyPI version](https://badge.fury.io/py/pycausalimpact.svg)](https://badge.fury.io/py/pycausalimpact)
[![Pyversions](https://img.shields.io/pypi/pyversions/pycausalimpact.svg)](https://pypi.python.org/pypi/pycausalimpact)
[![GitHub license](https://img.shields.io/github/license/dafiti/causalimpact.svg)](https://github.com/dafiti/causalimpact/blob/master/LICENSE)

Python causal impact (or causal inference) implementation of [Google's](https://github.com/google/CausalImpact) model with all functionalities fully ported and tested.

## How it works

The main goal of the algorithm is to infer the expected effect that a given intervention (or any action) had on some response variable by analyzing differences between the expected and the observed time series data.

The data is divided into two parts. The first is the "pre-intervention" period, on which a [Bayesian Structural Time Series](https://en.wikipedia.org/wiki/Bayesian_structural_time_series) model is fitted to best explain what has been observed. The fitted model is then used on the second part of the data (the "post-intervention" period) to forecast what the response would have looked like had the intervention not taken place. The inferences are based on the differences between the observed response and the predicted one, which yield the absolute and relative effects the intervention had on the data.

The model makes the assumption (which should be confirmed in your data) that the response variable can be precisely modeled by a linear regression on what are known as "covariates" (or `X`), which **must not** be affected by the intervention that took place. For instance, if a company wants to infer what impact a given marketing campaign will have on its "revenue", then its daily "visits" cannot be used as a covariate, as the total visits are likely to be affected by the campaign.

The model is most commonly used to infer the impact that marketing interventions have on businesses, such as the expected revenue associated with a given campaign, or to measure more precisely the revenue a given channel brings in by completely turning it off (also known as "hold-out" tests). It's important to note, though, that the model can be used in many different areas and subjects; any intervention on time series data can potentially be modeled and inferences made from observed and predicted data.

Please refer to the getting-started material in the `examples` folder for more information.
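To make the covariate requirement above concrete, here is a small simulated sketch (not taken from the package's own examples) with a "revenue" response and two covariates that are unaffected by the intervention. The column names and numbers are made up, and it assumes, as in the simple example further below, that the first column of the DataFrame is the response and the remaining columns are the covariates:

```python
# Illustrative sketch only: simulated "revenue" response plus two covariates
# that are not themselves affected by the (simulated) intervention.
import numpy as np
import pandas as pd

from causalimpact import CausalImpact

np.random.seed(1)
market_index = 100 + np.random.normal(size=120).cumsum()      # covariate 1
competitor_sales = 50 + np.random.normal(size=120).cumsum()   # covariate 2
revenue = 2.0 * market_index + 0.5 * competitor_sales + np.random.normal(size=120)
revenue[90:] += 10  # simulated campaign effect in the post-period

# First column is treated as the response; the remaining ones as covariates.
data = pd.DataFrame({'revenue': revenue,
                     'market_index': market_index,
                     'competitor_sales': competitor_sales})

pre_period = [0, 89]
post_period = [90, 119]

ci = CausalImpact(data, pre_period, post_period)
print(ci.summary())
```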
## Installation

    pip install pycausalimpact

## Requirements

- python{2.7, 3.6, 3.7, 3.8} \*
- numpy
- scipy
- statsmodels
- matplotlib
- jinja2

\* **We no longer support Python 2.7!** Please refer to the tag `0.0.16` (`pip install pycausalimpact==0.0.16`) for the latest version that still supports it.

## Getting Started

We recommend this [presentation](https://www.youtube.com/watch?v=GTgZfCltMm8) by Kay Brodersen (one of the creators of the Causal Impact implementation in R).

We also created this introductory [ipython notebook](http://nbviewer.jupyter.org/github/dafiti/causalimpact/blob/master/examples/getting_started.ipynb) with examples of how to use this package.

### Simple Example

Here's a simple example (which can also be found in Google's original R implementation) running in Python:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima_process import ArmaProcess

from causalimpact import CausalImpact

np.random.seed(12345)
ar = np.r_[1, 0.9]
ma = np.array([1])
arma_process = ArmaProcess(ar, ma)
X = 100 + arma_process.generate_sample(nsample=100)
y = 1.2 * X + np.random.normal(size=100)
y[70:] += 5

data = pd.DataFrame({'y': y, 'X': X}, columns=['y', 'X'])
pre_period = [0, 69]
post_period = [70, 99]

ci = CausalImpact(data, pre_period, post_period)
print(ci.summary())
print(ci.summary(output='report'))
ci.plot()
```

![alt text](https://raw.githubusercontent.com/dafiti/causalimpact/master/examples/ci_plot.png)

## Differences Between Python and R Packages

One thing you'll notice when using this package is that sometimes its results converge to something similar to the R package's output, while at other times it may yield different conclusions. This is quite a complex topic and we have discussed it more thoroughly in issues [#34](https://github.com/dafiti/causalimpact/issues/34), [#37](https://github.com/dafiti/causalimpact/issues/37) and [#40](https://github.com/dafiti/causalimpact/issues/40), which we highly recommend reading.

In a nutshell, the Python implementation relies on [statsmodels](https://github.com/statsmodels/statsmodels), which uses a classical Kalman filter approach for solving the state-space equations, whereas R uses a Bayesian approach (from the [bsts](https://github.com/cran/bsts) package) with a stochastic Kalman filter technique; both algorithms are expected to converge to a similar final state-space solution [(ref)](https://stackoverflow.com/questions/57300211/local-level-model-not-fully-optimizing-irregular-state/57316141?noredirect=1#comment101157526_57316141). Still, despite the similarities, the two packages use different assumptions for prior initialization as well as for the steps involved in the optimization process: while R relies on the user's prior knowledge, Python uses classical statistical techniques aiming to maximize the likelihood function expressed in terms of the structural time series components.

As we discuss in the previously mentioned issues, it's hard to tell which one is right or "more right"; each package has its own assumptions and its own techniques, leaving it up to the final user to decide what is appropriate. We recommend comparing results from both packages in your use case to get a better sense of whether the conclusions converge.
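To make the comparison above more concrete, here is a small, self-contained sketch (not part of pycausalimpact) of the classical maximum likelihood route that statsmodels takes: a local level model with a regression component is fitted on the pre-period via the Kalman filter and then used to forecast the counterfactual for the post-period. All data and variable names are made up for illustration.

```python
# Minimal sketch of the classical (maximum likelihood) approach described
# above: a local level model with a regression component, fitted on the
# pre-period and used to forecast the post-period counterfactual.
import numpy as np
from statsmodels.tsa.statespace.structural import UnobservedComponents

np.random.seed(42)
X = 100 + np.random.normal(size=100).cumsum()   # covariate
y = 1.2 * X + np.random.normal(size=100)        # response

y_pre, X_pre = y[:70], X[:70].reshape(-1, 1)
X_post = X[70:].reshape(-1, 1)

model = UnobservedComponents(y_pre, level='local level', exog=X_pre)
fitted = model.fit(disp=False)  # MLE, solved with the Kalman filter

# Counterfactual forecast for the post-period, driven only by the covariate.
forecast = fitted.get_forecast(steps=len(X_post), exog=X_post)
print(forecast.predicted_mean[:5])
print(forecast.conf_int()[:5])
```

The R package reaches an analogous state-space solution through Bayesian sampling with priors supplied by the user, which is where the differences discussed above originate.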
As a final note, when using this Python package, **we highly recommend setting the prior as None**, like so:

    ci = CausalImpact(data, pre_period, post_period, prior_level_sd=None)

This lets statsmodels itself optimize the prior on the local level component. If you are confident that your local level prior should be a specific value (say `0.01`), then it's probably fine to use it; otherwise you run the risk of obtaining sub-optimal solutions.

## Contributing, Bugs, Questions

Contributions are more than welcome! If you want to propose new changes, fix bugs or improve something, feel free to fork the repository and send us a Pull Request. You can also open new [`Issues`](https://github.com/dafiti/causalimpact/issues) to report bugs and general problems.

%package -n python3-pycausalimpact
Summary: Python version of Google's Causal Impact model
Provides: python-pycausalimpact
BuildRequires: python3-devel
BuildRequires: python3-setuptools
BuildRequires: python3-pip

%description -n python3-pycausalimpact
# Causal Impact

[![Build Status](https://travis-ci.com/dafiti/causalimpact.svg?branch=master)](https://travis-ci.com/dafiti/causalimpact)
[![Coverage Status](https://coveralls.io/repos/github/dafiti/causalimpact/badge.svg?branch=master)](https://coveralls.io/github/dafiti/causalimpact?branch=master)
[![PyPI version](https://badge.fury.io/py/pycausalimpact.svg)](https://badge.fury.io/py/pycausalimpact)
[![Pyversions](https://img.shields.io/pypi/pyversions/pycausalimpact.svg)](https://pypi.python.org/pypi/pycausalimpact)
[![GitHub license](https://img.shields.io/github/license/dafiti/causalimpact.svg)](https://github.com/dafiti/causalimpact/blob/master/LICENSE)

Python causal impact (or causal inference) implementation of [Google's](https://github.com/google/CausalImpact) model with all functionalities fully ported and tested.

## How it works

The main goal of the algorithm is to infer the expected effect that a given intervention (or any action) had on some response variable by analyzing differences between the expected and the observed time series data.

The data is divided into two parts. The first is the "pre-intervention" period, on which a [Bayesian Structural Time Series](https://en.wikipedia.org/wiki/Bayesian_structural_time_series) model is fitted to best explain what has been observed. The fitted model is then used on the second part of the data (the "post-intervention" period) to forecast what the response would have looked like had the intervention not taken place. The inferences are based on the differences between the observed response and the predicted one, which yield the absolute and relative effects the intervention had on the data.

The model makes the assumption (which should be confirmed in your data) that the response variable can be precisely modeled by a linear regression on what are known as "covariates" (or `X`), which **must not** be affected by the intervention that took place. For instance, if a company wants to infer what impact a given marketing campaign will have on its "revenue", then its daily "visits" cannot be used as a covariate, as the total visits are likely to be affected by the campaign.

The model is most commonly used to infer the impact that marketing interventions have on businesses, such as the expected revenue associated with a given campaign, or to measure more precisely the revenue a given channel brings in by completely turning it off (also known as "hold-out" tests).
It's important to note, though, that the model can be used in many different areas and subjects; any intervention on time series data can potentially be modeled and inferences made from observed and predicted data.

Please refer to the getting-started material in the `examples` folder for more information.

## Installation

    pip install pycausalimpact

## Requirements

- python{2.7, 3.6, 3.7, 3.8} \*
- numpy
- scipy
- statsmodels
- matplotlib
- jinja2

\* **We no longer support Python 2.7!** Please refer to the tag `0.0.16` (`pip install pycausalimpact==0.0.16`) for the latest version that still supports it.

## Getting Started

We recommend this [presentation](https://www.youtube.com/watch?v=GTgZfCltMm8) by Kay Brodersen (one of the creators of the Causal Impact implementation in R).

We also created this introductory [ipython notebook](http://nbviewer.jupyter.org/github/dafiti/causalimpact/blob/master/examples/getting_started.ipynb) with examples of how to use this package.

### Simple Example

Here's a simple example (which can also be found in Google's original R implementation) running in Python:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima_process import ArmaProcess

from causalimpact import CausalImpact

np.random.seed(12345)
ar = np.r_[1, 0.9]
ma = np.array([1])
arma_process = ArmaProcess(ar, ma)
X = 100 + arma_process.generate_sample(nsample=100)
y = 1.2 * X + np.random.normal(size=100)
y[70:] += 5

data = pd.DataFrame({'y': y, 'X': X}, columns=['y', 'X'])
pre_period = [0, 69]
post_period = [70, 99]

ci = CausalImpact(data, pre_period, post_period)
print(ci.summary())
print(ci.summary(output='report'))
ci.plot()
```

![alt text](https://raw.githubusercontent.com/dafiti/causalimpact/master/examples/ci_plot.png)

## Differences Between Python and R Packages

One thing you'll notice when using this package is that sometimes its results converge to something similar to the R package's output, while at other times it may yield different conclusions. This is quite a complex topic and we have discussed it more thoroughly in issues [#34](https://github.com/dafiti/causalimpact/issues/34), [#37](https://github.com/dafiti/causalimpact/issues/37) and [#40](https://github.com/dafiti/causalimpact/issues/40), which we highly recommend reading.

In a nutshell, the Python implementation relies on [statsmodels](https://github.com/statsmodels/statsmodels), which uses a classical Kalman filter approach for solving the state-space equations, whereas R uses a Bayesian approach (from the [bsts](https://github.com/cran/bsts) package) with a stochastic Kalman filter technique; both algorithms are expected to converge to a similar final state-space solution [(ref)](https://stackoverflow.com/questions/57300211/local-level-model-not-fully-optimizing-irregular-state/57316141?noredirect=1#comment101157526_57316141). Still, despite the similarities, the two packages use different assumptions for prior initialization as well as for the steps involved in the optimization process: while R relies on the user's prior knowledge, Python uses classical statistical techniques aiming to maximize the likelihood function expressed in terms of the structural time series components.

As we discuss in the previously mentioned issues, it's hard to tell which one is right or "more right"; each package has its own assumptions and its own techniques, leaving it up to the final user to decide what is appropriate.
We recommend comparing results from both packages in your use case to get a better sense of whether the conclusions converge.

As a final note, when using this Python package, **we highly recommend setting the prior as None**, like so:

    ci = CausalImpact(data, pre_period, post_period, prior_level_sd=None)

This lets statsmodels itself optimize the prior on the local level component. If you are confident that your local level prior should be a specific value (say `0.01`), then it's probably fine to use it; otherwise you run the risk of obtaining sub-optimal solutions.

## Contributing, Bugs, Questions

Contributions are more than welcome! If you want to propose new changes, fix bugs or improve something, feel free to fork the repository and send us a Pull Request. You can also open new [`Issues`](https://github.com/dafiti/causalimpact/issues) to report bugs and general problems.

%package help
Summary: Development documents and examples for pycausalimpact
Provides: python3-pycausalimpact-doc

%description help
# Causal Impact

[![Build Status](https://travis-ci.com/dafiti/causalimpact.svg?branch=master)](https://travis-ci.com/dafiti/causalimpact)
[![Coverage Status](https://coveralls.io/repos/github/dafiti/causalimpact/badge.svg?branch=master)](https://coveralls.io/github/dafiti/causalimpact?branch=master)
[![PyPI version](https://badge.fury.io/py/pycausalimpact.svg)](https://badge.fury.io/py/pycausalimpact)
[![Pyversions](https://img.shields.io/pypi/pyversions/pycausalimpact.svg)](https://pypi.python.org/pypi/pycausalimpact)
[![GitHub license](https://img.shields.io/github/license/dafiti/causalimpact.svg)](https://github.com/dafiti/causalimpact/blob/master/LICENSE)

Python causal impact (or causal inference) implementation of [Google's](https://github.com/google/CausalImpact) model with all functionalities fully ported and tested.

## How it works

The main goal of the algorithm is to infer the expected effect that a given intervention (or any action) had on some response variable by analyzing differences between the expected and the observed time series data.

The data is divided into two parts. The first is the "pre-intervention" period, on which a [Bayesian Structural Time Series](https://en.wikipedia.org/wiki/Bayesian_structural_time_series) model is fitted to best explain what has been observed. The fitted model is then used on the second part of the data (the "post-intervention" period) to forecast what the response would have looked like had the intervention not taken place. The inferences are based on the differences between the observed response and the predicted one, which yield the absolute and relative effects the intervention had on the data.

The model makes the assumption (which should be confirmed in your data) that the response variable can be precisely modeled by a linear regression on what are known as "covariates" (or `X`), which **must not** be affected by the intervention that took place. For instance, if a company wants to infer what impact a given marketing campaign will have on its "revenue", then its daily "visits" cannot be used as a covariate, as the total visits are likely to be affected by the campaign.

The model is most commonly used to infer the impact that marketing interventions have on businesses, such as the expected revenue associated with a given campaign, or to measure more precisely the revenue a given channel brings in by completely turning it off (also known as "hold-out" tests).
It's important to note, though, that the model can be used in many different areas and subjects; any intervention on time series data can potentially be modeled and inferences made from observed and predicted data.

Please refer to the getting-started material in the `examples` folder for more information.

## Installation

    pip install pycausalimpact

## Requirements

- python{2.7, 3.6, 3.7, 3.8} \*
- numpy
- scipy
- statsmodels
- matplotlib
- jinja2

\* **We no longer support Python 2.7!** Please refer to the tag `0.0.16` (`pip install pycausalimpact==0.0.16`) for the latest version that still supports it.

## Getting Started

We recommend this [presentation](https://www.youtube.com/watch?v=GTgZfCltMm8) by Kay Brodersen (one of the creators of the Causal Impact implementation in R).

We also created this introductory [ipython notebook](http://nbviewer.jupyter.org/github/dafiti/causalimpact/blob/master/examples/getting_started.ipynb) with examples of how to use this package.

### Simple Example

Here's a simple example (which can also be found in Google's original R implementation) running in Python:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima_process import ArmaProcess

from causalimpact import CausalImpact

np.random.seed(12345)
ar = np.r_[1, 0.9]
ma = np.array([1])
arma_process = ArmaProcess(ar, ma)
X = 100 + arma_process.generate_sample(nsample=100)
y = 1.2 * X + np.random.normal(size=100)
y[70:] += 5

data = pd.DataFrame({'y': y, 'X': X}, columns=['y', 'X'])
pre_period = [0, 69]
post_period = [70, 99]

ci = CausalImpact(data, pre_period, post_period)
print(ci.summary())
print(ci.summary(output='report'))
ci.plot()
```

![alt text](https://raw.githubusercontent.com/dafiti/causalimpact/master/examples/ci_plot.png)

## Differences Between Python and R Packages

One thing you'll notice when using this package is that sometimes its results converge to something similar to the R package's output, while at other times it may yield different conclusions. This is quite a complex topic and we have discussed it more thoroughly in issues [#34](https://github.com/dafiti/causalimpact/issues/34), [#37](https://github.com/dafiti/causalimpact/issues/37) and [#40](https://github.com/dafiti/causalimpact/issues/40), which we highly recommend reading.

In a nutshell, the Python implementation relies on [statsmodels](https://github.com/statsmodels/statsmodels), which uses a classical Kalman filter approach for solving the state-space equations, whereas R uses a Bayesian approach (from the [bsts](https://github.com/cran/bsts) package) with a stochastic Kalman filter technique; both algorithms are expected to converge to a similar final state-space solution [(ref)](https://stackoverflow.com/questions/57300211/local-level-model-not-fully-optimizing-irregular-state/57316141?noredirect=1#comment101157526_57316141). Still, despite the similarities, the two packages use different assumptions for prior initialization as well as for the steps involved in the optimization process: while R relies on the user's prior knowledge, Python uses classical statistical techniques aiming to maximize the likelihood function expressed in terms of the structural time series components.

As we discuss in the previously mentioned issues, it's hard to tell which one is right or "more right"; each package has its own assumptions and its own techniques, leaving it up to the final user to decide what is appropriate.
We recommend comparing results from both packages in your use case to get a better sense of whether the conclusions converge.

As a final note, when using this Python package, **we highly recommend setting the prior as None**, like so:

    ci = CausalImpact(data, pre_period, post_period, prior_level_sd=None)

This lets statsmodels itself optimize the prior on the local level component. If you are confident that your local level prior should be a specific value (say `0.01`), then it's probably fine to use it; otherwise you run the risk of obtaining sub-optimal solutions.

## Contributing, Bugs, Questions

Contributions are more than welcome! If you want to propose new changes, fix bugs or improve something, feel free to fork the repository and send us a Pull Request. You can also open new [`Issues`](https://github.com/dafiti/causalimpact/issues) to report bugs and general problems.

%prep
%autosetup -n pycausalimpact-0.1.1

%build
%py3_build

%install
%py3_install
install -d -m755 %{buildroot}/%{_pkgdocdir}
if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
pushd %{buildroot}
if [ -d usr/lib ]; then
    find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/lib64 ]; then
    find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/bin ]; then
    find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/sbin ]; then
    find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
fi
touch doclist.lst
if [ -d usr/share/man ]; then
    find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
fi
popd
mv %{buildroot}/filelist.lst .
mv %{buildroot}/doclist.lst .

%files -n python3-pycausalimpact -f filelist.lst
%dir %{python3_sitelib}/*

%files help -f doclist.lst
%{_docdir}/*

%changelog
* Tue Apr 11 2023 Python_Bot - 0.1.1-1
- Package Spec generated