From 85c17e1eb5f7f99de5bd1458ad4e07b4a0b4e6a7 Mon Sep 17 00:00:00 2001
From: CoprDistGit
Date: Wed, 31 May 2023 06:58:25 +0000
Subject: automatic import of python-gym-update

---
 python-gym-update.spec | 331 +++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 331 insertions(+)
 create mode 100644 python-gym-update.spec

diff --git a/python-gym-update.spec b/python-gym-update.spec
new file mode 100644
index 0000000..a651354
--- /dev/null
+++ b/python-gym-update.spec
@@ -0,0 +1,331 @@
+%global _empty_manifest_terminate_build 0
+Name:		python-gym-update
+Version:	0.6.2
+Release:	1
+Summary:	An OpenAI Gym env for continuous control
+License:	MIT
+URL:		https://pypi.org/project/gym-update/
+Source0:	https://mirrors.nju.edu.cn/pypi/web/packages/96/2e/a1b89842b773f90616ac09b0a409f5d5eb30a7e7546cf3436c1121d47d23/gym_update-0.6.2.tar.gz
+BuildArch:	noarch
+
+Requires:	python3-gym
+
+%description
+# Gym-style API environment
+
+## A write-up
+[Here's](https://www.overleaf.com/project/62b89d3b150bcf81e449aeb3) the most recent write-up on the environment and the algorithms applied to it.
+
+## Comments
+
+- In general, whether we record transitions up to "Done" or update as soon as "Done" is reached, very little information is collected: "Done" is reached after only one or two transitions. A different termination condition should be specified.
+
+
+## Environment dynamics
+
+
+
+The functions used:
+- $f_e(x^s, x^a) = \mathbb{E}[Y_e|X_e(1) = (x^s, x^a)]$: causal mechanism determining the probability of $Y_e = 1$ given $X_e(1)$.
We will take $f_e(x^s, x^a) = (1 + \exp(-x^s - x^a))^{-1}$
+- $g^a_e(\rho, x^a) \in \{g : [0, 1] \times \Omega \rightarrow \Omega \}$: intervention process on $X^a$ in response to a predictive score $\rho$, updating $X^a_e(0) \rightarrow X^a_e(1)$
+- $\rho_e(x^s, x^a) \in \{\rho_e : \Omega^s \times \Omega^a \rightarrow [0, 1]\}$: predictive score trained at epoch $e$
+
+
+Additional information:
+- At epoch $e$, the predictive score $\rho$ uses $X^a_e(0), X^s_e(0)$ and $Y_e$ as training data; previous epochs are ignored and $X^a_e(1), X^s_e(1)$ are not observed. The predictive score is computed at time $t=0$.
+- We allow $\rho_e$ to be an arbitrary function, but generally presume it is an estimator of $\rho_e(x^s, x^a) \approx E[Y_e|X^s_e(0) = x^s, X^a_e(0) = x^a] = f_e(x^s, g^a_e(\rho_{e-1}, x^a)) \triangleq \tilde{f}_e(x^s, x^a)$
+- for all $e$, $f_e = E[Y_e|X_e] = E[Y_e|X_e(1)]$: $Y_e$ depends on $X_e(1)$, that is, on the covariates after any potential interventions
+- a higher value of $\rho$ means a larger intervention is made (we assume $g^a_e$ is deterministic, but random-valued functions may more accurately capture the uncertainty of real-world interventions)
+
+
+
+## Naive updating
+By ‘naive’ updating we mean that a new score $\rho_e$ is fitted in each epoch and used as a drop-in replacement for the existing score $\rho_{e-1}$. This leads to estimates $\rho_e(x^s, x^a)$ that converge, as $e \rightarrow \infty$, to a setting in which $\rho_e$ accurately estimates its own effect: conceptually, $\rho_e(x^s, x^a)$ estimates the probability of $Y$ after interventions have been made on the basis of $\rho_e(x^s, x^a)$ itself.
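As a concrete check of the functional form chosen for $f_e$ above, a minimal Python sketch (the function name is ours, not the package's):

```python
import math

def f_e(x_s: float, x_a: float) -> float:
    """Causal mechanism: P(Y_e = 1 | X_e(1) = (x_s, x_a)) = (1 + exp(-x_s - x_a))^-1."""
    return 1.0 / (1.0 + math.exp(-x_s - x_a))

# The mechanism is a probability, equal to 0.5 at x_s = x_a = 0
# and increasing in both covariates.
print(f_e(0.0, 0.0))  # 0.5
```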
+ +**EPOCH 0**
+**t=0**
+- observe a population of patients $(X_0^a(0),X_0^s(0))_{i=1}^N$ + +**t=1**
+- there are no interventions, hence $X_0^a(1) = X_0^a(0)$
+- the risk of observing $Y = 1$ depends only on the covariates at $t=1$ through $f_0$ and is $E[Y_0|X_0(0) = (x^s, x^a)] = f_0(x^s, x^a)$
+- the score $\rho_0$ is therefore defined as $\rho_0(x^s, x^a) = f_0(x^s, x^a)$
+- $Y_0$ is observed
+- the analyst decides a function $\rho_0$, which is retained into epoch 1. We will use initialized actions $\theta = (\theta^0, \theta^1, \theta^2)$
+
+_The model performance under non-intervention is equivalent to the performance at epoch 0_
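The epoch-0 step above amounts to fixing a parametric score; a sketch under the assumption that $\rho$ is the logistic score used later in this README (names hypothetical):

```python
import math

def rho(theta, x_s, x_a):
    """Parametric predictive score: logistic in (1, x_s, x_a) with
    coefficients theta = (theta0, theta1, theta2)."""
    z = theta[0] + theta[1] * x_s + theta[2] * x_a
    return 1.0 / (1.0 + math.exp(-z))

# With theta = (0, 1, 1), rho_0 reproduces f(x_s, x_a) exactly,
# matching "rho_0 = f_0" in the epoch-0 description.
theta0 = (0.0, 1.0, 1.0)
print(rho(theta0, 0.0, 0.0))  # 0.5
```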
+ +**EPOCH $>0$** +**t=0**
+- observe a new population of patients $(X_e^a(0),X_e^s(0))_{i=1}^N$
+- the analyst computes $\rho_{e-1}(X^s_e(0), X^a_e(0))$
+
+**t=1**
+- $X^s_e(0)$ is not interventionable and becomes $X^s_e(1)$
+- $\rho_{e-1}$ is used to inform interventions $g^a_e$ that change values $X^a_e(1) = g^a_e(\rho_{e-1}(x^s, x^a), x^a)$
+- $E[Y_e]$ is determined by the covariates $X^s_e(1), X^a_e(1)$
+- the score $\rho_e$ is defined as $\rho_e(x^s, x^a) = f_e(x^s, g^a_e(\rho_{e-1}(x^s, x^a), x^a)) \triangleq h(\rho_{e-1}(x^s, x^a))$
+- $Y_e$ is observed
+- the analyst decides a function $\rho_e$ using $X^s_e(1), X^a_e(1), Y_e$, which is retained into epoch $e+1$. We will use $\rho_e = (1 + \exp(-\theta^0 - x^s \theta^1 - x^a \theta^2))^{-1}$
+ +Then the episodes repeat
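The repeating epochs can be sketched end to end. This is only an illustration of the dynamics described above: the direction and scale of the intervention `g`, the refitting routine, and the covariate distribution are our assumptions, not the package's actual implementation:

```python
import math
import random

def f(x_s, x_a):
    # Causal mechanism: P(Y = 1 | post-intervention covariates).
    return 1.0 / (1.0 + math.exp(-x_s - x_a))

def g(score, x_a, scale=1.0):
    # Deterministic intervention on x_a: a higher score triggers a
    # larger (here, risk-lowering) intervention. Direction is assumed.
    return x_a - scale * score

def rho(theta, x_s, x_a):
    # Predictive score: logistic regression with coefficients theta.
    z = theta[0] + theta[1] * x_s + theta[2] * x_a
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(covs, ys, steps=200, lr=0.5):
    # Crude logistic-regression fit by gradient ascent on the
    # log-likelihood; a stand-in for the analyst's refitting step.
    th = [0.0, 0.0, 0.0]
    n = len(ys)
    for _ in range(steps):
        grad = [0.0, 0.0, 0.0]
        for (x_s, x_a), y in zip(covs, ys):
            err = y - rho(th, x_s, x_a)
            for j, v in enumerate((1.0, x_s, x_a)):
                grad[j] += err * v
        th = [t + lr * gj / n for t, gj in zip(th, grad)]
    return tuple(th)

def run_epochs(n_epochs=3, n_patients=200, seed=0):
    rng = random.Random(seed)
    theta = (0.0, 1.0, 1.0)  # epoch 0: rho_0 coincides with f
    for _ in range(n_epochs):
        # t=0: observe a fresh population, score it with the retained rho.
        pop0 = [(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(n_patients)]
        scores = [rho(theta, xs, xa) for xs, xa in pop0]
        # t=1: intervene on x_a only; x_s is not interventionable.
        pop1 = [(xs, g(s, xa)) for (xs, xa), s in zip(pop0, scores)]
        # Outcomes depend on the post-intervention covariates X(1).
        ys = [1 if rng.random() < f(xs, xa) else 0 for xs, xa in pop1]
        # Naive update: refit rho on this epoch's data alone.
        theta = fit_logistic(pop1, ys)
    return theta
```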
+
+## State and action spaces
+Action space: a 3-D box $[-2, 2]^3$. Actions represent the coefficients $\theta$ of a logistic regression that will be fitted on the dataset of patients.
+
+Observation space: a continuous space with values in $[0, \infty)$. States represent values of the predictive score $f_e$.
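In Gym terms these would typically be declared as `Box` spaces; a dependency-free sketch of keeping a raw 3-D action inside the box (the helper name is hypothetical):

```python
# With gym available, the action space described above would be declared as:
#   action_space = gym.spaces.Box(low=-2.0, high=2.0, shape=(3,))
# A dependency-free equivalent of clamping an action into that box:

def clip_action(theta, low=-2.0, high=2.0):
    """Clamp a raw 3-D action (logistic coefficients) into [low, high]^3."""
    return tuple(min(max(float(t), low), high) for t in theta)

print(clip_action((3.0, -5.0, 0.25)))  # (2.0, -2.0, 0.25)
```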
+
+
+
+
+## To install
+- git clone https://github.com/claudia-viaro/gym-update.git
+- cd gym-update
+- pip install gym-update
+- import gym
+- import gym_update
+- env = gym.make('update-v0')
+
+## To change version
+- change the version (e.g., to 1.0.7) in the setup.py file
+- git clone https://github.com/claudia-viaro/gym-update.git
+- cd gym-update
+- python setup.py sdist bdist_wheel
+- twine check dist/*
+- twine upload --repository-url https://upload.pypi.org/legacy/ dist/*
+
+
+
+
+%package -n python3-gym-update
+Summary:	An OpenAI Gym env for continuous control
+Provides:	python-gym-update
+BuildRequires:	python3-devel
+BuildRequires:	python3-setuptools
+BuildRequires:	python3-pip
+%description -n python3-gym-update
+# Gym-style API environment
+
+## A write-up
+[Here's](https://www.overleaf.com/project/62b89d3b150bcf81e449aeb3) the most recent write-up on the environment and the algorithms applied to it.
+
+## Comments
+
+- In general, whether we record transitions up to "Done" or update as soon as "Done" is reached, very little information is collected: "Done" is reached after only one or two transitions. A different termination condition should be specified.
+
+
+## Environment dynamics
+
+
+
+The functions used:
+- $f_e(x^s, x^a) = \mathbb{E}[Y_e|X_e(1) = (x^s, x^a)]$: causal mechanism determining the probability of $Y_e = 1$ given $X_e(1)$. We will take $f_e(x^s, x^a) = (1 + \exp(-x^s - x^a))^{-1}$
+- $g^a_e(\rho, x^a) \in \{g : [0, 1] \times \Omega \rightarrow \Omega \}$: intervention process on $X^a$ in response to a predictive score $\rho$, updating $X^a_e(0) \rightarrow X^a_e(1)$
+- $\rho_e(x^s, x^a) \in \{\rho_e : \Omega^s \times \Omega^a \rightarrow [0, 1]\}$: predictive score trained at epoch $e$
+
+
+Additional information:
+- At epoch $e$, the predictive score $\rho$ uses $X^a_e(0), X^s_e(0)$ and $Y_e$ as training data; previous epochs are ignored and $X^a_e(1), X^s_e(1)$ are not observed. The predictive score is computed at time $t=0$.
+- We allow $\rho_e$ to be an arbitrary function, but generally presume it is an estimator of $\rho_e(x^s, x^a) \approx E[Y_e|X^s_e(0) = x^s, X^a_e(0) = x^a] = f_e(x^s, g^a_e(\rho_{e-1}, x^a)) \triangleq \tilde{f}_e(x^s, x^a)$
+- for all $e$, $f_e = E[Y_e|X_e] = E[Y_e|X_e(1)]$: $Y_e$ depends on $X_e(1)$, that is, on the covariates after any potential interventions
+- a higher value of $\rho$ means a larger intervention is made (we assume $g^a_e$ is deterministic, but random-valued functions may more accurately capture the uncertainty of real-world interventions)
+
+
+
+## Naive updating
+By ‘naive’ updating we mean that a new score $\rho_e$ is fitted in each epoch and used as a drop-in replacement for the existing score $\rho_{e-1}$. This leads to estimates $\rho_e(x^s, x^a)$ that converge, as $e \rightarrow \infty$, to a setting in which $\rho_e$ accurately estimates its own effect: conceptually, $\rho_e(x^s, x^a)$ estimates the probability of $Y$ after interventions have been made on the basis of $\rho_e(x^s, x^a)$ itself.
+ +**EPOCH 0**
+**t=0**
+- observe a population of patients $(X_0^a(0),X_0^s(0))_{i=1}^N$ + +**t=1**
+- there are no interventions, hence $X_0^a(1) = X_0^a(0)$
+- the risk of observing $Y = 1$ depends only on the covariates at $t=1$ through $f_0$ and is $E[Y_0|X_0(0) = (x^s, x^a)] = f_0(x^s, x^a)$
+- the score $\rho_0$ is therefore defined as $\rho_0(x^s, x^a) = f_0(x^s, x^a)$
+- $Y_0$ is observed
+- the analyst decides a function $\rho_0$, which is retained into epoch 1. We will use initialized actions $\theta = (\theta^0, \theta^1, \theta^2)$
+
+_The model performance under non-intervention is equivalent to the performance at epoch 0_
+ +**EPOCH $>0$** +**t=0**
+- observe a new population of patients $(X_e^a(0),X_e^s(0))_{i=1}^N$
+- the analyst computes $\rho_{e-1}(X^s_e(0), X^a_e(0))$
+
+**t=1**
+- $X^s_e(0)$ is not interventionable and becomes $X^s_e(1)$
+- $\rho_{e-1}$ is used to inform interventions $g^a_e$ that change values $X^a_e(1) = g^a_e(\rho_{e-1}(x^s, x^a), x^a)$
+- $E[Y_e]$ is determined by the covariates $X^s_e(1), X^a_e(1)$
+- the score $\rho_e$ is defined as $\rho_e(x^s, x^a) = f_e(x^s, g^a_e(\rho_{e-1}(x^s, x^a), x^a)) \triangleq h(\rho_{e-1}(x^s, x^a))$
+- $Y_e$ is observed
+- the analyst decides a function $\rho_e$ using $X^s_e(1), X^a_e(1), Y_e$, which is retained into epoch $e+1$. We will use $\rho_e = (1 + \exp(-\theta^0 - x^s \theta^1 - x^a \theta^2))^{-1}$
+ +Then the episodes repeat
+
+## State and action spaces
+Action space: a 3-D box $[-2, 2]^3$. Actions represent the coefficients $\theta$ of a logistic regression that will be fitted on the dataset of patients.
+
+Observation space: a continuous space with values in $[0, \infty)$. States represent values of the predictive score $f_e$.
+
+
+
+
+## To install
+- git clone https://github.com/claudia-viaro/gym-update.git
+- cd gym-update
+- pip install gym-update
+- import gym
+- import gym_update
+- env = gym.make('update-v0')
+
+## To change version
+- change the version (e.g., to 1.0.7) in the setup.py file
+- git clone https://github.com/claudia-viaro/gym-update.git
+- cd gym-update
+- python setup.py sdist bdist_wheel
+- twine check dist/*
+- twine upload --repository-url https://upload.pypi.org/legacy/ dist/*
+
+
+
+
+%package help
+Summary:	Development documents and examples for gym-update
+Provides:	python3-gym-update-doc
+%description help
+# Gym-style API environment
+
+## A write-up
+[Here's](https://www.overleaf.com/project/62b89d3b150bcf81e449aeb3) the most recent write-up on the environment and the algorithms applied to it.
+
+## Comments
+
+- In general, whether we record transitions up to "Done" or update as soon as "Done" is reached, very little information is collected: "Done" is reached after only one or two transitions. A different termination condition should be specified.
+
+
+## Environment dynamics
+
+
+
+The functions used:
+- $f_e(x^s, x^a) = \mathbb{E}[Y_e|X_e(1) = (x^s, x^a)]$: causal mechanism determining the probability of $Y_e = 1$ given $X_e(1)$. We will take $f_e(x^s, x^a) = (1 + \exp(-x^s - x^a))^{-1}$
+- $g^a_e(\rho, x^a) \in \{g : [0, 1] \times \Omega \rightarrow \Omega \}$: intervention process on $X^a$ in response to a predictive score $\rho$, updating $X^a_e(0) \rightarrow X^a_e(1)$
+- $\rho_e(x^s, x^a) \in \{\rho_e : \Omega^s \times \Omega^a \rightarrow [0, 1]\}$: predictive score trained at epoch $e$
+
+
+Additional information:
+- At epoch $e$, the predictive score $\rho$ uses $X^a_e(0), X^s_e(0)$ and $Y_e$ as training data; previous epochs are ignored and $X^a_e(1), X^s_e(1)$ are not observed. The predictive score is computed at time $t=0$.
+- We allow $\rho_e$ to be an arbitrary function, but generally presume it is an estimator of $\rho_e(x^s, x^a) \approx E[Y_e|X^s_e(0) = x^s, X^a_e(0) = x^a] = f_e(x^s, g^a_e(\rho_{e-1}, x^a)) \triangleq \tilde{f}_e(x^s, x^a)$
+- for all $e$, $f_e = E[Y_e|X_e] = E[Y_e|X_e(1)]$: $Y_e$ depends on $X_e(1)$, that is, on the covariates after any potential interventions
+- a higher value of $\rho$ means a larger intervention is made (we assume $g^a_e$ is deterministic, but random-valued functions may more accurately capture the uncertainty of real-world interventions)
+
+
+
+## Naive updating
+By ‘naive’ updating we mean that a new score $\rho_e$ is fitted in each epoch and used as a drop-in replacement for the existing score $\rho_{e-1}$. This leads to estimates $\rho_e(x^s, x^a)$ that converge, as $e \rightarrow \infty$, to a setting in which $\rho_e$ accurately estimates its own effect: conceptually, $\rho_e(x^s, x^a)$ estimates the probability of $Y$ after interventions have been made on the basis of $\rho_e(x^s, x^a)$ itself.
+ +**EPOCH 0**
+**t=0**
+- observe a population of patients $(X_0^a(0),X_0^s(0))_{i=1}^N$ + +**t=1**
+- there are no interventions, hence $X_0^a(1) = X_0^a(0)$
+- the risk of observing $Y = 1$ depends only on the covariates at $t=1$ through $f_0$ and is $E[Y_0|X_0(0) = (x^s, x^a)] = f_0(x^s, x^a)$
+- the score $\rho_0$ is therefore defined as $\rho_0(x^s, x^a) = f_0(x^s, x^a)$
+- $Y_0$ is observed
+- the analyst decides a function $\rho_0$, which is retained into epoch 1. We will use initialized actions $\theta = (\theta^0, \theta^1, \theta^2)$
+
+_The model performance under non-intervention is equivalent to the performance at epoch 0_
+ +**EPOCH $>0$** +**t=0**
+- observe a new population of patients $(X_e^a(0),X_e^s(0))_{i=1}^N$
+- the analyst computes $\rho_{e-1}(X^s_e(0), X^a_e(0))$
+
+**t=1**
+- $X^s_e(0)$ is not interventionable and becomes $X^s_e(1)$
+- $\rho_{e-1}$ is used to inform interventions $g^a_e$ that change values $X^a_e(1) = g^a_e(\rho_{e-1}(x^s, x^a), x^a)$
+- $E[Y_e]$ is determined by the covariates $X^s_e(1), X^a_e(1)$
+- the score $\rho_e$ is defined as $\rho_e(x^s, x^a) = f_e(x^s, g^a_e(\rho_{e-1}(x^s, x^a), x^a)) \triangleq h(\rho_{e-1}(x^s, x^a))$
+- $Y_e$ is observed
+- the analyst decides a function $\rho_e$ using $X^s_e(1), X^a_e(1), Y_e$, which is retained into epoch $e+1$. We will use $\rho_e = (1 + \exp(-\theta^0 - x^s \theta^1 - x^a \theta^2))^{-1}$
+ +Then the episodes repeat
+
+## State and action spaces
+Action space: a 3-D box $[-2, 2]^3$. Actions represent the coefficients $\theta$ of a logistic regression that will be fitted on the dataset of patients.
+
+Observation space: a continuous space with values in $[0, \infty)$. States represent values of the predictive score $f_e$.
+
+
+
+
+## To install
+- git clone https://github.com/claudia-viaro/gym-update.git
+- cd gym-update
+- pip install gym-update
+- import gym
+- import gym_update
+- env = gym.make('update-v0')
+
+## To change version
+- change the version (e.g., to 1.0.7) in the setup.py file
+- git clone https://github.com/claudia-viaro/gym-update.git
+- cd gym-update
+- python setup.py sdist bdist_wheel
+- twine check dist/*
+- twine upload --repository-url https://upload.pypi.org/legacy/ dist/*
+
+
+
+
+%prep
+%autosetup -n gym-update-0.6.2
+
+%build
+%py3_build
+
+%install
+%py3_install
+install -d -m755 %{buildroot}/%{_pkgdocdir}
+if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
+if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
+if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
+if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
+pushd %{buildroot}
+if [ -d usr/lib ]; then
+	find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/lib64 ]; then
+	find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/bin ]; then
+	find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/sbin ]; then
+	find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+touch doclist.lst
+if [ -d usr/share/man ]; then
+	find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
+fi
+popd
+mv %{buildroot}/filelist.lst .
+mv %{buildroot}/doclist.lst .
+
+%files -n python3-gym-update -f filelist.lst
+%dir %{python3_sitelib}/*
+
+%files help -f doclist.lst
+%{_docdir}/*
+
+%changelog
+* Wed May 31 2023 Python_Bot - 0.6.2-1
+- Package Spec generated