authorCoprDistGit <infra@openeuler.org>2023-04-10 10:23:32 +0000
committerCoprDistGit <infra@openeuler.org>2023-04-10 10:23:32 +0000
commit50e872d5919a7406cf8971b5fd8e056c478a5ae2 (patch)
treef0f7a2823546f96fa7d875db5b84a4f3d928375d
parentaaed7d883334018cb3c12b8a6e9c4bdff838d2de (diff)
automatic import of python-dopamine-rl
-rw-r--r--.gitignore1
-rw-r--r--python-dopamine-rl.spec620
-rw-r--r--sources1
3 files changed, 622 insertions, 0 deletions
diff --git a/.gitignore b/.gitignore
index e69de29..9ad92d2 100644
--- a/.gitignore
+++ b/.gitignore
@@ -0,0 +1 @@
+/dopamine_rl-4.0.6.tar.gz
diff --git a/python-dopamine-rl.spec b/python-dopamine-rl.spec
new file mode 100644
index 0000000..652308d
--- /dev/null
+++ b/python-dopamine-rl.spec
@@ -0,0 +1,620 @@
+%global _empty_manifest_terminate_build 0
+Name: python-dopamine-rl
+Version: 4.0.6
+Release: 1
+Summary: Dopamine: A framework for flexible Reinforcement Learning research
+License: Apache-2.0
+URL: https://github.com/google/dopamine
+Source0: https://mirrors.nju.edu.cn/pypi/web/packages/ec/ec/ab07ca64802f209f7dc23a653c91015fb7459fba60866279684ded589725/dopamine_rl-4.0.6.tar.gz
+BuildArch: noarch
+
+Requires: python3-tensorflow
+Requires: python3-gin-config
+Requires: python3-absl-py
+Requires: python3-opencv-python
+Requires: python3-gym
+Requires: python3-flax
+Requires: python3-jax
+Requires: python3-jaxlib
+Requires: python3-Pillow
+Requires: python3-numpy
+Requires: python3-pygame
+Requires: python3-pandas
+Requires: python3-tf-slim
+Requires: python3-tensorflow-probability
+
+%description
+# Dopamine
+[Getting Started](#getting-started) |
+[Docs][docs] |
+[Baseline Results][baselines] |
+[Changelist](https://google.github.io/dopamine/docs/changelist)
+
+<div align="center">
+ <img src="https://google.github.io/dopamine/images/dopamine_logo.png"><br><br>
+</div>
+
+Dopamine is a research framework for fast prototyping of reinforcement learning
+algorithms. It aims to fill the need for a small, easily grokked codebase in
+which users can freely experiment with wild ideas (speculative research).
+
+Our design principles are:
+
+* _Easy experimentation_: Make it easy for new users to run benchmark
+ experiments.
+* _Flexible development_: Make it easy for new users to try out research ideas.
+* _Compact and reliable_: Provide implementations for a few, battle-tested
+ algorithms.
+* _Reproducible_: Facilitate reproducibility in results. In particular, our
+ setup follows the recommendations given by
+ [Machado et al. (2018)][machado].
+
+Dopamine supports the following agents, implemented with JAX:
+
+* DQN ([Mnih et al., 2015][dqn])
+* C51 ([Bellemare et al., 2017][c51])
+* Rainbow ([Hessel et al., 2018][rainbow])
+* IQN ([Dabney et al., 2018][iqn])
+* SAC ([Haarnoja et al., 2018][sac])
+
+For more information on the available agents, see the [docs](https://google.github.io/dopamine/docs).
+
+Many of these agents also have a TensorFlow (legacy) implementation, though
+newly added agents are likely to be JAX-only.
+
+This is not an official Google product.
+
+## Getting Started
+
+
+We provide Docker containers for using Dopamine.
+Instructions can be found [here](https://google.github.io/dopamine/docker/).
+
+Alternatively, Dopamine can be installed from source (preferred) or with pip.
+For either method, first read the prerequisites below.
+
+### Prerequisites
+
+Dopamine supports Atari and MuJoCo environments. Install the environments you
+intend to use before installing Dopamine:
+
+**Atari**
+
+1. Install the Atari ROMs following the instructions from
+[atari-py](https://github.com/openai/atari-py#roms).
+2. Run `pip install ale-py` (we recommend using a [virtual environment][virtualenv]).
+3. Run `unzip $ROM_DIR/ROMS.zip -d $ROM_DIR && ale-import-roms $ROM_DIR/ROMS`
+(replace `$ROM_DIR` with the directory you extracted the ROMs to).
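+
+The three steps above can be sketched as a small helper. This is an
+illustrative sketch, not part of Dopamine: `rom_dir` stands in for your
+`$ROM_DIR`, and the `ale-import-roms` command is assumed to be on `PATH`
+after `pip install ale-py`.
+
+```python
+import subprocess
+import zipfile
+from pathlib import Path
+
+def import_atari_roms(rom_dir: str) -> None:
+    """Extract ROMS.zip in rom_dir and register the ROMs with ale-py."""
+    rom_path = Path(rom_dir)
+    with zipfile.ZipFile(rom_path / "ROMS.zip") as archive:
+        archive.extractall(rom_path)  # unzip ROMS.zip -d $ROM_DIR
+    # Register the extracted ROMs with the Arcade Learning Environment.
+    subprocess.run(["ale-import-roms", str(rom_path / "ROMS")], check=True)
+```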
+
+**Mujoco**
+
+1. Install MuJoCo and get a license
+[here](https://github.com/openai/mujoco-py#install-mujoco).
+2. Run `pip install mujoco-py` (we recommend using a
+[virtual environment][virtualenv]).
+
+### Installing from Source
+
+
+The most common way to use Dopamine is to install it from source and modify
+the source code directly:
+
+```
+git clone https://github.com/google/dopamine
+```
+
+After cloning, install dependencies:
+
+```
+pip install -r dopamine/requirements.txt
+```
+
+Dopamine supports TensorFlow (legacy) and JAX (actively maintained) agents.
+See the [TensorFlow documentation](https://www.tensorflow.org/install) for
+more information on installing TensorFlow.
+
+Note: We recommend using a [virtual environment][virtualenv] when working with Dopamine.
+
+### Installing with Pip
+
+Note: We strongly recommend installing from source.
+
+Installing with pip is simple, but Dopamine is designed to be modified
+directly, so install from source if you plan to write your own experiments.
+
+```
+pip install dopamine-rl
+```
+
+### Running tests
+
+You can verify that the installation was successful by running the following
+from the Dopamine root directory:
+
+```
+export PYTHONPATH=$PYTHONPATH:$PWD
+python -m tests.dopamine.atari_init_test
+```
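+
+As a lighter-weight sanity check than the full test suite, you can verify
+that the expected modules resolve on your `PYTHONPATH`. This helper is an
+illustration, not part of Dopamine, and the module names in the comment are
+examples; adjust them to the agents you installed.
+
+```python
+from importlib.util import find_spec
+
+def check_modules(names):
+    """Map each module name to True if it can be found on sys.path."""
+    return {name: find_spec(name) is not None for name in names}
+
+# Example (run from the Dopamine root after setting PYTHONPATH):
+# check_modules(["dopamine", "gym"])
+```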
+
+## Next Steps
+
+View the [docs][docs] for more information on training agents.
+
+We supply [baselines][baselines] for each Dopamine agent.
+
+We also provide a set of [Colaboratory notebooks](https://github.com/google/dopamine/tree/master/dopamine/colab)
+which demonstrate how to use Dopamine.
+
+## References
+
+[Bellemare et al., *The Arcade Learning Environment: An evaluation platform for
+general agents*. Journal of Artificial Intelligence Research, 2013.][ale]
+
+[Machado et al., *Revisiting the Arcade Learning Environment: Evaluation
+Protocols and Open Problems for General Agents*, Journal of Artificial
+Intelligence Research, 2018.][machado]
+
+[Hessel et al., *Rainbow: Combining Improvements in Deep Reinforcement Learning*.
+Proceedings of the AAAI Conference on Artificial Intelligence, 2018.][rainbow]
+
+[Mnih et al., *Human-level Control through Deep Reinforcement Learning*. Nature,
+2015.][dqn]
+
+[Schaul et al., *Prioritized Experience Replay*. Proceedings of the International
+Conference on Learning Representations, 2016.][prioritized_replay]
+
+[Haarnoja et al., *Soft Actor-Critic Algorithms and Applications*,
+arXiv preprint arXiv:1812.05905, 2018.][sac]
+
+## Giving credit
+
+If you use Dopamine in your work, we ask that you cite our
+[white paper][dopamine_paper]. Here is an example BibTeX entry:
+
+```
+@article{castro18dopamine,
+ author = {Pablo Samuel Castro and
+ Subhodeep Moitra and
+ Carles Gelada and
+ Saurabh Kumar and
+ Marc G. Bellemare},
+ title = {Dopamine: {A} {R}esearch {F}ramework for {D}eep {R}einforcement {L}earning},
+ year = {2018},
+ url = {http://arxiv.org/abs/1812.06110},
+ archivePrefix = {arXiv}
+}
+```
+
+
+
+[docs]: https://google.github.io/dopamine/docs/
+[baselines]: https://google.github.io/dopamine/baselines
+[machado]: https://jair.org/index.php/jair/article/view/11182
+[ale]: https://jair.org/index.php/jair/article/view/10819
+[dqn]: https://storage.googleapis.com/deepmind-media/dqn/DQNNaturePaper.pdf
+[a3c]: http://proceedings.mlr.press/v48/mniha16.html
+[prioritized_replay]: https://arxiv.org/abs/1511.05952
+[c51]: http://proceedings.mlr.press/v70/bellemare17a.html
+[rainbow]: https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/download/17204/16680
+[iqn]: https://arxiv.org/abs/1806.06923
+[sac]: https://arxiv.org/abs/1812.05905
+[dopamine_paper]: https://arxiv.org/abs/1812.06110
+[virtualenv]: https://docs.python.org/3/library/venv.html#creating-virtual-environments
+
+
+
+
+%package -n python3-dopamine-rl
+Summary: Dopamine: A framework for flexible Reinforcement Learning research
+Provides: python-dopamine-rl
+BuildRequires: python3-devel
+BuildRequires: python3-setuptools
+BuildRequires: python3-pip
+%description -n python3-dopamine-rl
+# Dopamine
+[Getting Started](#getting-started) |
+[Docs][docs] |
+[Baseline Results][baselines] |
+[Changelist](https://google.github.io/dopamine/docs/changelist)
+
+<div align="center">
+ <img src="https://google.github.io/dopamine/images/dopamine_logo.png"><br><br>
+</div>
+
+Dopamine is a research framework for fast prototyping of reinforcement learning
+algorithms. It aims to fill the need for a small, easily grokked codebase in
+which users can freely experiment with wild ideas (speculative research).
+
+Our design principles are:
+
+* _Easy experimentation_: Make it easy for new users to run benchmark
+ experiments.
+* _Flexible development_: Make it easy for new users to try out research ideas.
+* _Compact and reliable_: Provide implementations for a few, battle-tested
+ algorithms.
+* _Reproducible_: Facilitate reproducibility in results. In particular, our
+ setup follows the recommendations given by
+ [Machado et al. (2018)][machado].
+
+Dopamine supports the following agents, implemented with JAX:
+
+* DQN ([Mnih et al., 2015][dqn])
+* C51 ([Bellemare et al., 2017][c51])
+* Rainbow ([Hessel et al., 2018][rainbow])
+* IQN ([Dabney et al., 2018][iqn])
+* SAC ([Haarnoja et al., 2018][sac])
+
+For more information on the available agents, see the [docs](https://google.github.io/dopamine/docs).
+
+Many of these agents also have a TensorFlow (legacy) implementation, though
+newly added agents are likely to be JAX-only.
+
+This is not an official Google product.
+
+## Getting Started
+
+
+We provide Docker containers for using Dopamine.
+Instructions can be found [here](https://google.github.io/dopamine/docker/).
+
+Alternatively, Dopamine can be installed from source (preferred) or with pip.
+For either method, first read the prerequisites below.
+
+### Prerequisites
+
+Dopamine supports Atari and MuJoCo environments. Install the environments you
+intend to use before installing Dopamine:
+
+**Atari**
+
+1. Install the Atari ROMs following the instructions from
+[atari-py](https://github.com/openai/atari-py#roms).
+2. Run `pip install ale-py` (we recommend using a [virtual environment][virtualenv]).
+3. Run `unzip $ROM_DIR/ROMS.zip -d $ROM_DIR && ale-import-roms $ROM_DIR/ROMS`
+(replace `$ROM_DIR` with the directory you extracted the ROMs to).
+
+**Mujoco**
+
+1. Install MuJoCo and get a license
+[here](https://github.com/openai/mujoco-py#install-mujoco).
+2. Run `pip install mujoco-py` (we recommend using a
+[virtual environment][virtualenv]).
+
+### Installing from Source
+
+
+The most common way to use Dopamine is to install it from source and modify
+the source code directly:
+
+```
+git clone https://github.com/google/dopamine
+```
+
+After cloning, install dependencies:
+
+```
+pip install -r dopamine/requirements.txt
+```
+
+Dopamine supports TensorFlow (legacy) and JAX (actively maintained) agents.
+See the [TensorFlow documentation](https://www.tensorflow.org/install) for
+more information on installing TensorFlow.
+
+Note: We recommend using a [virtual environment][virtualenv] when working with Dopamine.
+
+### Installing with Pip
+
+Note: We strongly recommend installing from source.
+
+Installing with pip is simple, but Dopamine is designed to be modified
+directly, so install from source if you plan to write your own experiments.
+
+```
+pip install dopamine-rl
+```
+
+### Running tests
+
+You can verify that the installation was successful by running the following
+from the Dopamine root directory:
+
+```
+export PYTHONPATH=$PYTHONPATH:$PWD
+python -m tests.dopamine.atari_init_test
+```
+
+## Next Steps
+
+View the [docs][docs] for more information on training agents.
+
+We supply [baselines][baselines] for each Dopamine agent.
+
+We also provide a set of [Colaboratory notebooks](https://github.com/google/dopamine/tree/master/dopamine/colab)
+which demonstrate how to use Dopamine.
+
+## References
+
+[Bellemare et al., *The Arcade Learning Environment: An evaluation platform for
+general agents*. Journal of Artificial Intelligence Research, 2013.][ale]
+
+[Machado et al., *Revisiting the Arcade Learning Environment: Evaluation
+Protocols and Open Problems for General Agents*, Journal of Artificial
+Intelligence Research, 2018.][machado]
+
+[Hessel et al., *Rainbow: Combining Improvements in Deep Reinforcement Learning*.
+Proceedings of the AAAI Conference on Artificial Intelligence, 2018.][rainbow]
+
+[Mnih et al., *Human-level Control through Deep Reinforcement Learning*. Nature,
+2015.][dqn]
+
+[Schaul et al., *Prioritized Experience Replay*. Proceedings of the International
+Conference on Learning Representations, 2016.][prioritized_replay]
+
+[Haarnoja et al., *Soft Actor-Critic Algorithms and Applications*,
+arXiv preprint arXiv:1812.05905, 2018.][sac]
+
+## Giving credit
+
+If you use Dopamine in your work, we ask that you cite our
+[white paper][dopamine_paper]. Here is an example BibTeX entry:
+
+```
+@article{castro18dopamine,
+ author = {Pablo Samuel Castro and
+ Subhodeep Moitra and
+ Carles Gelada and
+ Saurabh Kumar and
+ Marc G. Bellemare},
+ title = {Dopamine: {A} {R}esearch {F}ramework for {D}eep {R}einforcement {L}earning},
+ year = {2018},
+ url = {http://arxiv.org/abs/1812.06110},
+ archivePrefix = {arXiv}
+}
+```
+
+
+
+[docs]: https://google.github.io/dopamine/docs/
+[baselines]: https://google.github.io/dopamine/baselines
+[machado]: https://jair.org/index.php/jair/article/view/11182
+[ale]: https://jair.org/index.php/jair/article/view/10819
+[dqn]: https://storage.googleapis.com/deepmind-media/dqn/DQNNaturePaper.pdf
+[a3c]: http://proceedings.mlr.press/v48/mniha16.html
+[prioritized_replay]: https://arxiv.org/abs/1511.05952
+[c51]: http://proceedings.mlr.press/v70/bellemare17a.html
+[rainbow]: https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/download/17204/16680
+[iqn]: https://arxiv.org/abs/1806.06923
+[sac]: https://arxiv.org/abs/1812.05905
+[dopamine_paper]: https://arxiv.org/abs/1812.06110
+[virtualenv]: https://docs.python.org/3/library/venv.html#creating-virtual-environments
+
+
+
+
+%package help
+Summary: Development documents and examples for dopamine-rl
+Provides: python3-dopamine-rl-doc
+%description help
+# Dopamine
+[Getting Started](#getting-started) |
+[Docs][docs] |
+[Baseline Results][baselines] |
+[Changelist](https://google.github.io/dopamine/docs/changelist)
+
+<div align="center">
+ <img src="https://google.github.io/dopamine/images/dopamine_logo.png"><br><br>
+</div>
+
+Dopamine is a research framework for fast prototyping of reinforcement learning
+algorithms. It aims to fill the need for a small, easily grokked codebase in
+which users can freely experiment with wild ideas (speculative research).
+
+Our design principles are:
+
+* _Easy experimentation_: Make it easy for new users to run benchmark
+ experiments.
+* _Flexible development_: Make it easy for new users to try out research ideas.
+* _Compact and reliable_: Provide implementations for a few, battle-tested
+ algorithms.
+* _Reproducible_: Facilitate reproducibility in results. In particular, our
+ setup follows the recommendations given by
+ [Machado et al. (2018)][machado].
+
+Dopamine supports the following agents, implemented with JAX:
+
+* DQN ([Mnih et al., 2015][dqn])
+* C51 ([Bellemare et al., 2017][c51])
+* Rainbow ([Hessel et al., 2018][rainbow])
+* IQN ([Dabney et al., 2018][iqn])
+* SAC ([Haarnoja et al., 2018][sac])
+
+For more information on the available agents, see the [docs](https://google.github.io/dopamine/docs).
+
+Many of these agents also have a TensorFlow (legacy) implementation, though
+newly added agents are likely to be JAX-only.
+
+This is not an official Google product.
+
+## Getting Started
+
+
+We provide Docker containers for using Dopamine.
+Instructions can be found [here](https://google.github.io/dopamine/docker/).
+
+Alternatively, Dopamine can be installed from source (preferred) or with pip.
+For either method, first read the prerequisites below.
+
+### Prerequisites
+
+Dopamine supports Atari and MuJoCo environments. Install the environments you
+intend to use before installing Dopamine:
+
+**Atari**
+
+1. Install the Atari ROMs following the instructions from
+[atari-py](https://github.com/openai/atari-py#roms).
+2. Run `pip install ale-py` (we recommend using a [virtual environment][virtualenv]).
+3. Run `unzip $ROM_DIR/ROMS.zip -d $ROM_DIR && ale-import-roms $ROM_DIR/ROMS`
+(replace `$ROM_DIR` with the directory you extracted the ROMs to).
+
+**Mujoco**
+
+1. Install MuJoCo and get a license
+[here](https://github.com/openai/mujoco-py#install-mujoco).
+2. Run `pip install mujoco-py` (we recommend using a
+[virtual environment][virtualenv]).
+
+### Installing from Source
+
+
+The most common way to use Dopamine is to install it from source and modify
+the source code directly:
+
+```
+git clone https://github.com/google/dopamine
+```
+
+After cloning, install dependencies:
+
+```
+pip install -r dopamine/requirements.txt
+```
+
+Dopamine supports TensorFlow (legacy) and JAX (actively maintained) agents.
+See the [TensorFlow documentation](https://www.tensorflow.org/install) for
+more information on installing TensorFlow.
+
+Note: We recommend using a [virtual environment][virtualenv] when working with Dopamine.
+
+### Installing with Pip
+
+Note: We strongly recommend installing from source.
+
+Installing with pip is simple, but Dopamine is designed to be modified
+directly, so install from source if you plan to write your own experiments.
+
+```
+pip install dopamine-rl
+```
+
+### Running tests
+
+You can verify that the installation was successful by running the following
+from the Dopamine root directory:
+
+```
+export PYTHONPATH=$PYTHONPATH:$PWD
+python -m tests.dopamine.atari_init_test
+```
+
+## Next Steps
+
+View the [docs][docs] for more information on training agents.
+
+We supply [baselines][baselines] for each Dopamine agent.
+
+We also provide a set of [Colaboratory notebooks](https://github.com/google/dopamine/tree/master/dopamine/colab)
+which demonstrate how to use Dopamine.
+
+## References
+
+[Bellemare et al., *The Arcade Learning Environment: An evaluation platform for
+general agents*. Journal of Artificial Intelligence Research, 2013.][ale]
+
+[Machado et al., *Revisiting the Arcade Learning Environment: Evaluation
+Protocols and Open Problems for General Agents*, Journal of Artificial
+Intelligence Research, 2018.][machado]
+
+[Hessel et al., *Rainbow: Combining Improvements in Deep Reinforcement Learning*.
+Proceedings of the AAAI Conference on Artificial Intelligence, 2018.][rainbow]
+
+[Mnih et al., *Human-level Control through Deep Reinforcement Learning*. Nature,
+2015.][dqn]
+
+[Schaul et al., *Prioritized Experience Replay*. Proceedings of the International
+Conference on Learning Representations, 2016.][prioritized_replay]
+
+[Haarnoja et al., *Soft Actor-Critic Algorithms and Applications*,
+arXiv preprint arXiv:1812.05905, 2018.][sac]
+
+## Giving credit
+
+If you use Dopamine in your work, we ask that you cite our
+[white paper][dopamine_paper]. Here is an example BibTeX entry:
+
+```
+@article{castro18dopamine,
+ author = {Pablo Samuel Castro and
+ Subhodeep Moitra and
+ Carles Gelada and
+ Saurabh Kumar and
+ Marc G. Bellemare},
+ title = {Dopamine: {A} {R}esearch {F}ramework for {D}eep {R}einforcement {L}earning},
+ year = {2018},
+ url = {http://arxiv.org/abs/1812.06110},
+ archivePrefix = {arXiv}
+}
+```
+
+
+
+[docs]: https://google.github.io/dopamine/docs/
+[baselines]: https://google.github.io/dopamine/baselines
+[machado]: https://jair.org/index.php/jair/article/view/11182
+[ale]: https://jair.org/index.php/jair/article/view/10819
+[dqn]: https://storage.googleapis.com/deepmind-media/dqn/DQNNaturePaper.pdf
+[a3c]: http://proceedings.mlr.press/v48/mniha16.html
+[prioritized_replay]: https://arxiv.org/abs/1511.05952
+[c51]: http://proceedings.mlr.press/v70/bellemare17a.html
+[rainbow]: https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/download/17204/16680
+[iqn]: https://arxiv.org/abs/1806.06923
+[sac]: https://arxiv.org/abs/1812.05905
+[dopamine_paper]: https://arxiv.org/abs/1812.06110
+[virtualenv]: https://docs.python.org/3/library/venv.html#creating-virtual-environments
+
+
+
+
+%prep
+%autosetup -n dopamine-rl-4.0.6
+
+%build
+%py3_build
+
+%install
+%py3_install
+install -d -m755 %{buildroot}/%{_pkgdocdir}
+if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
+if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
+if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
+if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
+pushd %{buildroot}
+if [ -d usr/lib ]; then
+ find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/lib64 ]; then
+ find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/bin ]; then
+ find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/sbin ]; then
+ find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+touch doclist.lst
+if [ -d usr/share/man ]; then
+ find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
+fi
+popd
+mv %{buildroot}/filelist.lst .
+mv %{buildroot}/doclist.lst .
+
+%files -n python3-dopamine-rl -f filelist.lst
+%dir %{python3_sitelib}/*
+
+%files help -f doclist.lst
+%{_docdir}/*
+
+%changelog
+* Mon Apr 10 2023 Python_Bot <Python_Bot@openeuler.org> - 4.0.6-1
+- Package Spec generated
diff --git a/sources b/sources
new file mode 100644
index 0000000..16be310
--- /dev/null
+++ b/sources
@@ -0,0 +1 @@
+358f021fdfedb26f47f313ab2de9a71b dopamine_rl-4.0.6.tar.gz