%global _empty_manifest_terminate_build 0

Name:           python-stable-baselines
Version:        2.10.2
Release:        1
Summary:        A fork of OpenAI Baselines, implementations of reinforcement learning algorithms.
License:        MIT
URL:            https://github.com/hill-a/stable-baselines
Source0:        https://mirrors.nju.edu.cn/pypi/web/packages/ba/e5/b59ee753d93632fd28d15acaf5043e8cd1d14385191f0ab843f277c00a5d/stable_baselines-2.10.2.tar.gz
BuildArch:      noarch

Requires:       python3-gym[atari,classic_control]
Requires:       python3-scipy
Requires:       python3-joblib
Requires:       python3-cloudpickle
Requires:       python3-opencv-python
Requires:       python3-numpy
Requires:       python3-pandas
Requires:       python3-matplotlib
Requires:       python3-sphinx
Requires:       python3-sphinx-autobuild
Requires:       python3-sphinx-rtd-theme
Requires:       python3-mpi4py
Requires:       python3-pytest
Requires:       python3-pytest-cov
Requires:       python3-pytest-env
Requires:       python3-pytest-xdist
Requires:       python3-pytype

%description
**WARNING: This package is in maintenance mode, please use [Stable-Baselines3 (SB3)](https://github.com/DLR-RM/stable-baselines3) for an up-to-date version. You can find a [migration guide](https://stable-baselines3.readthedocs.io/en/master/guide/migration.html) in SB3 documentation.**

[![Build Status](https://travis-ci.com/hill-a/stable-baselines.svg?branch=master)](https://travis-ci.com/hill-a/stable-baselines)
[![Documentation Status](https://readthedocs.org/projects/stable-baselines/badge/?version=master)](https://stable-baselines.readthedocs.io/en/master/?badge=master)
[![Codacy Badge](https://api.codacy.com/project/badge/Grade/3bcb4cd6d76a4270acb16b5fe6dd9efa)](https://www.codacy.com/app/baselines_janitors/stable-baselines?utm_source=github.com&utm_medium=referral&utm_content=hill-a/stable-baselines&utm_campaign=Badge_Grade)
[![Codacy Badge](https://api.codacy.com/project/badge/Coverage/3bcb4cd6d76a4270acb16b5fe6dd9efa)](https://www.codacy.com/app/baselines_janitors/stable-baselines?utm_source=github.com&utm_medium=referral&utm_content=hill-a/stable-baselines&utm_campaign=Badge_Coverage)

# Stable Baselines

Stable Baselines is a set of improved implementations of reinforcement learning algorithms based on OpenAI [Baselines](https://github.com/openai/baselines/).

These algorithms will make it easier for the research community and industry to replicate, refine, and identify new ideas, and will create good baselines to build projects on top of. We expect these tools will be used as a base around which new ideas can be added, and as a tool for comparing a new approach against existing ones. We also hope that the simplicity of these tools will allow beginners to experiment with a more advanced toolset, without being buried in implementation details.

## Main differences with OpenAI Baselines

This toolset is a fork of OpenAI Baselines, with a major structural refactoring, and code cleanups:

- Unified structure for all algorithms
- PEP8 compliant (unified code style)
- Documented functions and classes
- More tests & more code coverage
- Additional algorithms: SAC and TD3 (+ HER support for DQN, DDPG, SAC and TD3)

## Links

Repository: https://github.com/hill-a/stable-baselines

Medium article: https://medium.com/@araffin/df87c4b2fc82

Documentation: https://stable-baselines.readthedocs.io/en/master/

RL Baselines Zoo: https://github.com/araffin/rl-baselines-zoo

## Quick example

Most of the library tries to follow a sklearn-like syntax for the Reinforcement Learning algorithms using Gym.
Here is a quick example of how to train and run PPO2 on a cartpole environment:

```python
import gym

from stable_baselines.common.policies import MlpPolicy
from stable_baselines.common.vec_env import DummyVecEnv
from stable_baselines import PPO2

env = gym.make('CartPole-v1')
# Optional: PPO2 requires a vectorized environment to run
# the env is now wrapped automatically when passing it to the constructor
# env = DummyVecEnv([lambda: env])

model = PPO2(MlpPolicy, env, verbose=1)
model.learn(total_timesteps=10000)

obs = env.reset()
for i in range(1000):
    action, _states = model.predict(obs)
    obs, rewards, dones, info = env.step(action)
    env.render()
```

Or just train a model with a one-liner if [the environment is registered in Gym](https://github.com/openai/gym/wiki/Environments) and if [the policy is registered](https://stable-baselines.readthedocs.io/en/master/guide/custom_policy.html):

```python
from stable_baselines import PPO2

model = PPO2('MlpPolicy', 'CartPole-v1').learn(10000)
```

%package -n python3-stable-baselines
Summary:        A fork of OpenAI Baselines, implementations of reinforcement learning algorithms.
Provides:       python-stable-baselines
BuildRequires:  python3-devel
BuildRequires:  python3-setuptools
BuildRequires:  python3-pip

%description -n python3-stable-baselines
**WARNING: This package is in maintenance mode, please use [Stable-Baselines3 (SB3)](https://github.com/DLR-RM/stable-baselines3) for an up-to-date version. You can find a [migration guide](https://stable-baselines3.readthedocs.io/en/master/guide/migration.html) in SB3 documentation.**

[![Build Status](https://travis-ci.com/hill-a/stable-baselines.svg?branch=master)](https://travis-ci.com/hill-a/stable-baselines)
[![Documentation Status](https://readthedocs.org/projects/stable-baselines/badge/?version=master)](https://stable-baselines.readthedocs.io/en/master/?badge=master)
[![Codacy Badge](https://api.codacy.com/project/badge/Grade/3bcb4cd6d76a4270acb16b5fe6dd9efa)](https://www.codacy.com/app/baselines_janitors/stable-baselines?utm_source=github.com&utm_medium=referral&utm_content=hill-a/stable-baselines&utm_campaign=Badge_Grade)
[![Codacy Badge](https://api.codacy.com/project/badge/Coverage/3bcb4cd6d76a4270acb16b5fe6dd9efa)](https://www.codacy.com/app/baselines_janitors/stable-baselines?utm_source=github.com&utm_medium=referral&utm_content=hill-a/stable-baselines&utm_campaign=Badge_Coverage)

# Stable Baselines

Stable Baselines is a set of improved implementations of reinforcement learning algorithms based on OpenAI [Baselines](https://github.com/openai/baselines/).

These algorithms will make it easier for the research community and industry to replicate, refine, and identify new ideas, and will create good baselines to build projects on top of. We expect these tools will be used as a base around which new ideas can be added, and as a tool for comparing a new approach against existing ones. We also hope that the simplicity of these tools will allow beginners to experiment with a more advanced toolset, without being buried in implementation details.
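To illustrate that simplicity, here is a minimal sketch of persisting a trained agent and loading it back later. It assumes the `model.save()` / `PPO2.load()` persistence methods of Stable Baselines 2.x; the file name is purely illustrative:

```python
from stable_baselines import PPO2

# Train briefly on a registered Gym environment
model = PPO2('MlpPolicy', 'CartPole-v1', verbose=0)
model.learn(total_timesteps=10000)

# Persist the trained parameters to disk (file name is illustrative)
model.save('ppo2_cartpole')

# Later, reload the agent without retraining
loaded_model = PPO2.load('ppo2_cartpole')
```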
## Main differences with OpenAI Baselines

This toolset is a fork of OpenAI Baselines, with a major structural refactoring, and code cleanups:

- Unified structure for all algorithms
- PEP8 compliant (unified code style)
- Documented functions and classes
- More tests & more code coverage
- Additional algorithms: SAC and TD3 (+ HER support for DQN, DDPG, SAC and TD3)

## Links

Repository: https://github.com/hill-a/stable-baselines

Medium article: https://medium.com/@araffin/df87c4b2fc82

Documentation: https://stable-baselines.readthedocs.io/en/master/

RL Baselines Zoo: https://github.com/araffin/rl-baselines-zoo

## Quick example

Most of the library tries to follow a sklearn-like syntax for the Reinforcement Learning algorithms using Gym.

Here is a quick example of how to train and run PPO2 on a cartpole environment:

```python
import gym

from stable_baselines.common.policies import MlpPolicy
from stable_baselines.common.vec_env import DummyVecEnv
from stable_baselines import PPO2

env = gym.make('CartPole-v1')
# Optional: PPO2 requires a vectorized environment to run
# the env is now wrapped automatically when passing it to the constructor
# env = DummyVecEnv([lambda: env])

model = PPO2(MlpPolicy, env, verbose=1)
model.learn(total_timesteps=10000)

obs = env.reset()
for i in range(1000):
    action, _states = model.predict(obs)
    obs, rewards, dones, info = env.step(action)
    env.render()
```

Or just train a model with a one-liner if [the environment is registered in Gym](https://github.com/openai/gym/wiki/Environments) and if [the policy is registered](https://stable-baselines.readthedocs.io/en/master/guide/custom_policy.html):

```python
from stable_baselines import PPO2

model = PPO2('MlpPolicy', 'CartPole-v1').learn(10000)
```

%package help
Summary:        Development documents and examples for stable-baselines
Provides:       python3-stable-baselines-doc

%description help
**WARNING: This package is in maintenance mode, please use [Stable-Baselines3 (SB3)](https://github.com/DLR-RM/stable-baselines3) for an up-to-date version. You can find a [migration guide](https://stable-baselines3.readthedocs.io/en/master/guide/migration.html) in SB3 documentation.**

[![Build Status](https://travis-ci.com/hill-a/stable-baselines.svg?branch=master)](https://travis-ci.com/hill-a/stable-baselines)
[![Documentation Status](https://readthedocs.org/projects/stable-baselines/badge/?version=master)](https://stable-baselines.readthedocs.io/en/master/?badge=master)
[![Codacy Badge](https://api.codacy.com/project/badge/Grade/3bcb4cd6d76a4270acb16b5fe6dd9efa)](https://www.codacy.com/app/baselines_janitors/stable-baselines?utm_source=github.com&utm_medium=referral&utm_content=hill-a/stable-baselines&utm_campaign=Badge_Grade)
[![Codacy Badge](https://api.codacy.com/project/badge/Coverage/3bcb4cd6d76a4270acb16b5fe6dd9efa)](https://www.codacy.com/app/baselines_janitors/stable-baselines?utm_source=github.com&utm_medium=referral&utm_content=hill-a/stable-baselines&utm_campaign=Badge_Coverage)

# Stable Baselines

Stable Baselines is a set of improved implementations of reinforcement learning algorithms based on OpenAI [Baselines](https://github.com/openai/baselines/).

These algorithms will make it easier for the research community and industry to replicate, refine, and identify new ideas, and will create good baselines to build projects on top of. We expect these tools will be used as a base around which new ideas can be added, and as a tool for comparing a new approach against existing ones.
We also hope that the simplicity of these tools will allow beginners to experiment with a more advanced toolset, without being buried in implementation details.

## Main differences with OpenAI Baselines

This toolset is a fork of OpenAI Baselines, with a major structural refactoring, and code cleanups:

- Unified structure for all algorithms
- PEP8 compliant (unified code style)
- Documented functions and classes
- More tests & more code coverage
- Additional algorithms: SAC and TD3 (+ HER support for DQN, DDPG, SAC and TD3)

## Links

Repository: https://github.com/hill-a/stable-baselines

Medium article: https://medium.com/@araffin/df87c4b2fc82

Documentation: https://stable-baselines.readthedocs.io/en/master/

RL Baselines Zoo: https://github.com/araffin/rl-baselines-zoo

## Quick example

Most of the library tries to follow a sklearn-like syntax for the Reinforcement Learning algorithms using Gym.

Here is a quick example of how to train and run PPO2 on a cartpole environment:

```python
import gym

from stable_baselines.common.policies import MlpPolicy
from stable_baselines.common.vec_env import DummyVecEnv
from stable_baselines import PPO2

env = gym.make('CartPole-v1')
# Optional: PPO2 requires a vectorized environment to run
# the env is now wrapped automatically when passing it to the constructor
# env = DummyVecEnv([lambda: env])

model = PPO2(MlpPolicy, env, verbose=1)
model.learn(total_timesteps=10000)

obs = env.reset()
for i in range(1000):
    action, _states = model.predict(obs)
    obs, rewards, dones, info = env.step(action)
    env.render()
```

Or just train a model with a one-liner if [the environment is registered in Gym](https://github.com/openai/gym/wiki/Environments) and if [the policy is registered](https://stable-baselines.readthedocs.io/en/master/guide/custom_policy.html):

```python
from stable_baselines import PPO2

model = PPO2('MlpPolicy', 'CartPole-v1').learn(10000)
```

%prep
%autosetup -n stable-baselines-2.10.2

%build
%py3_build

%install
%py3_install
install -d -m755 %{buildroot}/%{_pkgdocdir}
if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
pushd %{buildroot}
if [ -d usr/lib ]; then
    find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/lib64 ]; then
    find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/bin ]; then
    find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/sbin ]; then
    find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
fi
touch doclist.lst
if [ -d usr/share/man ]; then
    find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
fi
popd
mv %{buildroot}/filelist.lst .
mv %{buildroot}/doclist.lst .

%files -n python3-stable-baselines -f filelist.lst
%dir %{python3_sitelib}/*

%files help -f doclist.lst
%{_docdir}/*

%changelog
* Tue Apr 25 2023 Python_Bot - 2.10.2-1
- Package Spec generated