-rw-r--r--  .gitignore         |   1
-rw-r--r--  python-fastrl.spec | 422
-rw-r--r--  sources            |   1
3 files changed, 424 insertions(+), 0 deletions(-)
diff --git a/.gitignore b/.gitignore
index e69de29..ad7bede 100644
--- a/.gitignore
+++ b/.gitignore
@@ -0,0 +1 @@
+/fastrl-0.0.47.tar.gz
diff --git a/python-fastrl.spec b/python-fastrl.spec
new file mode 100644
index 0000000..72e6b13
--- /dev/null
+++ b/python-fastrl.spec
@@ -0,0 +1,422 @@
+%global _empty_manifest_terminate_build 0
+Name: python-fastrl
+Version: 0.0.47
+Release: 1
+Summary: fastrl is a reinforcement learning library that extends Fastai. This project is not affiliated with fastai or Jeremy Howard.
+License: Apache Software License 2.0
+URL: https://github.com/josiahls/fastrl/tree/main/
+Source0: https://mirrors.nju.edu.cn/pypi/web/packages/10/2e/0984dba8ba770cf443b16ee9ffec1b07daf26e578d43bb3a9c0b73b7d943/fastrl-0.0.47.tar.gz
+BuildArch: noarch
+
+Requires: python3-pip
+Requires: python3-packaging
+Requires: python3-torch
+Requires: python3-torchdata
+Requires: python3-gym
+Requires: python3-pyopengl
+Requires: python3-pyglet
+Requires: python3-tensorboard
+Requires: python3-pygame
+Requires: python3-pandas
+Requires: python3-scipy
+Requires: python3-sklearn
+Requires: python3-fastcore
+Requires: python3-fastprogress
+Requires: python3-nbformat
+Requires: python3-gym[all]
+Requires: python3-jupyterlab
+Requires: python3-nbdev
+Requires: python3-pre-commit
+Requires: python3-ipywidgets
+Requires: python3-moviepy
+Requires: python3-pygifsicle
+Requires: python3-aquirdturtle-collapsible-headings
+Requires: python3-plotly
+Requires: python3-matplotlib-inline
+Requires: python3-wheel
+Requires: python3-twine
+Requires: python3-fastdownload
+Requires: python3-watchdog[watchmedo]
+Requires: python3-graphviz
+Requires: python3-typing-extensions
+Requires: python3-spacy
+
+%description
+<!-- WARNING: THIS FILE WAS AUTOGENERATED! DO NOT EDIT! -->
+[![CI
+Status](https://github.com/josiahls/fastrl/workflows/Fastrl%20Testing/badge.svg)](https://github.com/josiahls/fastrl/actions?query=workflow%3A%22Fastrl+Testing%22)
+[![pypi fastrl
+version](https://img.shields.io/pypi/v/fastrl.svg)](https://pypi.python.org/pypi/fastrl)
+[![Docker Image
+Latest](https://img.shields.io/docker/v/josiahls/fastrl?label=Docker&sort=date.png)](https://hub.docker.com/repository/docker/josiahls/fastrl)
+[![Docker Image-Dev
+Latest](https://img.shields.io/docker/v/josiahls/fastrl-dev?label=Docker%20Dev&sort=date.png)](https://hub.docker.com/repository/docker/josiahls/fastrl-dev)
+[![fastrl python
+compatibility](https://img.shields.io/pypi/pyversions/fastrl.svg)](https://pypi.python.org/pypi/fastrl)
+[![fastrl
+license](https://img.shields.io/pypi/l/fastrl.svg)](https://pypi.python.org/pypi/fastrl)
+> Warning: This is in alpha, and so uses the latest torch and torchdata
+> (torchdata in particular). The base API, while semi-stable, might
+> change in future versions, so there is no promise of backward
+> compatibility. For the time being, it is best to hard-pin versions of
+> the library.
+> Warning: Even before fastrl==2.0.0, all models should converge
+> reasonably fast; however, the HRL models `DADS` and `DIAYN` will need
+> re-balancing and some extra features that the respective authors used.
+# Overview
+Fastai has been amazing for computer vision and tabular learning; one
+would wish the same were true for RL. The purpose of this repo is to
+provide a framework that is as easy as possible to get started with,
+but is also designed for testing new agents.
+This version of fastrl is essentially a wrapper around
+[torchdata](https://github.com/pytorch/data).
+It is built around 4 pipeline concepts (half of which come from fastai):
+- DataLoading/DataBlock pipelines
+- Agent pipelines
+- Learner pipelines
+- Logger plugins
+Documentation is served at https://josiahls.github.io/fastrl/ and is
+generated directly from this repo via `nbdev`.
+Basic DQN example:
+``` python
+from fastrl.loggers.core import *
+from fastrl.loggers.vscode_visualizers import *
+from fastrl.agents.dqn.basic import *
+from fastrl.agents.dqn.target import *
+from fastrl.data.block import *
+from fastrl.envs.gym import *
+import torch
+```
+``` python
+# Setup Loggers
+logger_base = ProgressBarLogger(epoch_on_pipe=EpocherCollector,
+ batch_on_pipe=BatchCollector)
+# Set up the core NN
+torch.manual_seed(0)
+model = DQN(4,2)
+# Setup the Agent
+agent = DQNAgent(model,[logger_base],max_steps=10000)
+# Setup the DataBlock
+block = DataBlock(
+ GymTransformBlock(agent=agent,nsteps=2,nskips=2,firstlast=True), # We basically merge 2 steps into 1 and skip.
+ (GymTransformBlock(agent=agent,nsteps=2,nskips=2,firstlast=True,n=100,include_images=True),VSCodeTransformBlock())
+)
+dls = L(block.dataloaders(['CartPole-v1']*1))
+# Setup the Learner
+learner = DQNLearner(model,dls,logger_bases=[logger_base],bs=128,max_sz=20_000,nsteps=2,lr=0.001,
+ batches=1000,
+ dp_augmentation_fns=[
+ # Plugin TargetDQN code
+ TargetModelUpdater.insert_dp(),
+ TargetModelQCalc.replace_dp()
+ ])
+learner.fit(10)
+#learner.validate()
+```
+# What's new?
+As we learned how to support as many RL agents as possible, we found
+that `fastrl==1.*` was severely limited in the models it could support.
+`fastrl==2.*` leverages the `nbdev` library for better documentation
+and more relevant testing, with `torchdata` as the base library. We
+will also build on the work of the `ptan`<sup>1</sup> library as a
+close reference for PyTorch-based reinforcement learning APIs.
+<sup>1</sup> “Shmuma/Ptan”. Github, 2020,
+https://github.com/Shmuma/ptan. Accessed 13 June 2020.
+## Install
+## PyPI
+The commands below install the alpha build of fastrl.
+**Cuda Install**
+`pip install fastrl==0.0.* --pre --extra-index-url https://download.pytorch.org/whl/nightly/cu113`
+**Cpu Install**
+`pip install fastrl==0.0.* --pre --extra-index-url https://download.pytorch.org/whl/nightly/cpu`
+## Docker (highly recommended)
+Install:
+[Nvidia-Docker](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker)
+Install: [docker-compose](https://docs.docker.com/compose/install/)
+``` bash
+docker-compose pull && docker-compose up
+```
+## Contributing
+After you clone this repository, please run `nbdev_install_hooks` in
+your terminal. This sets up git hooks that clean the notebooks of
+extraneous metadata (e.g. which cells you ran), which otherwise causes
+unnecessary merge conflicts.
+Before submitting a PR, check that the local library and the notebooks
+match; the script `nbdev_clean` can let you know if they differ.
+- If you changed an exported cell in one of the notebooks, export it to
+  the library with `nbdev_build_lib` or `make fastai2`.
+- If you changed the library, export it back to the notebooks with
+  `nbdev_update_lib`.
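+A minimal sketch of that workflow, assuming the nbdev commands named
+above are the ones your checkout provides (adjust to your nbdev
+version):
+``` bash
+# Sketch only: commands are those mentioned in this section.
+git clone https://github.com/josiahls/fastrl.git && cd fastrl
+nbdev_install_hooks    # install the notebook-cleaning git hooks
+# ...edit notebooks or the library...
+nbdev_clean            # clean notebook metadata before opening a PR
+nbdev_build_lib        # export notebook changes into the library
+nbdev_update_lib       # or: sync library edits back into the notebooks
+```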
+
+%package -n python3-fastrl
+Summary: fastrl is a reinforcement learning library that extends Fastai. This project is not affiliated with fastai or Jeremy Howard.
+Provides: python-fastrl
+BuildRequires: python3-devel
+BuildRequires: python3-setuptools
+BuildRequires: python3-pip
+%description -n python3-fastrl
+<!-- WARNING: THIS FILE WAS AUTOGENERATED! DO NOT EDIT! -->
+[![CI
+Status](https://github.com/josiahls/fastrl/workflows/Fastrl%20Testing/badge.svg)](https://github.com/josiahls/fastrl/actions?query=workflow%3A%22Fastrl+Testing%22)
+[![pypi fastrl
+version](https://img.shields.io/pypi/v/fastrl.svg)](https://pypi.python.org/pypi/fastrl)
+[![Docker Image
+Latest](https://img.shields.io/docker/v/josiahls/fastrl?label=Docker&sort=date.png)](https://hub.docker.com/repository/docker/josiahls/fastrl)
+[![Docker Image-Dev
+Latest](https://img.shields.io/docker/v/josiahls/fastrl-dev?label=Docker%20Dev&sort=date.png)](https://hub.docker.com/repository/docker/josiahls/fastrl-dev)
+[![fastrl python
+compatibility](https://img.shields.io/pypi/pyversions/fastrl.svg)](https://pypi.python.org/pypi/fastrl)
+[![fastrl
+license](https://img.shields.io/pypi/l/fastrl.svg)](https://pypi.python.org/pypi/fastrl)
+> Warning: This is in alpha, and so uses the latest torch and torchdata
+> (torchdata in particular). The base API, while semi-stable, might
+> change in future versions, so there is no promise of backward
+> compatibility. For the time being, it is best to hard-pin versions of
+> the library.
+> Warning: Even before fastrl==2.0.0, all models should converge
+> reasonably fast; however, the HRL models `DADS` and `DIAYN` will need
+> re-balancing and some extra features that the respective authors used.
+# Overview
+Fastai has been amazing for computer vision and tabular learning; one
+would wish the same were true for RL. The purpose of this repo is to
+provide a framework that is as easy as possible to get started with,
+but is also designed for testing new agents.
+This version of fastrl is essentially a wrapper around
+[torchdata](https://github.com/pytorch/data).
+It is built around 4 pipeline concepts (half of which come from fastai):
+- DataLoading/DataBlock pipelines
+- Agent pipelines
+- Learner pipelines
+- Logger plugins
+Documentation is served at https://josiahls.github.io/fastrl/ and is
+generated directly from this repo via `nbdev`.
+Basic DQN example:
+``` python
+from fastrl.loggers.core import *
+from fastrl.loggers.vscode_visualizers import *
+from fastrl.agents.dqn.basic import *
+from fastrl.agents.dqn.target import *
+from fastrl.data.block import *
+from fastrl.envs.gym import *
+import torch
+```
+``` python
+# Setup Loggers
+logger_base = ProgressBarLogger(epoch_on_pipe=EpocherCollector,
+ batch_on_pipe=BatchCollector)
+# Set up the core NN
+torch.manual_seed(0)
+model = DQN(4,2)
+# Setup the Agent
+agent = DQNAgent(model,[logger_base],max_steps=10000)
+# Setup the DataBlock
+block = DataBlock(
+ GymTransformBlock(agent=agent,nsteps=2,nskips=2,firstlast=True), # We basically merge 2 steps into 1 and skip.
+ (GymTransformBlock(agent=agent,nsteps=2,nskips=2,firstlast=True,n=100,include_images=True),VSCodeTransformBlock())
+)
+dls = L(block.dataloaders(['CartPole-v1']*1))
+# Setup the Learner
+learner = DQNLearner(model,dls,logger_bases=[logger_base],bs=128,max_sz=20_000,nsteps=2,lr=0.001,
+ batches=1000,
+ dp_augmentation_fns=[
+ # Plugin TargetDQN code
+ TargetModelUpdater.insert_dp(),
+ TargetModelQCalc.replace_dp()
+ ])
+learner.fit(10)
+#learner.validate()
+```
+# What's new?
+As we learned how to support as many RL agents as possible, we found
+that `fastrl==1.*` was severely limited in the models it could support.
+`fastrl==2.*` leverages the `nbdev` library for better documentation
+and more relevant testing, with `torchdata` as the base library. We
+will also build on the work of the `ptan`<sup>1</sup> library as a
+close reference for PyTorch-based reinforcement learning APIs.
+<sup>1</sup> “Shmuma/Ptan”. Github, 2020,
+https://github.com/Shmuma/ptan. Accessed 13 June 2020.
+## Install
+## PyPI
+The commands below install the alpha build of fastrl.
+**Cuda Install**
+`pip install fastrl==0.0.* --pre --extra-index-url https://download.pytorch.org/whl/nightly/cu113`
+**Cpu Install**
+`pip install fastrl==0.0.* --pre --extra-index-url https://download.pytorch.org/whl/nightly/cpu`
+## Docker (highly recommended)
+Install:
+[Nvidia-Docker](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker)
+Install: [docker-compose](https://docs.docker.com/compose/install/)
+``` bash
+docker-compose pull && docker-compose up
+```
+## Contributing
+After you clone this repository, please run `nbdev_install_hooks` in
+your terminal. This sets up git hooks that clean the notebooks of
+extraneous metadata (e.g. which cells you ran), which otherwise causes
+unnecessary merge conflicts.
+Before submitting a PR, check that the local library and the notebooks
+match; the script `nbdev_clean` can let you know if they differ.
+- If you changed an exported cell in one of the notebooks, export it to
+  the library with `nbdev_build_lib` or `make fastai2`.
+- If you changed the library, export it back to the notebooks with
+  `nbdev_update_lib`.
+
+%package help
+Summary: Development documents and examples for fastrl
+Provides: python3-fastrl-doc
+%description help
+<!-- WARNING: THIS FILE WAS AUTOGENERATED! DO NOT EDIT! -->
+[![CI
+Status](https://github.com/josiahls/fastrl/workflows/Fastrl%20Testing/badge.svg)](https://github.com/josiahls/fastrl/actions?query=workflow%3A%22Fastrl+Testing%22)
+[![pypi fastrl
+version](https://img.shields.io/pypi/v/fastrl.svg)](https://pypi.python.org/pypi/fastrl)
+[![Docker Image
+Latest](https://img.shields.io/docker/v/josiahls/fastrl?label=Docker&sort=date.png)](https://hub.docker.com/repository/docker/josiahls/fastrl)
+[![Docker Image-Dev
+Latest](https://img.shields.io/docker/v/josiahls/fastrl-dev?label=Docker%20Dev&sort=date.png)](https://hub.docker.com/repository/docker/josiahls/fastrl-dev)
+[![fastrl python
+compatibility](https://img.shields.io/pypi/pyversions/fastrl.svg)](https://pypi.python.org/pypi/fastrl)
+[![fastrl
+license](https://img.shields.io/pypi/l/fastrl.svg)](https://pypi.python.org/pypi/fastrl)
+> Warning: This is in alpha, and so uses the latest torch and torchdata
+> (torchdata in particular). The base API, while semi-stable, might
+> change in future versions, so there is no promise of backward
+> compatibility. For the time being, it is best to hard-pin versions of
+> the library.
+> Warning: Even before fastrl==2.0.0, all models should converge
+> reasonably fast; however, the HRL models `DADS` and `DIAYN` will need
+> re-balancing and some extra features that the respective authors used.
+# Overview
+Fastai has been amazing for computer vision and tabular learning; one
+would wish the same were true for RL. The purpose of this repo is to
+provide a framework that is as easy as possible to get started with,
+but is also designed for testing new agents.
+This version of fastrl is essentially a wrapper around
+[torchdata](https://github.com/pytorch/data).
+It is built around 4 pipeline concepts (half of which come from fastai):
+- DataLoading/DataBlock pipelines
+- Agent pipelines
+- Learner pipelines
+- Logger plugins
+Documentation is served at https://josiahls.github.io/fastrl/ and is
+generated directly from this repo via `nbdev`.
+Basic DQN example:
+``` python
+from fastrl.loggers.core import *
+from fastrl.loggers.vscode_visualizers import *
+from fastrl.agents.dqn.basic import *
+from fastrl.agents.dqn.target import *
+from fastrl.data.block import *
+from fastrl.envs.gym import *
+import torch
+```
+``` python
+# Setup Loggers
+logger_base = ProgressBarLogger(epoch_on_pipe=EpocherCollector,
+ batch_on_pipe=BatchCollector)
+# Set up the core NN
+torch.manual_seed(0)
+model = DQN(4,2)
+# Setup the Agent
+agent = DQNAgent(model,[logger_base],max_steps=10000)
+# Setup the DataBlock
+block = DataBlock(
+ GymTransformBlock(agent=agent,nsteps=2,nskips=2,firstlast=True), # We basically merge 2 steps into 1 and skip.
+ (GymTransformBlock(agent=agent,nsteps=2,nskips=2,firstlast=True,n=100,include_images=True),VSCodeTransformBlock())
+)
+dls = L(block.dataloaders(['CartPole-v1']*1))
+# Setup the Learner
+learner = DQNLearner(model,dls,logger_bases=[logger_base],bs=128,max_sz=20_000,nsteps=2,lr=0.001,
+ batches=1000,
+ dp_augmentation_fns=[
+ # Plugin TargetDQN code
+ TargetModelUpdater.insert_dp(),
+ TargetModelQCalc.replace_dp()
+ ])
+learner.fit(10)
+#learner.validate()
+```
+# What's new?
+As we learned how to support as many RL agents as possible, we found
+that `fastrl==1.*` was severely limited in the models it could support.
+`fastrl==2.*` leverages the `nbdev` library for better documentation
+and more relevant testing, with `torchdata` as the base library. We
+will also build on the work of the `ptan`<sup>1</sup> library as a
+close reference for PyTorch-based reinforcement learning APIs.
+<sup>1</sup> “Shmuma/Ptan”. Github, 2020,
+https://github.com/Shmuma/ptan. Accessed 13 June 2020.
+## Install
+## PyPI
+The commands below install the alpha build of fastrl.
+**Cuda Install**
+`pip install fastrl==0.0.* --pre --extra-index-url https://download.pytorch.org/whl/nightly/cu113`
+**Cpu Install**
+`pip install fastrl==0.0.* --pre --extra-index-url https://download.pytorch.org/whl/nightly/cpu`
+## Docker (highly recommended)
+Install:
+[Nvidia-Docker](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker)
+Install: [docker-compose](https://docs.docker.com/compose/install/)
+``` bash
+docker-compose pull && docker-compose up
+```
+## Contributing
+After you clone this repository, please run `nbdev_install_hooks` in
+your terminal. This sets up git hooks that clean the notebooks of
+extraneous metadata (e.g. which cells you ran), which otherwise causes
+unnecessary merge conflicts.
+Before submitting a PR, check that the local library and the notebooks
+match; the script `nbdev_clean` can let you know if they differ.
+- If you changed an exported cell in one of the notebooks, export it to
+  the library with `nbdev_build_lib` or `make fastai2`.
+- If you changed the library, export it back to the notebooks with
+  `nbdev_update_lib`.
+
+%prep
+%autosetup -n fastrl-0.0.47
+
+%build
+%py3_build
+
+%install
+%py3_install
+install -d -m755 %{buildroot}/%{_pkgdocdir}
+if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
+if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
+if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
+if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
+pushd %{buildroot}
+if [ -d usr/lib ]; then
+ find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/lib64 ]; then
+ find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/bin ]; then
+ find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/sbin ]; then
+ find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+touch doclist.lst
+if [ -d usr/share/man ]; then
+ find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
+fi
+popd
+mv %{buildroot}/filelist.lst .
+mv %{buildroot}/doclist.lst .
+
+%files -n python3-fastrl -f filelist.lst
+%dir %{python3_sitelib}/*
+
+%files help -f doclist.lst
+%{_docdir}/*
+
+%changelog
+* Mon May 29 2023 Python_Bot <Python_Bot@openeuler.org> - 0.0.47-1
+- Package Spec generated
diff --git a/sources b/sources
new file mode 100644
index 0000000..ca2d339
--- /dev/null
+++ b/sources
@@ -0,0 +1 @@
+eab196c0cecbb15260026cbcd61a7664 fastrl-0.0.47.tar.gz