%global _empty_manifest_terminate_build 0
Name: python-pytorchrl
Version: 3.2.11
Release: 1
Summary: Distributed RL implementations with Ray and PyTorch.
License: MIT License
URL: https://github.com/PyTorchRL/pytorchrl/
Source0: https://mirrors.nju.edu.cn/pypi/web/packages/b8/b0/dbd0fba3ddfa2b5cd8818e85497478b857f26a843aeba5cb35aa88cc2590/pytorchrl-3.2.11.tar.gz
BuildArch: noarch
Requires: python3-gym[atari]
Requires: python3-gym[accept-rom-license]
Requires: python3-ray[default]
Requires: python3-numpy
Requires: python3-pandas
Requires: python3-scipy
Requires: python3-lz4
Requires: python3-tqdm
Requires: python3-opencv-python
Requires: python3-wandb
Requires: python3-hydra-core
%description
## PyTorchRL: A PyTorch library for reinforcement learning
Deep reinforcement learning (DRL) has been very successful in recent years, but current methods still require vast amounts of data to solve non-trivial environments. Scaling up to more complex tasks requires frameworks that are flexible enough to allow prototyping and testing of new ideas, while avoiding the impractically slow experimental turnaround times associated with single-threaded implementations. PyTorchRL is a PyTorch-based library for DRL that makes it easy to assemble RL agents from a set of reusable and easily extendable core sub-modules. To reduce training times, PyTorchRL allows scaling agents with a parameterizable component called a Scheme, which makes it possible to define distributed architectures with great flexibility by specifying which operations should be decoupled, which should be parallelized, and how parallel tasks should be synchronized.
### Installation
```
conda create -y -n pytorchrl
conda activate pytorchrl
conda install pytorch torchvision cudatoolkit -c pytorch
pip install pytorchrl
```
### Documentation
PyTorchRL documentation can be found [here](https://pytorchrl.readthedocs.io/en/latest/).
### Citing PyTorchRL
If you use PyTorchRL in your work, please cite the accompanying [paper](https://arxiv.org/abs/2007.02622):
```
@misc{bou2021pytorchrl,
      title={PyTorchRL: Modular and Distributed Reinforcement Learning in PyTorch},
      author={Albert Bou and Gianni De Fabritiis},
      year={2021},
      eprint={2007.02622},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```
%package -n python3-pytorchrl
Summary: Distributed RL implementations with Ray and PyTorch.
Provides: python-pytorchrl
BuildRequires: python3-devel
BuildRequires: python3-setuptools
BuildRequires: python3-pip
%description -n python3-pytorchrl
## PyTorchRL: A PyTorch library for reinforcement learning
Deep reinforcement learning (DRL) has been very successful in recent years, but current methods still require vast amounts of data to solve non-trivial environments. Scaling up to more complex tasks requires frameworks that are flexible enough to allow prototyping and testing of new ideas, while avoiding the impractically slow experimental turnaround times associated with single-threaded implementations. PyTorchRL is a PyTorch-based library for DRL that makes it easy to assemble RL agents from a set of reusable and easily extendable core sub-modules. To reduce training times, PyTorchRL allows scaling agents with a parameterizable component called a Scheme, which makes it possible to define distributed architectures with great flexibility by specifying which operations should be decoupled, which should be parallelized, and how parallel tasks should be synchronized.
### Installation
```
conda create -y -n pytorchrl
conda activate pytorchrl
conda install pytorch torchvision cudatoolkit -c pytorch
pip install pytorchrl
```
### Documentation
PyTorchRL documentation can be found [here](https://pytorchrl.readthedocs.io/en/latest/).
### Citing PyTorchRL
If you use PyTorchRL in your work, please cite the accompanying [paper](https://arxiv.org/abs/2007.02622):
```
@misc{bou2021pytorchrl,
      title={PyTorchRL: Modular and Distributed Reinforcement Learning in PyTorch},
      author={Albert Bou and Gianni De Fabritiis},
      year={2021},
      eprint={2007.02622},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```
%package help
Summary: Development documents and examples for pytorchrl
Provides: python3-pytorchrl-doc
%description help
## PyTorchRL: A PyTorch library for reinforcement learning
Deep reinforcement learning (DRL) has been very successful in recent years, but current methods still require vast amounts of data to solve non-trivial environments. Scaling up to more complex tasks requires frameworks that are flexible enough to allow prototyping and testing of new ideas, while avoiding the impractically slow experimental turnaround times associated with single-threaded implementations. PyTorchRL is a PyTorch-based library for DRL that makes it easy to assemble RL agents from a set of reusable and easily extendable core sub-modules. To reduce training times, PyTorchRL allows scaling agents with a parameterizable component called a Scheme, which makes it possible to define distributed architectures with great flexibility by specifying which operations should be decoupled, which should be parallelized, and how parallel tasks should be synchronized.
### Installation
```
conda create -y -n pytorchrl
conda activate pytorchrl
conda install pytorch torchvision cudatoolkit -c pytorch
pip install pytorchrl
```
### Documentation
PyTorchRL documentation can be found [here](https://pytorchrl.readthedocs.io/en/latest/).
### Citing PyTorchRL
If you use PyTorchRL in your work, please cite the accompanying [paper](https://arxiv.org/abs/2007.02622):
```
@misc{bou2021pytorchrl,
      title={PyTorchRL: Modular and Distributed Reinforcement Learning in PyTorch},
      author={Albert Bou and Gianni De Fabritiis},
      year={2021},
      eprint={2007.02622},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```
%prep
%autosetup -n pytorchrl-3.2.11
%build
%py3_build
%install
%py3_install
install -d -m755 %{buildroot}/%{_pkgdocdir}
if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
pushd %{buildroot}
if [ -d usr/lib ]; then
find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/lib64 ]; then
find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/bin ]; then
find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/sbin ]; then
find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
fi
touch doclist.lst
if [ -d usr/share/man ]; then
find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
fi
popd
mv %{buildroot}/filelist.lst .
mv %{buildroot}/doclist.lst .
%files -n python3-pytorchrl -f filelist.lst
%dir %{python3_sitelib}/*
%files help -f doclist.lst
%{_docdir}/*
%changelog
* Fri May 05 2023 Python_Bot <Python_Bot@openeuler.org> - 3.2.11-1
- Package Spec generated