authorCoprDistGit <infra@openeuler.org>2023-05-15 05:38:23 +0000
committerCoprDistGit <infra@openeuler.org>2023-05-15 05:38:23 +0000
commit45b9c6dc0140371761b60ed144f8b221363ba071 (patch)
tree778efa1d14b71c1dffa17879ea12e3be8094e91c
parentefdcd794c606d81e40e63fc1ae444577c19ea2ae (diff)
automatic import of python-dropblock
-rw-r--r--.gitignore1
-rw-r--r--python-dropblock.spec449
-rw-r--r--sources1
3 files changed, 451 insertions, 0 deletions
diff --git a/.gitignore b/.gitignore
index e69de29..0bd0841 100644
--- a/.gitignore
+++ b/.gitignore
@@ -0,0 +1 @@
+/dropblock-0.3.0.tar.gz
diff --git a/python-dropblock.spec b/python-dropblock.spec
new file mode 100644
index 0000000..a8d3911
--- /dev/null
+++ b/python-dropblock.spec
@@ -0,0 +1,449 @@
+%global _empty_manifest_terminate_build 0
+Name: python-dropblock
+Version: 0.3.0
+Release: 1
+Summary: Implementation of DropBlock: A regularization method for convolutional networks in PyTorch.
+License: MIT
+URL: https://github.com/miguelvr/dropblock
+Source0: https://mirrors.nju.edu.cn/pypi/web/packages/f4/68/ccac0d2166ba6703bfcb4e1a19bb38f76cc677c70ccc85f1914167397f8b/dropblock-0.3.0.tar.gz
+BuildArch: noarch
+
+Requires: python3-numpy
+Requires: python3-torch
+
+%description
+# DropBlock
+
+![build](https://travis-ci.org/miguelvr/dropblock.png?branch=master)
+
+
+Implementation of [DropBlock: A regularization method for convolutional networks](https://arxiv.org/pdf/1810.12890.pdf)
+in PyTorch.
+
+## Abstract
+
+Deep neural networks often work well when they are over-parameterized
+and trained with a massive amount of noise and regularization, such as
+weight decay and dropout. Although dropout is widely used as a regularization
+technique for fully connected layers, it is often less effective for convolutional layers.
+This lack of success of dropout for convolutional layers is perhaps due to the fact
+that activation units in convolutional layers are spatially correlated so
+information can still flow through convolutional networks despite dropout.
+Thus a structured form of dropout is needed to regularize convolutional networks.
+In this paper, we introduce DropBlock, a form of structured dropout, where units in a
+contiguous region of a feature map are dropped together.
+We found that applying DropBlock in skip connections in addition to the
+convolution layers increases accuracy. Also, gradually increasing the number
+of dropped units during training leads to better accuracy and makes training more robust to hyperparameter choices.
+Extensive experiments show that DropBlock works better than dropout in regularizing
+convolutional networks. On ImageNet classification, ResNet-50 architecture with
+DropBlock achieves 78.13% accuracy, an improvement of more than 1.6% over the baseline.
+On COCO detection, DropBlock improves Average Precision of RetinaNet from 36.8% to 38.4%.
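
The block-dropping idea described above can be sketched in a few lines. The following is an illustrative NumPy re-implementation, not the package's PyTorch code; the function name `dropblock_2d` is hypothetical, and the `gamma` formula follows Eq. 1 of the paper:

```python
import numpy as np

def dropblock_2d(x, block_size=3, drop_prob=0.1, rng=None):
    """Zero out contiguous block_size x block_size regions of a (H, W) map.

    gamma is the Bernoulli rate for block *centers*, chosen so the expected
    fraction of dropped units is roughly drop_prob (Eq. 1 of the paper).
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    h, w = x.shape
    gamma = (drop_prob / block_size ** 2) * (h * w) / (
        (h - block_size + 1) * (w - block_size + 1))
    # sample block centers only where a full block fits inside the map
    centers = rng.random((h - block_size + 1, w - block_size + 1)) < gamma
    mask = np.ones_like(x, dtype=float)
    for i, j in zip(*np.nonzero(centers)):
        mask[i:i + block_size, j:j + block_size] = 0.0
    keep = mask.mean()
    # rescale the kept activations so the expected magnitude is unchanged
    return (x * mask / keep if keep > 0 else x * mask), mask

x = np.ones((16, 16))
out, mask = dropblock_2d(x, block_size=3, drop_prob=0.3)
```

Unlike element-wise dropout, each sampled center zeroes an entire `block_size x block_size` square, so spatially correlated activations are removed together.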
+
+
+## Installation
+
+Install directly from PyPI:
+
+ pip install dropblock
+
+or the bleeding-edge version from GitHub:
+
+ pip install git+https://github.com/miguelvr/dropblock.git#egg=dropblock
+
+**NOTE**: Implementation and tests were done in Python 3.6; if you have problems with other versions of Python, please open an issue.
+
+## Usage
+
+
+For 2D inputs (DropBlock2D):
+
+```python
+import torch
+from dropblock import DropBlock2D
+
+# (bsize, n_feats, height, width)
+x = torch.rand(100, 10, 16, 16)
+
+drop_block = DropBlock2D(block_size=3, drop_prob=0.3)
+regularized_x = drop_block(x)
+```
+
+For 3D inputs (DropBlock3D):
+
+```python
+import torch
+from dropblock import DropBlock3D
+
+# (bsize, n_feats, depth, height, width)
+x = torch.rand(100, 10, 16, 16, 16)
+
+drop_block = DropBlock3D(block_size=3, drop_prob=0.3)
+regularized_x = drop_block(x)
+```
+
+Scheduled DropBlock:
+
+```python
+import torch
+from dropblock import DropBlock2D, LinearScheduler
+
+# (bsize, n_feats, height, width)
+loader = [torch.rand(20, 10, 16, 16) for _ in range(10)]
+
+drop_block = LinearScheduler(
+ DropBlock2D(block_size=3, drop_prob=0.),
+ start_value=0.,
+ stop_value=0.25,
+ nr_steps=5
+ )
+
+probs = []
+for x in loader:
+ drop_block.step()
+ regularized_x = drop_block(x)
+ probs.append(drop_block.dropblock.drop_prob)
+
+print(probs)
+```
+
+The drop probabilities will be:
+```
+>>> [0. , 0.0625, 0.125 , 0.1875, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25]
+```
+
+The user should call `step()` at the start of the batch loop,
+or at the start of the model's `forward` call.
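
The printed values follow a linear ramp that saturates at `stop_value`. The scheduling logic alone can be reproduced without PyTorch; this is a hypothetical sketch (`LinearRamp` is not part of the package):

```python
class LinearRamp:
    """Mimic LinearScheduler's drop_prob ramp: start -> stop over nr_steps calls."""

    def __init__(self, start_value=0.0, stop_value=0.25, nr_steps=5):
        step = (stop_value - start_value) / (nr_steps - 1)
        self.values = [start_value + i * step for i in range(nr_steps)]
        self.i = -1  # step() is called once per batch, before the forward pass

    def step(self):
        # advance until the last value, then hold it
        if self.i < len(self.values) - 1:
            self.i += 1
        return self.values[self.i]

ramp = LinearRamp()
probs = [ramp.step() for _ in range(10)]
print(probs)  # ramps 0.0 -> 0.25 over 5 calls, then stays at 0.25
```

Because `step()` runs before each batch, the first few batches see a low drop probability, matching the list shown above.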
+
+Check [examples/resnet-cifar10.py](examples/resnet-cifar10.py) to
+see an implementation example.
+
+## Implementation details
+
+We use `drop_prob` instead of `keep_prob` as a matter of preference,
+and to keep the argument consistent with PyTorch's dropout.
+Regardless, everything else should work similarly to what is described in the paper.
+
+## Benchmark
+
+Refer to [BENCHMARK.md](BENCHMARK.md).
+
+## Reference
+[Ghiasi et al., 2018] DropBlock: A regularization method for convolutional networks
+
+## TODO
+- [x] Scheduled DropBlock
+- [x] Get benchmark numbers
+- [x] Extend the concept for 3D images
+
+
+
+
+%package -n python3-dropblock
+Summary: Implementation of DropBlock: A regularization method for convolutional networks in PyTorch.
+Provides: python-dropblock
+BuildRequires: python3-devel
+BuildRequires: python3-setuptools
+BuildRequires: python3-pip
+%description -n python3-dropblock
+# DropBlock
+
+![build](https://travis-ci.org/miguelvr/dropblock.png?branch=master)
+
+
+Implementation of [DropBlock: A regularization method for convolutional networks](https://arxiv.org/pdf/1810.12890.pdf)
+in PyTorch.
+
+## Abstract
+
+Deep neural networks often work well when they are over-parameterized
+and trained with a massive amount of noise and regularization, such as
+weight decay and dropout. Although dropout is widely used as a regularization
+technique for fully connected layers, it is often less effective for convolutional layers.
+This lack of success of dropout for convolutional layers is perhaps due to the fact
+that activation units in convolutional layers are spatially correlated so
+information can still flow through convolutional networks despite dropout.
+Thus a structured form of dropout is needed to regularize convolutional networks.
+In this paper, we introduce DropBlock, a form of structured dropout, where units in a
+contiguous region of a feature map are dropped together.
+We found that applying DropBlock in skip connections in addition to the
+convolution layers increases accuracy. Also, gradually increasing the number
+of dropped units during training leads to better accuracy and makes training more robust to hyperparameter choices.
+Extensive experiments show that DropBlock works better than dropout in regularizing
+convolutional networks. On ImageNet classification, ResNet-50 architecture with
+DropBlock achieves 78.13% accuracy, an improvement of more than 1.6% over the baseline.
+On COCO detection, DropBlock improves Average Precision of RetinaNet from 36.8% to 38.4%.
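
The block-dropping idea described above can be sketched in a few lines. The following is an illustrative NumPy re-implementation, not the package's PyTorch code; the function name `dropblock_2d` is hypothetical, and the `gamma` formula follows Eq. 1 of the paper:

```python
import numpy as np

def dropblock_2d(x, block_size=3, drop_prob=0.1, rng=None):
    """Zero out contiguous block_size x block_size regions of a (H, W) map.

    gamma is the Bernoulli rate for block *centers*, chosen so the expected
    fraction of dropped units is roughly drop_prob (Eq. 1 of the paper).
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    h, w = x.shape
    gamma = (drop_prob / block_size ** 2) * (h * w) / (
        (h - block_size + 1) * (w - block_size + 1))
    # sample block centers only where a full block fits inside the map
    centers = rng.random((h - block_size + 1, w - block_size + 1)) < gamma
    mask = np.ones_like(x, dtype=float)
    for i, j in zip(*np.nonzero(centers)):
        mask[i:i + block_size, j:j + block_size] = 0.0
    keep = mask.mean()
    # rescale the kept activations so the expected magnitude is unchanged
    return (x * mask / keep if keep > 0 else x * mask), mask

x = np.ones((16, 16))
out, mask = dropblock_2d(x, block_size=3, drop_prob=0.3)
```

Unlike element-wise dropout, each sampled center zeroes an entire `block_size x block_size` square, so spatially correlated activations are removed together.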
+
+
+## Installation
+
+Install directly from PyPI:
+
+ pip install dropblock
+
+or the bleeding-edge version from GitHub:
+
+ pip install git+https://github.com/miguelvr/dropblock.git#egg=dropblock
+
+**NOTE**: Implementation and tests were done in Python 3.6; if you have problems with other versions of Python, please open an issue.
+
+## Usage
+
+
+For 2D inputs (DropBlock2D):
+
+```python
+import torch
+from dropblock import DropBlock2D
+
+# (bsize, n_feats, height, width)
+x = torch.rand(100, 10, 16, 16)
+
+drop_block = DropBlock2D(block_size=3, drop_prob=0.3)
+regularized_x = drop_block(x)
+```
+
+For 3D inputs (DropBlock3D):
+
+```python
+import torch
+from dropblock import DropBlock3D
+
+# (bsize, n_feats, depth, height, width)
+x = torch.rand(100, 10, 16, 16, 16)
+
+drop_block = DropBlock3D(block_size=3, drop_prob=0.3)
+regularized_x = drop_block(x)
+```
+
+Scheduled DropBlock:
+
+```python
+import torch
+from dropblock import DropBlock2D, LinearScheduler
+
+# (bsize, n_feats, height, width)
+loader = [torch.rand(20, 10, 16, 16) for _ in range(10)]
+
+drop_block = LinearScheduler(
+ DropBlock2D(block_size=3, drop_prob=0.),
+ start_value=0.,
+ stop_value=0.25,
+ nr_steps=5
+ )
+
+probs = []
+for x in loader:
+ drop_block.step()
+ regularized_x = drop_block(x)
+ probs.append(drop_block.dropblock.drop_prob)
+
+print(probs)
+```
+
+The drop probabilities will be:
+```
+>>> [0. , 0.0625, 0.125 , 0.1875, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25]
+```
+
+The user should call `step()` at the start of the batch loop,
+or at the start of the model's `forward` call.
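
The printed values follow a linear ramp that saturates at `stop_value`. The scheduling logic alone can be reproduced without PyTorch; this is a hypothetical sketch (`LinearRamp` is not part of the package):

```python
class LinearRamp:
    """Mimic LinearScheduler's drop_prob ramp: start -> stop over nr_steps calls."""

    def __init__(self, start_value=0.0, stop_value=0.25, nr_steps=5):
        step = (stop_value - start_value) / (nr_steps - 1)
        self.values = [start_value + i * step for i in range(nr_steps)]
        self.i = -1  # step() is called once per batch, before the forward pass

    def step(self):
        # advance until the last value, then hold it
        if self.i < len(self.values) - 1:
            self.i += 1
        return self.values[self.i]

ramp = LinearRamp()
probs = [ramp.step() for _ in range(10)]
print(probs)  # ramps 0.0 -> 0.25 over 5 calls, then stays at 0.25
```

Because `step()` runs before each batch, the first few batches see a low drop probability, matching the list shown above.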
+
+Check [examples/resnet-cifar10.py](examples/resnet-cifar10.py) to
+see an implementation example.
+
+## Implementation details
+
+We use `drop_prob` instead of `keep_prob` as a matter of preference,
+and to keep the argument consistent with PyTorch's dropout.
+Regardless, everything else should work similarly to what is described in the paper.
+
+## Benchmark
+
+Refer to [BENCHMARK.md](BENCHMARK.md).
+
+## Reference
+[Ghiasi et al., 2018] DropBlock: A regularization method for convolutional networks
+
+## TODO
+- [x] Scheduled DropBlock
+- [x] Get benchmark numbers
+- [x] Extend the concept for 3D images
+
+
+
+
+%package help
+Summary: Development documents and examples for dropblock
+Provides: python3-dropblock-doc
+%description help
+# DropBlock
+
+![build](https://travis-ci.org/miguelvr/dropblock.png?branch=master)
+
+
+Implementation of [DropBlock: A regularization method for convolutional networks](https://arxiv.org/pdf/1810.12890.pdf)
+in PyTorch.
+
+## Abstract
+
+Deep neural networks often work well when they are over-parameterized
+and trained with a massive amount of noise and regularization, such as
+weight decay and dropout. Although dropout is widely used as a regularization
+technique for fully connected layers, it is often less effective for convolutional layers.
+This lack of success of dropout for convolutional layers is perhaps due to the fact
+that activation units in convolutional layers are spatially correlated so
+information can still flow through convolutional networks despite dropout.
+Thus a structured form of dropout is needed to regularize convolutional networks.
+In this paper, we introduce DropBlock, a form of structured dropout, where units in a
+contiguous region of a feature map are dropped together.
+We found that applying DropBlock in skip connections in addition to the
+convolution layers increases accuracy. Also, gradually increasing the number
+of dropped units during training leads to better accuracy and makes training more robust to hyperparameter choices.
+Extensive experiments show that DropBlock works better than dropout in regularizing
+convolutional networks. On ImageNet classification, ResNet-50 architecture with
+DropBlock achieves 78.13% accuracy, an improvement of more than 1.6% over the baseline.
+On COCO detection, DropBlock improves Average Precision of RetinaNet from 36.8% to 38.4%.
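
The block-dropping idea described above can be sketched in a few lines. The following is an illustrative NumPy re-implementation, not the package's PyTorch code; the function name `dropblock_2d` is hypothetical, and the `gamma` formula follows Eq. 1 of the paper:

```python
import numpy as np

def dropblock_2d(x, block_size=3, drop_prob=0.1, rng=None):
    """Zero out contiguous block_size x block_size regions of a (H, W) map.

    gamma is the Bernoulli rate for block *centers*, chosen so the expected
    fraction of dropped units is roughly drop_prob (Eq. 1 of the paper).
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    h, w = x.shape
    gamma = (drop_prob / block_size ** 2) * (h * w) / (
        (h - block_size + 1) * (w - block_size + 1))
    # sample block centers only where a full block fits inside the map
    centers = rng.random((h - block_size + 1, w - block_size + 1)) < gamma
    mask = np.ones_like(x, dtype=float)
    for i, j in zip(*np.nonzero(centers)):
        mask[i:i + block_size, j:j + block_size] = 0.0
    keep = mask.mean()
    # rescale the kept activations so the expected magnitude is unchanged
    return (x * mask / keep if keep > 0 else x * mask), mask

x = np.ones((16, 16))
out, mask = dropblock_2d(x, block_size=3, drop_prob=0.3)
```

Unlike element-wise dropout, each sampled center zeroes an entire `block_size x block_size` square, so spatially correlated activations are removed together.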
+
+
+## Installation
+
+Install directly from PyPI:
+
+ pip install dropblock
+
+or the bleeding-edge version from GitHub:
+
+ pip install git+https://github.com/miguelvr/dropblock.git#egg=dropblock
+
+**NOTE**: Implementation and tests were done in Python 3.6; if you have problems with other versions of Python, please open an issue.
+
+## Usage
+
+
+For 2D inputs (DropBlock2D):
+
+```python
+import torch
+from dropblock import DropBlock2D
+
+# (bsize, n_feats, height, width)
+x = torch.rand(100, 10, 16, 16)
+
+drop_block = DropBlock2D(block_size=3, drop_prob=0.3)
+regularized_x = drop_block(x)
+```
+
+For 3D inputs (DropBlock3D):
+
+```python
+import torch
+from dropblock import DropBlock3D
+
+# (bsize, n_feats, depth, height, width)
+x = torch.rand(100, 10, 16, 16, 16)
+
+drop_block = DropBlock3D(block_size=3, drop_prob=0.3)
+regularized_x = drop_block(x)
+```
+
+Scheduled DropBlock:
+
+```python
+import torch
+from dropblock import DropBlock2D, LinearScheduler
+
+# (bsize, n_feats, height, width)
+loader = [torch.rand(20, 10, 16, 16) for _ in range(10)]
+
+drop_block = LinearScheduler(
+ DropBlock2D(block_size=3, drop_prob=0.),
+ start_value=0.,
+ stop_value=0.25,
+ nr_steps=5
+ )
+
+probs = []
+for x in loader:
+ drop_block.step()
+ regularized_x = drop_block(x)
+ probs.append(drop_block.dropblock.drop_prob)
+
+print(probs)
+```
+
+The drop probabilities will be:
+```
+>>> [0. , 0.0625, 0.125 , 0.1875, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25]
+```
+
+The user should call `step()` at the start of the batch loop,
+or at the start of the model's `forward` call.
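
The printed values follow a linear ramp that saturates at `stop_value`. The scheduling logic alone can be reproduced without PyTorch; this is a hypothetical sketch (`LinearRamp` is not part of the package):

```python
class LinearRamp:
    """Mimic LinearScheduler's drop_prob ramp: start -> stop over nr_steps calls."""

    def __init__(self, start_value=0.0, stop_value=0.25, nr_steps=5):
        step = (stop_value - start_value) / (nr_steps - 1)
        self.values = [start_value + i * step for i in range(nr_steps)]
        self.i = -1  # step() is called once per batch, before the forward pass

    def step(self):
        # advance until the last value, then hold it
        if self.i < len(self.values) - 1:
            self.i += 1
        return self.values[self.i]

ramp = LinearRamp()
probs = [ramp.step() for _ in range(10)]
print(probs)  # ramps 0.0 -> 0.25 over 5 calls, then stays at 0.25
```

Because `step()` runs before each batch, the first few batches see a low drop probability, matching the list shown above.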
+
+Check [examples/resnet-cifar10.py](examples/resnet-cifar10.py) to
+see an implementation example.
+
+## Implementation details
+
+We use `drop_prob` instead of `keep_prob` as a matter of preference,
+and to keep the argument consistent with PyTorch's dropout.
+Regardless, everything else should work similarly to what is described in the paper.
+
+## Benchmark
+
+Refer to [BENCHMARK.md](BENCHMARK.md).
+
+## Reference
+[Ghiasi et al., 2018] DropBlock: A regularization method for convolutional networks
+
+## TODO
+- [x] Scheduled DropBlock
+- [x] Get benchmark numbers
+- [x] Extend the concept for 3D images
+
+
+
+
+%prep
+%autosetup -n dropblock-0.3.0
+
+%build
+%py3_build
+
+%install
+%py3_install
+install -d -m755 %{buildroot}/%{_pkgdocdir}
+if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
+if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
+if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
+if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
+pushd %{buildroot}
+if [ -d usr/lib ]; then
+ find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/lib64 ]; then
+ find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/bin ]; then
+ find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/sbin ]; then
+ find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+touch doclist.lst
+if [ -d usr/share/man ]; then
+ find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
+fi
+popd
+mv %{buildroot}/filelist.lst .
+mv %{buildroot}/doclist.lst .
+
+%files -n python3-dropblock -f filelist.lst
+%dir %{python3_sitelib}/*
+
+%files help -f doclist.lst
+%{_docdir}/*
+
+%changelog
+* Mon May 15 2023 Python_Bot <Python_Bot@openeuler.org> - 0.3.0-1
+- Package Spec generated
diff --git a/sources b/sources
new file mode 100644
index 0000000..bd732a2
--- /dev/null
+++ b/sources
@@ -0,0 +1 @@
+5a3e3d8d99b852fd1d5536d94ccd48f9 dropblock-0.3.0.tar.gz