| author | CoprDistGit <infra@openeuler.org> | 2023-04-11 03:04:45 +0000 |
|---|---|---|
| committer | CoprDistGit <infra@openeuler.org> | 2023-04-11 03:04:45 +0000 |
| commit | 79077326b65ed654d5195fd78cf01f2195422b85 (patch) | |
| tree | 8c281c28bb784b6d8355f05455590865d0b295d6 /python-torchfunc-nightly.spec | |
| parent | 6f35a8c9a4274453fd8240b8b048b649b82bd8a5 (diff) | |
automatic import of python-torchfunc-nightly
Diffstat (limited to 'python-torchfunc-nightly.spec')
| -rw-r--r-- | python-torchfunc-nightly.spec | 463 |
1 files changed, 463 insertions, 0 deletions
diff --git a/python-torchfunc-nightly.spec b/python-torchfunc-nightly.spec new file mode 100644 index 0000000..f197095 --- /dev/null +++ b/python-torchfunc-nightly.spec @@ -0,0 +1,463 @@ +%global _empty_manifest_terminate_build 0 +Name: python-torchfunc-nightly +Version: 1663034600 +Release: 1 +Summary: PyTorch functions to improve performance, analyse models and make your life easier. +License: MIT +URL: https://github.com/szymonmaszke/torchfunc +Source0: https://mirrors.nju.edu.cn/pypi/web/packages/d1/4f/ab187f42f60ca8a06bf1e22832fd52e9990326698bd4834baa833829f217/torchfunc-nightly-1663034600.tar.gz +BuildArch: noarch + +Requires: python3-torch + +%description +<img align="left" width="256" height="256" src="https://github.com/szymonmaszke/torchfunc/blob/master/assets/logos/medium.png"> + +* Improve and analyse performance of your neural network (e.g. Tensor Cores compatibility) +* Record/analyse internal state of `torch.nn.Module` as data passes through it +* Do the above based on external conditions (using single `Callable` to specify it) +* Day-to-day neural network related duties (model size, seeding, time measurements etc.) +* Get information about your host operating system, `torch.nn.Module` device, CUDA +capabilities etc. 
+ + +| Version | Docs | Tests | Coverage | Style | PyPI | Python | PyTorch | Docker | Roadmap | +|---------|------|-------|----------|-------|------|--------|---------|--------|---------| +| [](https://github.com/szymonmaszke/torchfunc/releases) | [](https://szymonmaszke.github.io/torchfunc/) |  |  | [](https://codebeat.co/projects/github-com-szymonmaszke-torchfunc-master) | [](https://pypi.org/project/torchfunc/) | [](https://www.python.org/) | [](https://pytorch.org/) | [](https://hub.docker.com/r/szymonmaszke/torchfunc) | [](https://github.com/szymonmaszke/torchfunc/blob/master/ROADMAP.md) | + +# :bulb: Examples + +__Check documentation here:__ [https://szymonmaszke.github.io/torchfunc](https://szymonmaszke.github.io/torchfunc) + +## 1. Getting performance tips + +- __Get instant performance tips about your module. All problems described by comments +will be shown by `torchfunc.performance.tips`:__ + +```python +class Model(torch.nn.Module): + def __init__(self): + super().__init__() + self.convolution = torch.nn.Sequential( + torch.nn.Conv2d(1, 32, 3), + torch.nn.ReLU(inplace=True), # Inplace may harm kernel fusion + torch.nn.Conv2d(32, 128, 3, groups=32), # Depthwise is slower in PyTorch + torch.nn.ReLU(inplace=True), # Same as before + torch.nn.Conv2d(128, 250, 3), # Wrong output size for TensorCores + ) + + self.classifier = torch.nn.Sequential( + torch.nn.Linear(250, 64), # Wrong input size for TensorCores + torch.nn.ReLU(), # Fine, no info about this layer + torch.nn.Linear(64, 10), # Wrong output size for TensorCores + ) + + def forward(self, inputs): + convolved = torch.nn.AdaptiveAvgPool2d(1)(self.convolution(inputs)).flatten() + return self.classifier(convolved) + +# All you have to do +print(torchfunc.performance.tips(Model())) +``` + +## 2. 
Seeding, weight freezing and others
+
+- __Seed globally (including `numpy` and `cuda`), freeze weights, check inference time and model size:__
+
+```python
+# Inb4 MNIST, you can use any module with these functions
+model = torch.nn.Linear(784, 10)
+torchfunc.seed(0)
+frozen = torchfunc.module.freeze(model, bias=False)
+
+with torchfunc.Timer() as timer:
+ frozen(torch.randn(32, 784))
+ print(timer.checkpoint()) # Time since the beginning
+ frozen(torch.randn(128, 784))
+ print(timer.checkpoint()) # Since last checkpoint
+
+print(f"Overall time {timer}; Model size: {torchfunc.sizeof(frozen)}")
+```
+
+## 3. Record `torch.nn.Module` internal state
+
+- __Record and sum per-layer activation statistics as data passes through the network:__
+
+```python
+# Still MNIST, but any module can be put in its place
+model = torch.nn.Sequential(
+ torch.nn.Linear(784, 100),
+ torch.nn.ReLU(),
+ torch.nn.Linear(100, 50),
+ torch.nn.ReLU(),
+ torch.nn.Linear(50, 10),
+)
+# Recorder which sums all inputs to layers
+recorder = torchfunc.hooks.recorders.ForwardPre(reduction=lambda x, y: x+y)
+# Record only for torch.nn.Linear
+recorder.children(model, types=(torch.nn.Linear,))
+# Train your network normally (or pass data through it)
+...
+# Activations of all neurons of the first layer!
+print(recorder[1]) # You can also post-process this data easily with apply
+```
+
+For other examples (and how to use conditions), see the [documentation](https://szymonmaszke.github.io/torchfunc/)
+
+# :wrench: Installation
+
+## :snake: [pip](<https://pypi.org/project/torchfunc/>)
+
+### Latest release:
+
+```shell
+pip install --user torchfunc
+```
+
+### Nightly:
+
+```shell
+pip install --user torchfunc-nightly
+```
+
+## :whale2: [Docker](https://hub.docker.com/r/szymonmaszke/torchfunc)
+
+__CPU standalone__ and various versions of __GPU enabled__ images are available
+at [dockerhub](https://hub.docker.com/r/szymonmaszke/torchfunc/tags).
+
+For CPU quickstart, issue:
+
+```shell
+docker pull szymonmaszke/torchfunc:18.04
+```
+
+Nightly builds are also available, just prefix the tag with `nightly_`. If you are going for a `GPU` image, make sure you have
+[nvidia/docker](https://github.com/NVIDIA/nvidia-docker) installed and its runtime set.
+
+# :question: Contributing
+
+If you find any issue or you think some functionality may be useful to others and fits this library, please [open a new Issue](https://help.github.com/en/articles/creating-an-issue) or [create a Pull Request](https://help.github.com/en/articles/creating-a-pull-request-from-a-fork).
+
+To get an overview of things one can do to help this project, see the [Roadmap](https://github.com/szymonmaszke/torchfunc/blob/master/ROADMAP.md).
+
+
+
+
+%package -n python3-torchfunc-nightly
+Summary: PyTorch functions to improve performance, analyse models and make your life easier.
+Provides: python-torchfunc-nightly
+BuildRequires: python3-devel
+BuildRequires: python3-setuptools
+BuildRequires: python3-pip
+%description -n python3-torchfunc-nightly
+<img align="left" width="256" height="256" src="https://github.com/szymonmaszke/torchfunc/blob/master/assets/logos/medium.png">
+
+* Improve and analyse performance of your neural network (e.g. Tensor Cores compatibility)
+* Record/analyse internal state of `torch.nn.Module` as data passes through it
+* Do the above based on external conditions (using single `Callable` to specify it)
+* Day-to-day neural network related duties (model size, seeding, time measurements etc.)
+* Get information about your host operating system, `torch.nn.Module` device, CUDA
+capabilities etc.
+ + +| Version | Docs | Tests | Coverage | Style | PyPI | Python | PyTorch | Docker | Roadmap | +|---------|------|-------|----------|-------|------|--------|---------|--------|---------| +| [](https://github.com/szymonmaszke/torchfunc/releases) | [](https://szymonmaszke.github.io/torchfunc/) |  |  | [](https://codebeat.co/projects/github-com-szymonmaszke-torchfunc-master) | [](https://pypi.org/project/torchfunc/) | [](https://www.python.org/) | [](https://pytorch.org/) | [](https://hub.docker.com/r/szymonmaszke/torchfunc) | [](https://github.com/szymonmaszke/torchfunc/blob/master/ROADMAP.md) | + +# :bulb: Examples + +__Check documentation here:__ [https://szymonmaszke.github.io/torchfunc](https://szymonmaszke.github.io/torchfunc) + +## 1. Getting performance tips + +- __Get instant performance tips about your module. All problems described by comments +will be shown by `torchfunc.performance.tips`:__ + +```python +class Model(torch.nn.Module): + def __init__(self): + super().__init__() + self.convolution = torch.nn.Sequential( + torch.nn.Conv2d(1, 32, 3), + torch.nn.ReLU(inplace=True), # Inplace may harm kernel fusion + torch.nn.Conv2d(32, 128, 3, groups=32), # Depthwise is slower in PyTorch + torch.nn.ReLU(inplace=True), # Same as before + torch.nn.Conv2d(128, 250, 3), # Wrong output size for TensorCores + ) + + self.classifier = torch.nn.Sequential( + torch.nn.Linear(250, 64), # Wrong input size for TensorCores + torch.nn.ReLU(), # Fine, no info about this layer + torch.nn.Linear(64, 10), # Wrong output size for TensorCores + ) + + def forward(self, inputs): + convolved = torch.nn.AdaptiveAvgPool2d(1)(self.convolution(inputs)).flatten() + return self.classifier(convolved) + +# All you have to do +print(torchfunc.performance.tips(Model())) +``` + +## 2. 
Seeding, weight freezing and others
+
+- __Seed globally (including `numpy` and `cuda`), freeze weights, check inference time and model size:__
+
+```python
+# Inb4 MNIST, you can use any module with these functions
+model = torch.nn.Linear(784, 10)
+torchfunc.seed(0)
+frozen = torchfunc.module.freeze(model, bias=False)
+
+with torchfunc.Timer() as timer:
+ frozen(torch.randn(32, 784))
+ print(timer.checkpoint()) # Time since the beginning
+ frozen(torch.randn(128, 784))
+ print(timer.checkpoint()) # Since last checkpoint
+
+print(f"Overall time {timer}; Model size: {torchfunc.sizeof(frozen)}")
+```
+
+## 3. Record `torch.nn.Module` internal state
+
+- __Record and sum per-layer activation statistics as data passes through the network:__
+
+```python
+# Still MNIST, but any module can be put in its place
+model = torch.nn.Sequential(
+ torch.nn.Linear(784, 100),
+ torch.nn.ReLU(),
+ torch.nn.Linear(100, 50),
+ torch.nn.ReLU(),
+ torch.nn.Linear(50, 10),
+)
+# Recorder which sums all inputs to layers
+recorder = torchfunc.hooks.recorders.ForwardPre(reduction=lambda x, y: x+y)
+# Record only for torch.nn.Linear
+recorder.children(model, types=(torch.nn.Linear,))
+# Train your network normally (or pass data through it)
+...
+# Activations of all neurons of the first layer!
+print(recorder[1]) # You can also post-process this data easily with apply
+```
+
+For other examples (and how to use conditions), see the [documentation](https://szymonmaszke.github.io/torchfunc/)
+
+# :wrench: Installation
+
+## :snake: [pip](<https://pypi.org/project/torchfunc/>)
+
+### Latest release:
+
+```shell
+pip install --user torchfunc
+```
+
+### Nightly:
+
+```shell
+pip install --user torchfunc-nightly
+```
+
+## :whale2: [Docker](https://hub.docker.com/r/szymonmaszke/torchfunc)
+
+__CPU standalone__ and various versions of __GPU enabled__ images are available
+at [dockerhub](https://hub.docker.com/r/szymonmaszke/torchfunc/tags).
+
+For CPU quickstart, issue:
+
+```shell
+docker pull szymonmaszke/torchfunc:18.04
+```
+
+Nightly builds are also available, just prefix the tag with `nightly_`. If you are going for a `GPU` image, make sure you have
+[nvidia/docker](https://github.com/NVIDIA/nvidia-docker) installed and its runtime set.
+
+# :question: Contributing
+
+If you find any issue or you think some functionality may be useful to others and fits this library, please [open a new Issue](https://help.github.com/en/articles/creating-an-issue) or [create a Pull Request](https://help.github.com/en/articles/creating-a-pull-request-from-a-fork).
+
+To get an overview of things one can do to help this project, see the [Roadmap](https://github.com/szymonmaszke/torchfunc/blob/master/ROADMAP.md).
+
+
+
+
+%package help
+Summary: Development documents and examples for torchfunc-nightly
+Provides: python3-torchfunc-nightly-doc
+%description help
+<img align="left" width="256" height="256" src="https://github.com/szymonmaszke/torchfunc/blob/master/assets/logos/medium.png">
+
+* Improve and analyse performance of your neural network (e.g. Tensor Cores compatibility)
+* Record/analyse internal state of `torch.nn.Module` as data passes through it
+* Do the above based on external conditions (using single `Callable` to specify it)
+* Day-to-day neural network related duties (model size, seeding, time measurements etc.)
+* Get information about your host operating system, `torch.nn.Module` device, CUDA
+capabilities etc.
+ + +| Version | Docs | Tests | Coverage | Style | PyPI | Python | PyTorch | Docker | Roadmap | +|---------|------|-------|----------|-------|------|--------|---------|--------|---------| +| [](https://github.com/szymonmaszke/torchfunc/releases) | [](https://szymonmaszke.github.io/torchfunc/) |  |  | [](https://codebeat.co/projects/github-com-szymonmaszke-torchfunc-master) | [](https://pypi.org/project/torchfunc/) | [](https://www.python.org/) | [](https://pytorch.org/) | [](https://hub.docker.com/r/szymonmaszke/torchfunc) | [](https://github.com/szymonmaszke/torchfunc/blob/master/ROADMAP.md) | + +# :bulb: Examples + +__Check documentation here:__ [https://szymonmaszke.github.io/torchfunc](https://szymonmaszke.github.io/torchfunc) + +## 1. Getting performance tips + +- __Get instant performance tips about your module. All problems described by comments +will be shown by `torchfunc.performance.tips`:__ + +```python +class Model(torch.nn.Module): + def __init__(self): + super().__init__() + self.convolution = torch.nn.Sequential( + torch.nn.Conv2d(1, 32, 3), + torch.nn.ReLU(inplace=True), # Inplace may harm kernel fusion + torch.nn.Conv2d(32, 128, 3, groups=32), # Depthwise is slower in PyTorch + torch.nn.ReLU(inplace=True), # Same as before + torch.nn.Conv2d(128, 250, 3), # Wrong output size for TensorCores + ) + + self.classifier = torch.nn.Sequential( + torch.nn.Linear(250, 64), # Wrong input size for TensorCores + torch.nn.ReLU(), # Fine, no info about this layer + torch.nn.Linear(64, 10), # Wrong output size for TensorCores + ) + + def forward(self, inputs): + convolved = torch.nn.AdaptiveAvgPool2d(1)(self.convolution(inputs)).flatten() + return self.classifier(convolved) + +# All you have to do +print(torchfunc.performance.tips(Model())) +``` + +## 2. 
Seeding, weight freezing and others
+
+- __Seed globally (including `numpy` and `cuda`), freeze weights, check inference time and model size:__
+
+```python
+# Inb4 MNIST, you can use any module with these functions
+model = torch.nn.Linear(784, 10)
+torchfunc.seed(0)
+frozen = torchfunc.module.freeze(model, bias=False)
+
+with torchfunc.Timer() as timer:
+ frozen(torch.randn(32, 784))
+ print(timer.checkpoint()) # Time since the beginning
+ frozen(torch.randn(128, 784))
+ print(timer.checkpoint()) # Since last checkpoint
+
+print(f"Overall time {timer}; Model size: {torchfunc.sizeof(frozen)}")
+```
+
+## 3. Record `torch.nn.Module` internal state
+
+- __Record and sum per-layer activation statistics as data passes through the network:__
+
+```python
+# Still MNIST, but any module can be put in its place
+model = torch.nn.Sequential(
+ torch.nn.Linear(784, 100),
+ torch.nn.ReLU(),
+ torch.nn.Linear(100, 50),
+ torch.nn.ReLU(),
+ torch.nn.Linear(50, 10),
+)
+# Recorder which sums all inputs to layers
+recorder = torchfunc.hooks.recorders.ForwardPre(reduction=lambda x, y: x+y)
+# Record only for torch.nn.Linear
+recorder.children(model, types=(torch.nn.Linear,))
+# Train your network normally (or pass data through it)
+...
+# Activations of all neurons of the first layer!
+print(recorder[1]) # You can also post-process this data easily with apply
+```
+
+For other examples (and how to use conditions), see the [documentation](https://szymonmaszke.github.io/torchfunc/)
+
+# :wrench: Installation
+
+## :snake: [pip](<https://pypi.org/project/torchfunc/>)
+
+### Latest release:
+
+```shell
+pip install --user torchfunc
+```
+
+### Nightly:
+
+```shell
+pip install --user torchfunc-nightly
+```
+
+## :whale2: [Docker](https://hub.docker.com/r/szymonmaszke/torchfunc)
+
+__CPU standalone__ and various versions of __GPU enabled__ images are available
+at [dockerhub](https://hub.docker.com/r/szymonmaszke/torchfunc/tags).
+
+For CPU quickstart, issue:
+
+```shell
+docker pull szymonmaszke/torchfunc:18.04
+```
+
+Nightly builds are also available, just prefix the tag with `nightly_`. If you are going for a `GPU` image, make sure you have
+[nvidia/docker](https://github.com/NVIDIA/nvidia-docker) installed and its runtime set.
+
+# :question: Contributing
+
+If you find any issue or you think some functionality may be useful to others and fits this library, please [open a new Issue](https://help.github.com/en/articles/creating-an-issue) or [create a Pull Request](https://help.github.com/en/articles/creating-a-pull-request-from-a-fork).
+
+To get an overview of things one can do to help this project, see the [Roadmap](https://github.com/szymonmaszke/torchfunc/blob/master/ROADMAP.md).
+
+
+
+
+%prep
+%autosetup -n torchfunc-nightly-1663034600
+
+%build
+%py3_build
+
+%install
+%py3_install
+install -d -m755 %{buildroot}/%{_pkgdocdir}
+if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
+if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
+if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
+if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
+pushd %{buildroot}
+if [ -d usr/lib ]; then
+ find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/lib64 ]; then
+ find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/bin ]; then
+ find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/sbin ]; then
+ find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+touch doclist.lst
+if [ -d usr/share/man ]; then
+ find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
+fi
+popd
+mv %{buildroot}/filelist.lst .
+mv %{buildroot}/doclist.lst .
+
+%files -n python3-torchfunc-nightly -f filelist.lst
+%dir %{python3_sitelib}/*
+
+%files help -f doclist.lst
+%{_docdir}/*
+
+%changelog
+* Tue Apr 11 2023 Python_Bot <Python_Bot@openeuler.org> - 1663034600-1
+- Package Spec generated
