%global _empty_manifest_terminate_build 0
Name:		python-pytorch-memlab
Version:	0.2.4
Release:	1
Summary:	A lab to do simple and accurate memory experiments on pytorch
License:	MIT
URL:		https://github.com/Stonesjtu/pytorch_memlab
Source0:	https://mirrors.nju.edu.cn/pypi/web/packages/ca/a9/0554fae8883b2a646720e0e1d84d1bdef57e90ee3c244686f589052c3e0a/pytorch_memlab-0.2.4.tar.gz
BuildArch:	noarch

%description
[![Build Status](https://travis-ci.com/Stonesjtu/pytorch_memlab.svg?token=vyTdxHbi1PCRzV6disHp&branch=master)](https://travis-ci.com/Stonesjtu/pytorch_memlab)
![PyPI](https://img.shields.io/pypi/v/pytorch_memlab.svg)
[![Language grade: Python](https://img.shields.io/lgtm/grade/python/g/Stonesjtu/pytorch_memlab.svg?logo=lgtm&logoWidth=18)](https://lgtm.com/projects/g/Stonesjtu/pytorch_memlab/context:python)
![PyPI - Downloads](https://img.shields.io/pypi/dm/pytorch_memlab.svg)

A simple and accurate **CUDA** memory management laboratory for pytorch. It consists of several memory-related features:

- Memory Profiler: a `line_profiler`-style CUDA memory profiler with a simple API.
- Memory Reporter: a reporter to inspect tensors occupying CUDA memory.
- Courtesy: a feature to temporarily move all CUDA tensors into CPU memory as a courtesy to other GPU users, and to transfer them back when needed.
- IPython support through the `%mlrun`/`%%mlrun` line/cell magic commands.

Table of Contents:

* [Installation](#installation)
* [User-Doc](#user-doc)
  + [Memory Profiler](#memory-profiler)
  + [IPython support](#ipython-support)
  + [Memory Reporter](#memory-reporter)
  + [Courtesy](#courtesy)
  + [ACK](#ack)
* [CHANGES](#changes)
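
The feature list above maps onto a small Python API. The following is a minimal usage sketch, assuming the `profile` decorator and `MemReporter` class exported by the upstream package and an available CUDA device; consult the project README for the authoritative examples.

```python
# Minimal sketch (assumes the upstream `profile` decorator and `MemReporter`
# class, and that a CUDA device is available).
import torch
from pytorch_memlab import profile, MemReporter

@profile
def work():
    # line-by-line CUDA memory usage of this function is recorded by the profiler
    linear = torch.nn.Linear(1024, 1024).cuda()
    out = linear(torch.randn(64, 1024).cuda())
    return out.sum()

work()

# inspect which tensors currently occupy CUDA memory
reporter = MemReporter()
reporter.report()
```

In IPython/Jupyter, the same profiler is typically driven through the `%mlrun`/`%%mlrun` magics after loading the extension (`%load_ext pytorch_memlab`).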

%package -n python3-pytorch-memlab
Summary:	A lab to do simple and accurate memory experiments on pytorch
Provides:	python-pytorch-memlab
BuildRequires:	python3-devel
BuildRequires:	python3-setuptools
BuildRequires:	python3-pip

%description -n python3-pytorch-memlab
[![Build Status](https://travis-ci.com/Stonesjtu/pytorch_memlab.svg?token=vyTdxHbi1PCRzV6disHp&branch=master)](https://travis-ci.com/Stonesjtu/pytorch_memlab)
![PyPI](https://img.shields.io/pypi/v/pytorch_memlab.svg)
[![Language grade: Python](https://img.shields.io/lgtm/grade/python/g/Stonesjtu/pytorch_memlab.svg?logo=lgtm&logoWidth=18)](https://lgtm.com/projects/g/Stonesjtu/pytorch_memlab/context:python)
![PyPI - Downloads](https://img.shields.io/pypi/dm/pytorch_memlab.svg)

A simple and accurate **CUDA** memory management laboratory for pytorch. It consists of several memory-related features:

- Memory Profiler: a `line_profiler`-style CUDA memory profiler with a simple API.
- Memory Reporter: a reporter to inspect tensors occupying CUDA memory.
- Courtesy: a feature to temporarily move all CUDA tensors into CPU memory as a courtesy to other GPU users, and to transfer them back when needed.
- IPython support through the `%mlrun`/`%%mlrun` line/cell magic commands.

Table of Contents:

* [Installation](#installation)
* [User-Doc](#user-doc)
  + [Memory Profiler](#memory-profiler)
  + [IPython support](#ipython-support)
  + [Memory Reporter](#memory-reporter)
  + [Courtesy](#courtesy)
  + [ACK](#ack)
* [CHANGES](#changes)

%package help
Summary:	Development documents and examples for pytorch-memlab
Provides:	python3-pytorch-memlab-doc

%description help
[![Build Status](https://travis-ci.com/Stonesjtu/pytorch_memlab.svg?token=vyTdxHbi1PCRzV6disHp&branch=master)](https://travis-ci.com/Stonesjtu/pytorch_memlab)
![PyPI](https://img.shields.io/pypi/v/pytorch_memlab.svg)
[![Language grade: Python](https://img.shields.io/lgtm/grade/python/g/Stonesjtu/pytorch_memlab.svg?logo=lgtm&logoWidth=18)](https://lgtm.com/projects/g/Stonesjtu/pytorch_memlab/context:python)
![PyPI - Downloads](https://img.shields.io/pypi/dm/pytorch_memlab.svg)

A simple and accurate **CUDA** memory management laboratory for pytorch. It consists of several memory-related features:

- Memory Profiler: a `line_profiler`-style CUDA memory profiler with a simple API.
- Memory Reporter: a reporter to inspect tensors occupying CUDA memory.
- Courtesy: a feature to temporarily move all CUDA tensors into CPU memory as a courtesy to other GPU users, and to transfer them back when needed.
- IPython support through the `%mlrun`/`%%mlrun` line/cell magic commands.

Table of Contents:

* [Installation](#installation)
* [User-Doc](#user-doc)
  + [Memory Profiler](#memory-profiler)
  + [IPython support](#ipython-support)
  + [Memory Reporter](#memory-reporter)
  + [Courtesy](#courtesy)
  + [ACK](#ack)
* [CHANGES](#changes)

%prep
%autosetup -n pytorch_memlab-0.2.4

%build
%py3_build

%install
%py3_install
install -d -m755 %{buildroot}/%{_pkgdocdir}
if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
# Collect the installed files into filelist.lst / doclist.lst for the files sections below
pushd %{buildroot}
if [ -d usr/lib ]; then
	find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/lib64 ]; then
	find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/bin ]; then
	find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/sbin ]; then
	find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
fi
touch doclist.lst
if [ -d usr/share/man ]; then
	find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
fi
popd
mv %{buildroot}/filelist.lst .
mv %{buildroot}/doclist.lst .

%files -n python3-pytorch-memlab -f filelist.lst
%dir %{python3_sitelib}/*

%files help -f doclist.lst
%{_docdir}/*

%changelog
* Wed Apr 12 2023 Python_Bot - 0.2.4-1
- Package Spec generated