%global _empty_manifest_terminate_build 0
Name: python-cuvec
Version: 2.12.0
Release: 1
Summary: Unifying Python/C++/CUDA memory: Python buffered array -> C++11 `std::vector` -> CUDA managed memory
License: MPL-2.0
URL: https://pypi.org/project/cuvec/
Source0: https://mirrors.aliyun.com/pypi/web/packages/c2/a6/8e3f208d8357f908add52e31f14bd43e2bda6cc7b03c1a50d148b31516ba/cuvec-2.12.0.tar.gz
BuildArch: noarch

%description
Unifying Python/C++/CUDA memory: Python buffered array ↔ C++11 ``std::vector``
↔ CUDA managed memory.

|Version| |Downloads| |Py-Versions| |DOI| |Licence| |Tests| |Coverage|

Why
~~~

Data should be manipulated using the existing functionality and design
paradigms of each programming language. Python code should be Pythonic. CUDA
code should be... CUDActic? C code should be... er, Clean.

However, in practice converting between data formats across languages can be
a pain. Other libraries which expose functionality to convert/pass data
formats between these different language spaces tend to be bloated,
unnecessarily complex, and relatively unmaintainable. By comparison, ``cuvec``
uses the latest functionality of Python, C/C++11, and CUDA to keep its code
(and yours) as succinct as possible. "Native" containers are exposed so your
code follows the conventions of your language. Want something which works like
a ``numpy.ndarray``? Not a problem. Want to convert it to a ``std::vector``?
Or perhaps a raw ``float *`` to use in a CUDA kernel? Trivial.

- Less boilerplate code (fewer bugs, easier debugging, and faster prototyping)
- Fewer memory copies (faster execution)
- Lower memory usage (do more with less hardware)
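
A minimal usage sketch (an editor's illustration, not part of the upstream
description; it assumes the ``cuvec.zeros`` helper described in the project
documentation, and the shape/dtype values are purely illustrative)::

    import cuvec
    import numpy as np

    # Allocate a shared buffer: CUDA managed memory when built with CUDA,
    # plain C++ vector-backed memory otherwise.
    arr = cuvec.zeros((1337, 42), "float32")

    # On the Python side it behaves like a regular NumPy array...
    arr[0, :] = np.arange(42, dtype="float32")
    print(arr.shape, arr.dtype)

    # ...while the same underlying memory can be handed to C++/CUDA
    # extension functions without copying.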

%package -n python3-cuvec
Summary: Unifying Python/C++/CUDA memory: Python buffered array -> C++11 `std::vector` -> CUDA managed memory
Provides: python-cuvec
BuildRequires: python3-devel
BuildRequires: python3-setuptools
BuildRequires: python3-pip

%description -n python3-cuvec
Unifying Python/C++/CUDA memory: Python buffered array ↔ C++11 ``std::vector``
↔ CUDA managed memory.

|Version| |Downloads| |Py-Versions| |DOI| |Licence| |Tests| |Coverage|

Why
~~~

Data should be manipulated using the existing functionality and design
paradigms of each programming language. Python code should be Pythonic. CUDA
code should be... CUDActic? C code should be... er, Clean.

However, in practice converting between data formats across languages can be
a pain. Other libraries which expose functionality to convert/pass data
formats between these different language spaces tend to be bloated,
unnecessarily complex, and relatively unmaintainable. By comparison, ``cuvec``
uses the latest functionality of Python, C/C++11, and CUDA to keep its code
(and yours) as succinct as possible. "Native" containers are exposed so your
code follows the conventions of your language. Want something which works like
a ``numpy.ndarray``? Not a problem. Want to convert it to a ``std::vector``?
Or perhaps a raw ``float *`` to use in a CUDA kernel? Trivial.

- Less boilerplate code (fewer bugs, easier debugging, and faster prototyping)
- Fewer memory copies (faster execution)
- Lower memory usage (do more with less hardware)

%package help
Summary: Development documents and examples for cuvec
Provides: python3-cuvec-doc

%description help
Unifying Python/C++/CUDA memory: Python buffered array ↔ C++11 ``std::vector``
↔ CUDA managed memory.

|Version| |Downloads| |Py-Versions| |DOI| |Licence| |Tests| |Coverage|

Why
~~~

Data should be manipulated using the existing functionality and design
paradigms of each programming language. Python code should be Pythonic. CUDA
code should be... CUDActic? C code should be... er, Clean.

However, in practice converting between data formats across languages can be
a pain. Other libraries which expose functionality to convert/pass data
formats between these different language spaces tend to be bloated,
unnecessarily complex, and relatively unmaintainable. By comparison, ``cuvec``
uses the latest functionality of Python, C/C++11, and CUDA to keep its code
(and yours) as succinct as possible. "Native" containers are exposed so your
code follows the conventions of your language. Want something which works like
a ``numpy.ndarray``? Not a problem. Want to convert it to a ``std::vector``?
Or perhaps a raw ``float *`` to use in a CUDA kernel? Trivial.

- Less boilerplate code (fewer bugs, easier debugging, and faster prototyping)
- Fewer memory copies (faster execution)
- Lower memory usage (do more with less hardware)

%prep
%autosetup -n cuvec-2.12.0

%build
%py3_build

%install
%py3_install
# Ship any upstream documentation/example directories with the help subpackage.
install -d -m755 %{buildroot}/%{_pkgdocdir}
if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
# Build manifests of installed files and man pages for the file lists below.
pushd %{buildroot}
if [ -d usr/lib ]; then
	find usr/lib -type f -printf "\"/%h/%f\"\n" >> filelist.lst
fi
if [ -d usr/lib64 ]; then
	find usr/lib64 -type f -printf "\"/%h/%f\"\n" >> filelist.lst
fi
if [ -d usr/bin ]; then
	find usr/bin -type f -printf "\"/%h/%f\"\n" >> filelist.lst
fi
if [ -d usr/sbin ]; then
	find usr/sbin -type f -printf "\"/%h/%f\"\n" >> filelist.lst
fi
touch doclist.lst
if [ -d usr/share/man ]; then
	find usr/share/man -type f -printf "\"/%h/%f.gz\"\n" >> doclist.lst
fi
popd
mv %{buildroot}/filelist.lst .
mv %{buildroot}/doclist.lst .

%files -n python3-cuvec -f filelist.lst
%dir %{python3_sitelib}/*

%files help -f doclist.lst
%{_docdir}/*

%changelog
* Tue Jun 20 2023 Python_Bot - 2.12.0-1
- Package Spec generated