%global _empty_manifest_terminate_build 0
Name: python-dispy
Version: 4.15.2
Release: 1
Summary: Distributed and Parallel Computing with/for Python
License: Apache 2.0
URL: https://dispy.org
Source0: https://mirrors.aliyun.com/pypi/web/packages/59/88/b2bd984a81db9ba0d73a47645ded9da8e0bcfddc231644f9017668aeabcc/dispy-4.15.2.tar.gz
BuildArch: noarch

%description
* dispy is implemented with pycos, an independent framework for asynchronous, concurrent, distributed, network programming with tasks (without threads). pycos uses non-blocking sockets with the I/O notification mechanisms epoll, kqueue and poll, and Windows I/O Completion Ports (IOCP) for high performance and scalability, so dispy works efficiently with a single node or large cluster(s) of nodes. pycos itself has support for distributed/parallel computing, including transferring computations, files etc., and message passing (for communicating with the client and other computation tasks). While dispy can be used to schedule jobs of a computation to get the results, pycos can be used to create distributed communicating processes, for a broad range of use cases.
* Computations (Python functions or standalone programs) and their dependencies (files, Python functions, classes, modules) are distributed automatically.
* Computation nodes can be anywhere on the network (local or remote). For security, either simple hash-based authentication or SSL encryption can be used.
* After each execution is finished, the results of execution, output, errors and exception trace are made available for further processing.
* Nodes may become available dynamically: dispy will schedule jobs whenever a node is available and computations can use that node.
* If a callback function is provided, dispy executes that function when a job is finished; this can be used for processing job results as they become available.
* Client-side and server-side fault recovery are supported: If the user program (client) terminates unexpectedly (e.g., due to an uncaught exception), the nodes continue to execute scheduled jobs. If the client-side fault recovery option is used when creating a cluster, the results of the scheduled (but unfinished at the time of the crash) jobs for that cluster can be retrieved later. If a computation is marked reentrant when a cluster is created and a node (server) executing jobs for that computation fails, dispy automatically resubmits those jobs to other available nodes.
* dispy can be used in a single process to use all the nodes exclusively (with ``JobCluster``, which is simpler to use) or in multiple processes simultaneously sharing the nodes (with ``SharedJobCluster`` and the *dispyscheduler* program).
* Clusters can be monitored and managed with a web browser.

%package -n python3-dispy
Summary: Distributed and Parallel Computing with/for Python
Provides: python-dispy
BuildRequires: python3-devel
BuildRequires: python3-setuptools
BuildRequires: python3-pip

%description -n python3-dispy
* dispy is implemented with pycos, an independent framework for asynchronous, concurrent, distributed, network programming with tasks (without threads). pycos uses non-blocking sockets with the I/O notification mechanisms epoll, kqueue and poll, and Windows I/O Completion Ports (IOCP) for high performance and scalability, so dispy works efficiently with a single node or large cluster(s) of nodes. pycos itself has support for distributed/parallel computing, including transferring computations, files etc., and message passing (for communicating with the client and other computation tasks).
  While dispy can be used to schedule jobs of a computation to get the results, pycos can be used to create distributed communicating processes, for a broad range of use cases.
* Computations (Python functions or standalone programs) and their dependencies (files, Python functions, classes, modules) are distributed automatically.
* Computation nodes can be anywhere on the network (local or remote). For security, either simple hash-based authentication or SSL encryption can be used.
* After each execution is finished, the results of execution, output, errors and exception trace are made available for further processing.
* Nodes may become available dynamically: dispy will schedule jobs whenever a node is available and computations can use that node.
* If a callback function is provided, dispy executes that function when a job is finished; this can be used for processing job results as they become available.
* Client-side and server-side fault recovery are supported: If the user program (client) terminates unexpectedly (e.g., due to an uncaught exception), the nodes continue to execute scheduled jobs. If the client-side fault recovery option is used when creating a cluster, the results of the scheduled (but unfinished at the time of the crash) jobs for that cluster can be retrieved later. If a computation is marked reentrant when a cluster is created and a node (server) executing jobs for that computation fails, dispy automatically resubmits those jobs to other available nodes.
* dispy can be used in a single process to use all the nodes exclusively (with ``JobCluster``, which is simpler to use) or in multiple processes simultaneously sharing the nodes (with ``SharedJobCluster`` and the *dispyscheduler* program).
* Clusters can be monitored and managed with a web browser.

%package help
Summary: Development documents and examples for dispy
Provides: python3-dispy-doc

%description help
* dispy is implemented with pycos, an independent framework for asynchronous, concurrent, distributed, network programming with tasks (without threads). pycos uses non-blocking sockets with the I/O notification mechanisms epoll, kqueue and poll, and Windows I/O Completion Ports (IOCP) for high performance and scalability, so dispy works efficiently with a single node or large cluster(s) of nodes. pycos itself has support for distributed/parallel computing, including transferring computations, files etc., and message passing (for communicating with the client and other computation tasks). While dispy can be used to schedule jobs of a computation to get the results, pycos can be used to create distributed communicating processes, for a broad range of use cases.
* Computations (Python functions or standalone programs) and their dependencies (files, Python functions, classes, modules) are distributed automatically.
* Computation nodes can be anywhere on the network (local or remote). For security, either simple hash-based authentication or SSL encryption can be used.
* After each execution is finished, the results of execution, output, errors and exception trace are made available for further processing.
* Nodes may become available dynamically: dispy will schedule jobs whenever a node is available and computations can use that node.
* If a callback function is provided, dispy executes that function when a job is finished; this can be used for processing job results as they become available (a minimal usage sketch follows below).
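  The following is a minimal, illustrative sketch of the basic ``JobCluster`` submit/wait pattern, resembling the example in dispy's documentation; it is not shipped with or verified by this package build, and the ``compute`` function and its arguments are placeholders::

    # Assumes 'dispynode' is running on at least one machine on the network.
    import dispy

    def compute(n):
        # Executed on a remote dispynode; keep required imports inside the function.
        import time
        time.sleep(n)
        return n * n

    if __name__ == '__main__':
        cluster = dispy.JobCluster(compute)   # distributes 'compute' and its dependencies
        jobs = []
        for n in range(5):
            job = cluster.submit(n)           # schedule one job per argument
            job.id = n                        # optional user-visible identifier
            jobs.append(job)
        for job in jobs:
            print(job.id, job())              # job() waits for the job and returns its result
        cluster.close()
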
* Client-side and server-side fault recovery are supported: If the user program (client) terminates unexpectedly (e.g., due to an uncaught exception), the nodes continue to execute scheduled jobs. If the client-side fault recovery option is used when creating a cluster, the results of the scheduled (but unfinished at the time of the crash) jobs for that cluster can be retrieved later. If a computation is marked reentrant when a cluster is created and a node (server) executing jobs for that computation fails, dispy automatically resubmits those jobs to other available nodes.
* dispy can be used in a single process to use all the nodes exclusively (with ``JobCluster``, which is simpler to use) or in multiple processes simultaneously sharing the nodes (with ``SharedJobCluster`` and the *dispyscheduler* program).
* Clusters can be monitored and managed with a web browser.

%prep
%autosetup -n dispy-4.15.2

%build
%py3_build

%install
%py3_install
install -d -m755 %{buildroot}/%{_pkgdocdir}
if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
pushd %{buildroot}
if [ -d usr/lib ]; then
	find usr/lib -type f -printf "\"/%h/%f\"\n" >> filelist.lst
fi
if [ -d usr/lib64 ]; then
	find usr/lib64 -type f -printf "\"/%h/%f\"\n" >> filelist.lst
fi
if [ -d usr/bin ]; then
	find usr/bin -type f -printf "\"/%h/%f\"\n" >> filelist.lst
fi
if [ -d usr/sbin ]; then
	find usr/sbin -type f -printf "\"/%h/%f\"\n" >> filelist.lst
fi
touch doclist.lst
if [ -d usr/share/man ]; then
	find usr/share/man -type f -printf "\"/%h/%f.gz\"\n" >> doclist.lst
fi
popd
mv %{buildroot}/filelist.lst .
mv %{buildroot}/doclist.lst .

%files -n python3-dispy -f filelist.lst
%dir %{python3_sitelib}/*

%files help -f doclist.lst
%{_docdir}/*

%changelog
* Thu Jun 08 2023 Python_Bot - 4.15.2-1
- Package Spec generated