Diffstat (limited to 'python-bounded-pool-executor.spec')
 -rw-r--r--  python-bounded-pool-executor.spec | 258
 1 file changed, 258 insertions(+), 0 deletions(-)
diff --git a/python-bounded-pool-executor.spec b/python-bounded-pool-executor.spec
new file mode 100644
index 0000000..755d042
--- /dev/null
+++ b/python-bounded-pool-executor.spec
@@ -0,0 +1,258 @@
+%global _empty_manifest_terminate_build 0
+Name: python-bounded-pool-executor
+Version: 0.0.3
+Release: 1
+Summary: Bounded Process&Thread Pool Executor
+License: MIT
+URL: http://github.com/mowshon/bounded_pool_executor
+Source0: https://mirrors.nju.edu.cn/pypi/web/packages/23/f1/e34501c1228415e9fbcac8cb9c81098900e78331b30eeee1816176324bab/bounded_pool_executor-0.0.3.tar.gz
+BuildArch: noarch
+
+
+%description
+# Bounded Process&Thread Pool Executor
+BoundedSemaphore for [ProcessPoolExecutor](https://docs.python.org/3/library/concurrent.futures.html#processpoolexecutor) & [ThreadPoolExecutor](https://docs.python.org/3/library/concurrent.futures.html#threadpoolexecutor) from [concurrent.futures](https://docs.python.org/3/library/concurrent.futures.html)
+
+## Installation
+```bash
+pip install bounded-pool-executor
+```
+
+# What is the main problem?
+If you use the standard module "**concurrent.futures**" and want to process several million items at once, the queue of pending tasks will take up all the free memory.
+
+If the script is run on a low-memory VPS, this leads to a **memory leak**.
+
+
+
+## BoundedProcessPoolExecutor VS ProcessPoolExecutor
+
+# BoundedProcessPoolExecutor
+**BoundedProcessPoolExecutor** queues a new task only after another worker has finished its work.
+
+```python
+from bounded_pool_executor import BoundedProcessPoolExecutor
+from time import sleep
+from random import randint
+
+def do_job(num):
+    sleep_sec = randint(1, 10)
+    print('value: %d, sleep: %d sec.' % (num, sleep_sec))
+    sleep(sleep_sec)
+
+with BoundedProcessPoolExecutor(max_workers=5) as worker:
+    for num in range(10000):
+        print('#%d Worker initialization' % num)
+        worker.submit(do_job, num)
+
+```
+### Result:
+
+
+# Classic concurrent.futures.ProcessPoolExecutor
+**ProcessPoolExecutor** enqueues every submitted task immediately and works through them as workers become free, running at most `max_workers` at a time.
+
+```python
+import concurrent.futures
+from time import sleep
+from random import randint
+
+def do_job(num):
+    sleep_sec = randint(1, 3)
+    print('value: %d, sleep: %d sec.' % (num, sleep_sec))
+    sleep(sleep_sec)
+
+with concurrent.futures.ProcessPoolExecutor(max_workers=5) as worker:
+    for num in range(100000):
+        print('#%d Worker initialization' % num)
+        worker.submit(do_job, num)
+```
+
+### Result:
+
+
+
+
+%package -n python3-bounded-pool-executor
+Summary: Bounded Process&Thread Pool Executor
+Provides: python-bounded-pool-executor
+BuildRequires: python3-devel
+BuildRequires: python3-setuptools
+BuildRequires: python3-pip
+%description -n python3-bounded-pool-executor
+# Bounded Process&Thread Pool Executor
+BoundedSemaphore for [ProcessPoolExecutor](https://docs.python.org/3/library/concurrent.futures.html#processpoolexecutor) & [ThreadPoolExecutor](https://docs.python.org/3/library/concurrent.futures.html#threadpoolexecutor) from [concurrent.futures](https://docs.python.org/3/library/concurrent.futures.html)
+
+## Installation
+```bash
+pip install bounded-pool-executor
+```
+
+# What is the main problem?
+If you use the standard module "**concurrent.futures**" and want to process several million items at once, the queue of pending tasks will take up all the free memory.
+
+If the script is run on a low-memory VPS, this leads to a **memory leak**.
+
+
+
+## BoundedProcessPoolExecutor VS ProcessPoolExecutor
+
+# BoundedProcessPoolExecutor
+**BoundedProcessPoolExecutor** queues a new task only after another worker has finished its work.
+
+```python
+from bounded_pool_executor import BoundedProcessPoolExecutor
+from time import sleep
+from random import randint
+
+def do_job(num):
+    sleep_sec = randint(1, 10)
+    print('value: %d, sleep: %d sec.' % (num, sleep_sec))
+    sleep(sleep_sec)
+
+with BoundedProcessPoolExecutor(max_workers=5) as worker:
+    for num in range(10000):
+        print('#%d Worker initialization' % num)
+        worker.submit(do_job, num)
+
+```
+### Result:
+
+
+# Classic concurrent.futures.ProcessPoolExecutor
+**ProcessPoolExecutor** enqueues every submitted task immediately and works through them as workers become free, running at most `max_workers` at a time.
+
+```python
+import concurrent.futures
+from time import sleep
+from random import randint
+
+def do_job(num):
+    sleep_sec = randint(1, 3)
+    print('value: %d, sleep: %d sec.' % (num, sleep_sec))
+    sleep(sleep_sec)
+
+with concurrent.futures.ProcessPoolExecutor(max_workers=5) as worker:
+    for num in range(100000):
+        print('#%d Worker initialization' % num)
+        worker.submit(do_job, num)
+```
+
+### Result:
+
+
+
+
+%package help
+Summary: Development documents and examples for bounded-pool-executor
+Provides: python3-bounded-pool-executor-doc
+%description help
+# Bounded Process&Thread Pool Executor
+BoundedSemaphore for [ProcessPoolExecutor](https://docs.python.org/3/library/concurrent.futures.html#processpoolexecutor) & [ThreadPoolExecutor](https://docs.python.org/3/library/concurrent.futures.html#threadpoolexecutor) from [concurrent.futures](https://docs.python.org/3/library/concurrent.futures.html)
+
+## Installation
+```bash
+pip install bounded-pool-executor
+```
+
+# What is the main problem?
+If you use the standard module "**concurrent.futures**" and want to process several million items at once, the queue of pending tasks will take up all the free memory.
+
+If the script is run on a low-memory VPS, this leads to a **memory leak**.
+
+
+
+## BoundedProcessPoolExecutor VS ProcessPoolExecutor
+
+# BoundedProcessPoolExecutor
+**BoundedProcessPoolExecutor** queues a new task only after another worker has finished its work.
+
+```python
+from bounded_pool_executor import BoundedProcessPoolExecutor
+from time import sleep
+from random import randint
+
+def do_job(num):
+    sleep_sec = randint(1, 10)
+    print('value: %d, sleep: %d sec.' % (num, sleep_sec))
+    sleep(sleep_sec)
+
+with BoundedProcessPoolExecutor(max_workers=5) as worker:
+    for num in range(10000):
+        print('#%d Worker initialization' % num)
+        worker.submit(do_job, num)
+
+```
+### Result:
+
+
+# Classic concurrent.futures.ProcessPoolExecutor
+**ProcessPoolExecutor** enqueues every submitted task immediately and works through them as workers become free, running at most `max_workers` at a time.
+
+```python
+import concurrent.futures
+from time import sleep
+from random import randint
+
+def do_job(num):
+    sleep_sec = randint(1, 3)
+    print('value: %d, sleep: %d sec.' % (num, sleep_sec))
+    sleep(sleep_sec)
+
+with concurrent.futures.ProcessPoolExecutor(max_workers=5) as worker:
+    for num in range(100000):
+        print('#%d Worker initialization' % num)
+        worker.submit(do_job, num)
+```
+
+### Result:
+
+
+
+
+%prep
+%autosetup -n bounded-pool-executor-0.0.3
+
+%build
+%py3_build
+
+%install
+%py3_install
+install -d -m755 %{buildroot}/%{_pkgdocdir}
+if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
+if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
+if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
+if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
+pushd %{buildroot}
+if [ -d usr/lib ]; then
+    find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/lib64 ]; then
+    find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/bin ]; then
+    find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/sbin ]; then
+    find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+touch doclist.lst
+if [ -d usr/share/man ]; then
+    find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
+fi
+popd
+mv %{buildroot}/filelist.lst .
+mv %{buildroot}/doclist.lst .
+
+%files -n python3-bounded-pool-executor -f filelist.lst
+%dir %{python3_sitelib}/*
+
+%files help -f doclist.lst
+%{_docdir}/*
+
+%changelog
+* Tue Apr 11 2023 Python_Bot <Python_Bot@openeuler.org> - 0.0.3-1
+- Package Spec generated
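The bounding behaviour the packaged README describes — block `submit()` until a worker slot frees up — can be sketched with a `threading.BoundedSemaphore` wrapped around a standard executor. This is an illustration of the technique only; `BoundedExecutor` is a hypothetical name, not the library's actual class:

```python
from concurrent.futures import ThreadPoolExecutor
from threading import BoundedSemaphore

class BoundedExecutor:
    """Illustrative sketch: submit() blocks once max_workers tasks are in flight."""

    def __init__(self, max_workers):
        self._executor = ThreadPoolExecutor(max_workers=max_workers)
        self._semaphore = BoundedSemaphore(max_workers)

    def submit(self, fn, *args, **kwargs):
        self._semaphore.acquire()  # block instead of growing an unbounded queue
        try:
            future = self._executor.submit(fn, *args, **kwargs)
        except Exception:
            self._semaphore.release()
            raise
        # Free the slot as soon as the task completes (or fails).
        future.add_done_callback(lambda _: self._semaphore.release())
        return future

    def shutdown(self, wait=True):
        self._executor.shutdown(wait=wait)
```

Because `submit()` blocks, a loop over millions of items holds at most `max_workers` pending tasks in memory at any moment, which is the behaviour the README contrasts with the stock `ProcessPoolExecutor`.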
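The `%install` scriptlet in the spec builds `filelist.lst` by walking the staged buildroot with GNU `find -printf`. A standalone sketch of that step, using a throwaway directory in place of `%{buildroot}` (the demo path and package layout here are made up for the illustration):

```shell
# Stand-in for %{buildroot}: a staged tree containing one installed file.
demo=/tmp/buildroot-demo
rm -rf "$demo"
mkdir -p "$demo/usr/lib/python3/site-packages/bounded_pool_executor"
touch "$demo/usr/lib/python3/site-packages/bounded_pool_executor/__init__.py"

# Mirrors the spec: record every file under usr/lib as an absolute path.
cd "$demo"
find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
cat filelist.lst
```

`%h` expands to the file's directory and `%f` to its basename, so each entry comes out as an absolute path like `/usr/lib/python3/site-packages/bounded_pool_executor/__init__.py`, ready to be consumed by `%files -f filelist.lst`.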
