%global _empty_manifest_terminate_build 0
Name: python-atasker
Version: 0.7.9
Release: 1
Summary: Thread and multiprocessing pooling, task processing via asyncio
License: Apache License 2.0
URL: https://github.com/alttch/atasker
Source0: https://mirrors.nju.edu.cn/pypi/web/packages/c6/e5/c82778e0774af33a6dd476f03988c6317017f599320805217bce9d2514ec/atasker-0.7.9.tar.gz
BuildArch: noarch
%description
# atasker
Python library for modern thread / multiprocessing pooling and task processing
via asyncio.
Warning: **atasker** is not suitable for lightweight tasks in high-load
environments. For such projects it is highly recommended to use the lightweight
version: [neotasker](https://github.com/alttch/neotasker).
No matter how your code is written, atasker automatically detects blocking
functions and coroutines and launches them in the proper way: in a thread, in an
asynchronous loop or in a multiprocessing pool.
Tasks are grouped into pools. If there is no space in a pool, a task is placed
into a waiting queue according to its priority. A pool also has a "reserve" for
tasks with priority "normal" and higher. Tasks with "critical" priority are
always executed instantly.
This library is useful if you have a project with many similar tasks which
produce approximately equal CPU/memory load, e.g. API responses, scheduled
resource state updates, etc.
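As a quick illustration, here is a minimal sketch of submitting tasks with
different priorities (it assumes the task supervisor has already been started
and uses the TASK_NORMAL and TASK_CRITICAL priority constants):
```python
from atasker import background_task, TASK_NORMAL, TASK_CRITICAL

def normal_job():
    print('normal job finished')

def critical_job():
    print('critical job finished')

# queued according to priority when the pool is full
background_task(normal_job, priority=TASK_NORMAL)()

# "critical" tasks bypass the queue and start instantly
background_task(critical_job, priority=TASK_CRITICAL)()
```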
## Install
```bash
pip3 install atasker
```
Sources: https://github.com/alttch/atasker
Documentation: https://atasker.readthedocs.io/
## Why
* asynchronous programming is a perfect way to make your code fast and reliable
* multithreaded programming is a perfect way to run blocking code in the
background
**atasker** combines the advantages of both approaches: atasker tasks run in
separate threads, while the task supervisor and workers are completely
asynchronous, and all their public methods are thread-safe.
## Why not standard Python thread pool?
* threads in a standard pool don't have priorities
* workers
## Why not standard asyncio loops?
* compatibility with blocking functions
* async workers
## Why not concurrent.futures?
**concurrent.futures** is a great standard Python library which allows you to
execute specified tasks in a pool of workers.
For thread-based tasks, **atasker** extends
*concurrent.futures.ThreadPoolExecutor* functionality.
The **atasker** method *background_task* solves the same problem in a slightly
different way, adding priorities to the tasks, while *atasker* workers do a
completely different job:
* in *concurrent.futures*, a worker is a pool member which executes a single
specified task.
* in *atasker*, a worker is an object which continuously *generates* new tasks
at the specified interval or on an external event, and executes them in a
thread or multiprocessing pool.
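To make the contrast concrete, a minimal sketch follows (the ThreadPoolExecutor
half is plain standard-library usage; the atasker half mirrors the worker
examples further below):
```python
from concurrent.futures import ThreadPoolExecutor

from atasker import background_worker, task_supervisor

# concurrent.futures: a worker executes exactly one submitted task
with ThreadPoolExecutor(max_workers=2) as pool:
    future = pool.submit(print, 'a single specified task')
    future.result()

# atasker: a worker keeps generating its own tasks on a schedule
task_supervisor.start()

@background_worker(interval=5)
def poll(**kwargs):
    print('periodic task generated by the worker')

poll.start()
```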
## Code examples
### Start/stop
```python
from atasker import task_supervisor
# set pool size
task_supervisor.set_thread_pool(pool_size=20, reserve_normal=5, reserve_high=5)
task_supervisor.start()
# ...
# start workers, other threads etc.
# ...
# optionally block current thread
task_supervisor.block()
# stop from any thread
task_supervisor.stop()
```
### Background task
```python
from atasker import background_task, TASK_LOW, TASK_HIGH, wait_completed

# with decorator
@background_task
def mytask():
    print('I am working in the background!')
    return 777

task = mytask()

# optional
result = wait_completed(task)
print(task.result)  # 777
print(result)  # 777

# with manual decoration
def mytask2():
    print('I am working in the background too!')

task = background_task(mytask2, priority=TASK_HIGH)()
```
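Several background tasks can be launched and collected the same way; the sketch
below relies only on the single-task wait_completed() call shown above:
```python
from atasker import background_task, wait_completed

@background_task
def square(x):
    return x * x

# launch several tasks, then wait for each one in turn
tasks = [square(n) for n in range(5)]
results = [wait_completed(t) for t in tasks]
print(results)  # [0, 1, 4, 9, 16]
```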
### Async tasks
```python
import asyncio

from atasker import background_task, task_supervisor, wait_completed

# new asyncio loop is automatically created in its own thread
a1 = task_supervisor.create_aloop('myaloop', default=True)

async def calc(a):
    print(a)
    await asyncio.sleep(1)
    print(a * 2)
    return a * 3

# call from sync code
# put coroutine
task = background_task(calc)(1)
wait_completed(task)

# run coroutine and wait for result
result = a1.run(calc(1))
```
### Worker examples
```python
from atasker import background_worker, TASK_HIGH

@background_worker
def worker1(**kwargs):
    print('I am a simple background worker')

@background_worker
async def worker_async(**kwargs):
    print('I am async background worker')

@background_worker(interval=1)
def worker2(**kwargs):
    print('I run every second!')

@background_worker(queue=True)
def worker3(task, **kwargs):
    print('I run when there is a task in my queue')

@background_worker(event=True, priority=TASK_HIGH)
def worker4(**kwargs):
    print('I run when triggered with high priority')

worker1.start()
worker_async.start()
worker2.start()
worker3.start()
worker4.start()

worker3.put_threadsafe('todo1')
worker4.trigger_threadsafe()

from atasker import BackgroundIntervalWorker

class MyWorker(BackgroundIntervalWorker):
    def run(self, **kwargs):
        print('I am custom worker class')

worker5 = MyWorker(interval=0.1, name='worker5')
worker5.start()
```
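On shutdown, the usual order is the reverse of startup; a minimal sketch
(assuming the workers' stop() method described in the atasker documentation):
```python
from atasker import task_supervisor

# stop workers first, then the task supervisor
for w in (worker1, worker_async, worker2, worker3, worker4, worker5):
    w.stop()

task_supervisor.stop()
```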
%package -n python3-atasker
Summary: Thread and multiprocessing pooling, task processing via asyncio
Provides: python-atasker
BuildRequires: python3-devel
BuildRequires: python3-setuptools
BuildRequires: python3-pip
%description -n python3-atasker
# atasker
Python library for modern thread / multiprocessing pooling and task processing
via asyncio.
Warning: **atasker** is not suitable for lightweight tasks in high-load
environments. For such projects it is highly recommended to use the lightweight
version: [neotasker](https://github.com/alttch/neotasker).
No matter how your code is written, atasker automatically detects blocking
functions and coroutines and launches them in the proper way: in a thread, in an
asynchronous loop or in a multiprocessing pool.
Tasks are grouped into pools. If there is no space in a pool, a task is placed
into a waiting queue according to its priority. A pool also has a "reserve" for
tasks with priority "normal" and higher. Tasks with "critical" priority are
always executed instantly.
This library is useful if you have a project with many similar tasks which
produce approximately equal CPU/memory load, e.g. API responses, scheduled
resource state updates, etc.
## Install
```bash
pip3 install atasker
```
Sources: https://github.com/alttch/atasker
Documentation: https://atasker.readthedocs.io/
## Why
* asynchronous programming is a perfect way to make your code fast and reliable
* multithreaded programming is a perfect way to run blocking code in the
background
**atasker** combines the advantages of both approaches: atasker tasks run in
separate threads, while the task supervisor and workers are completely
asynchronous, and all their public methods are thread-safe.
## Why not standard Python thread pool?
* threads in a standard pool don't have priorities
* workers
## Why not standard asyncio loops?
* compatibility with blocking functions
* async workers
## Why not concurrent.futures?
**concurrent.futures** is a great standard Python library which allows you to
execute specified tasks in a pool of workers.
For thread-based tasks, **atasker** extends
*concurrent.futures.ThreadPoolExecutor* functionality.
The **atasker** method *background_task* solves the same problem in a slightly
different way, adding priorities to the tasks, while *atasker* workers do a
completely different job:
* in *concurrent.futures*, a worker is a pool member which executes a single
specified task.
* in *atasker*, a worker is an object which continuously *generates* new tasks
at the specified interval or on an external event, and executes them in a
thread or multiprocessing pool.
## Code examples
### Start/stop
```python
from atasker import task_supervisor
# set pool size
task_supervisor.set_thread_pool(pool_size=20, reserve_normal=5, reserve_high=5)
task_supervisor.start()
# ...
# start workers, other threads etc.
# ...
# optionally block current thread
task_supervisor.block()
# stop from any thread
task_supervisor.stop()
```
### Background task
```python
from atasker import background_task, TASK_LOW, TASK_HIGH, wait_completed

# with decorator
@background_task
def mytask():
    print('I am working in the background!')
    return 777

task = mytask()

# optional
result = wait_completed(task)
print(task.result)  # 777
print(result)  # 777

# with manual decoration
def mytask2():
    print('I am working in the background too!')

task = background_task(mytask2, priority=TASK_HIGH)()
```
### Async tasks
```python
import asyncio

from atasker import background_task, task_supervisor, wait_completed

# new asyncio loop is automatically created in its own thread
a1 = task_supervisor.create_aloop('myaloop', default=True)

async def calc(a):
    print(a)
    await asyncio.sleep(1)
    print(a * 2)
    return a * 3

# call from sync code
# put coroutine
task = background_task(calc)(1)
wait_completed(task)

# run coroutine and wait for result
result = a1.run(calc(1))
```
### Worker examples
```python
from atasker import background_worker, TASK_HIGH

@background_worker
def worker1(**kwargs):
    print('I am a simple background worker')

@background_worker
async def worker_async(**kwargs):
    print('I am async background worker')

@background_worker(interval=1)
def worker2(**kwargs):
    print('I run every second!')

@background_worker(queue=True)
def worker3(task, **kwargs):
    print('I run when there is a task in my queue')

@background_worker(event=True, priority=TASK_HIGH)
def worker4(**kwargs):
    print('I run when triggered with high priority')

worker1.start()
worker_async.start()
worker2.start()
worker3.start()
worker4.start()

worker3.put_threadsafe('todo1')
worker4.trigger_threadsafe()

from atasker import BackgroundIntervalWorker

class MyWorker(BackgroundIntervalWorker):
    def run(self, **kwargs):
        print('I am custom worker class')

worker5 = MyWorker(interval=0.1, name='worker5')
worker5.start()
```
%package help
Summary: Development documents and examples for atasker
Provides: python3-atasker-doc
%description help
# atasker
Python library for modern thread / multiprocessing pooling and task processing
via asyncio.
Warning: **atasker** is not suitable for lightweight tasks in high-load
environments. For such projects it is highly recommended to use the lightweight
version: [neotasker](https://github.com/alttch/neotasker).
No matter how your code is written, atasker automatically detects blocking
functions and coroutines and launches them in the proper way: in a thread, in an
asynchronous loop or in a multiprocessing pool.
Tasks are grouped into pools. If there is no space in a pool, a task is placed
into a waiting queue according to its priority. A pool also has a "reserve" for
tasks with priority "normal" and higher. Tasks with "critical" priority are
always executed instantly.
This library is useful if you have a project with many similar tasks which
produce approximately equal CPU/memory load, e.g. API responses, scheduled
resource state updates, etc.
## Install
```bash
pip3 install atasker
```
Sources: https://github.com/alttch/atasker
Documentation: https://atasker.readthedocs.io/
## Why
* asynchronous programming is a perfect way to make your code fast and reliable
* multithreaded programming is a perfect way to run blocking code in the
background
**atasker** combines the advantages of both approaches: atasker tasks run in
separate threads, while the task supervisor and workers are completely
asynchronous, and all their public methods are thread-safe.
## Why not standard Python thread pool?
* threads in a standard pool don't have priorities
* workers
## Why not standard asyncio loops?
* compatibility with blocking functions
* async workers
## Why not concurrent.futures?
**concurrent.futures** is a great standard Python library which allows you to
execute specified tasks in a pool of workers.
For thread-based tasks, **atasker** extends
*concurrent.futures.ThreadPoolExecutor* functionality.
The **atasker** method *background_task* solves the same problem in a slightly
different way, adding priorities to the tasks, while *atasker* workers do a
completely different job:
* in *concurrent.futures*, a worker is a pool member which executes a single
specified task.
* in *atasker*, a worker is an object which continuously *generates* new tasks
at the specified interval or on an external event, and executes them in a
thread or multiprocessing pool.
## Code examples
### Start/stop
```python
from atasker import task_supervisor
# set pool size
task_supervisor.set_thread_pool(pool_size=20, reserve_normal=5, reserve_high=5)
task_supervisor.start()
# ...
# start workers, other threads etc.
# ...
# optionally block current thread
task_supervisor.block()
# stop from any thread
task_supervisor.stop()
```
### Background task
```python
from atasker import background_task, TASK_LOW, TASK_HIGH, wait_completed

# with decorator
@background_task
def mytask():
    print('I am working in the background!')
    return 777

task = mytask()

# optional
result = wait_completed(task)
print(task.result)  # 777
print(result)  # 777

# with manual decoration
def mytask2():
    print('I am working in the background too!')

task = background_task(mytask2, priority=TASK_HIGH)()
```
### Async tasks
```python
import asyncio

from atasker import background_task, task_supervisor, wait_completed

# new asyncio loop is automatically created in its own thread
a1 = task_supervisor.create_aloop('myaloop', default=True)

async def calc(a):
    print(a)
    await asyncio.sleep(1)
    print(a * 2)
    return a * 3

# call from sync code
# put coroutine
task = background_task(calc)(1)
wait_completed(task)

# run coroutine and wait for result
result = a1.run(calc(1))
```
### Worker examples
```python
from atasker import background_worker, TASK_HIGH

@background_worker
def worker1(**kwargs):
    print('I am a simple background worker')

@background_worker
async def worker_async(**kwargs):
    print('I am async background worker')

@background_worker(interval=1)
def worker2(**kwargs):
    print('I run every second!')

@background_worker(queue=True)
def worker3(task, **kwargs):
    print('I run when there is a task in my queue')

@background_worker(event=True, priority=TASK_HIGH)
def worker4(**kwargs):
    print('I run when triggered with high priority')

worker1.start()
worker_async.start()
worker2.start()
worker3.start()
worker4.start()

worker3.put_threadsafe('todo1')
worker4.trigger_threadsafe()

from atasker import BackgroundIntervalWorker

class MyWorker(BackgroundIntervalWorker):
    def run(self, **kwargs):
        print('I am custom worker class')

worker5 = MyWorker(interval=0.1, name='worker5')
worker5.start()
```
%prep
%autosetup -n atasker-0.7.9
%build
%py3_build
%install
%py3_install
install -d -m755 %{buildroot}/%{_pkgdocdir}
if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
pushd %{buildroot}
if [ -d usr/lib ]; then
find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/lib64 ]; then
find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/bin ]; then
find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/sbin ]; then
find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
fi
touch doclist.lst
if [ -d usr/share/man ]; then
find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
fi
popd
mv %{buildroot}/filelist.lst .
mv %{buildroot}/doclist.lst .
%files -n python3-atasker -f filelist.lst
%dir %{python3_sitelib}/*
%files help -f doclist.lst
%{_docdir}/*
%changelog
* Tue May 30 2023 Python_Bot - 0.7.9-1
- Package Spec generated