%global _empty_manifest_terminate_build 0

Name:           python-simple-slurm
Version:        0.2.6
Release:        1
Summary:        A simple Python wrapper for Slurm with flexibility in mind.
License:        GNU Affero General Public License v3
URL:            https://github.com/amq92/simple_slurm
Source0:        https://mirrors.nju.edu.cn/pypi/web/packages/00/65/d8cace7abf6a41ac4ae451736644898e786c90515acf31d23a292d0edc79/simple_slurm-0.2.6.tar.gz
BuildArch:      noarch

%description
A simple Python wrapper for Slurm with flexibility in mind
```python
import datetime

from simple_slurm import Slurm

slurm = Slurm(
    array=range(3, 12),
    cpus_per_task=15,
    dependency=dict(after=65541, afterok=34987),
    gres=['gpu:kepler:2', 'gpu:tesla:2', 'mps:400'],
    ignore_pbs=True,
    job_name='name',
    output=f'{Slurm.JOB_ARRAY_MASTER_ID}_{Slurm.JOB_ARRAY_ID}.out',
    time=datetime.timedelta(days=1, hours=2, minutes=3, seconds=4),
)
slurm.sbatch('python demo.py ' + Slurm.SLURM_ARRAY_TASK_ID)
```

The above snippet is equivalent to running the following command:

```bash
sbatch << EOF
#!/bin/sh

#SBATCH --array 3-11
#SBATCH --cpus-per-task 15
#SBATCH --dependency after:65541,afterok:34987
#SBATCH --gres gpu:kepler:2,gpu:tesla:2,mps:400
#SBATCH --ignore-pbs
#SBATCH --job-name name
#SBATCH --output %A_%a.out
#SBATCH --time 1-02:03:04

python demo.py \$SLURM_ARRAY_TASK_ID

EOF
```

## Contents
+ [Introduction](#introduction)
+ [Installation instructions](#installation-instructions)
+ [Many syntaxes available](#many-syntaxes-available)
    - [Using configuration files](#using-configuration-files)
    - [Using the command line](#using-the-command-line)
+ [Job dependencies](#job-dependencies)
+ [Additional features](#additional-features)
    - [Filename Patterns](#filename-patterns)
    - [Output Environment Variables](#output-environment-variables)

## Introduction

The [`sbatch`](https://slurm.schedmd.com/sbatch.html) and [`srun`](https://slurm.schedmd.com/srun.html) commands in [Slurm](https://slurm.schedmd.com/overview.html) allow submitting parallel jobs to a Linux cluster in the form of batch scripts that follow a certain structure.

The goal of this library is to provide a simple wrapper for these functions (`sbatch` and `srun`) so that Python code can be used to construct and launch the aforementioned batch scripts.
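For instance, the `time` option in the opening snippet is given as a `datetime.timedelta` and rendered as `1-02:03:04` in the generated script. The conversion can be sketched with a hypothetical helper (an illustration only, not the library's actual code):

```python
import datetime


def slurm_time(delta: datetime.timedelta) -> str:
    """Render a timedelta in Slurm's D-HH:MM:SS time format."""
    minutes, seconds = divmod(int(delta.total_seconds()), 60)
    hours, minutes = divmod(minutes, 60)
    days, hours = divmod(hours, 24)
    return f'{days}-{hours:02d}:{minutes:02d}:{seconds:02d}'


print(slurm_time(datetime.timedelta(days=1, hours=2, minutes=3, seconds=4)))
# prints: 1-02:03:04
```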
Indeed, the generated batch script can be shown by printing the `Slurm` object:

```python
from simple_slurm import Slurm

slurm = Slurm(array=range(3, 12), job_name='name')
print(slurm)
```

```bash
>> #!/bin/sh
>>
>> #SBATCH --array 3-11
>> #SBATCH --job-name name
```

Then, the job can be launched with either command:

```python
slurm.srun('echo hello!')
slurm.sbatch('echo hello!')
```

```bash
>> Submitted batch job 34987
```

While both commands are quite similar, [`srun`](https://slurm.schedmd.com/srun.html) will wait for the job to complete, while [`sbatch`](https://slurm.schedmd.com/sbatch.html) will launch the job and disconnect from it.

> More information can be found in [Slurm's Quick Start Guide](https://slurm.schedmd.com/quickstart.html) and [here](https://stackoverflow.com/questions/43767866/slurm-srun-vs-sbatch-and-their-parameters).

## Installation instructions

From PyPI

```bash
pip install simple_slurm
```

From Conda

```bash
conda install -c conda-forge simple_slurm
```

From git

```bash
pip install git+https://github.com/amq92/simple_slurm.git
```

## Many syntaxes available

```python
slurm = Slurm('-a', '3-11')
slurm = Slurm('--array', '3-11')
slurm = Slurm('array', '3-11')
slurm = Slurm(array='3-11')
slurm = Slurm(array=range(3, 12))
slurm.add_arguments(array=range(3, 12))
slurm.set_array(range(3, 12))
```

All these arguments are equivalent! It's up to you to choose the one(s) that best suit your needs.

> *"With great flexibility comes great responsibility"*

You can either keep a command-line-like syntax or a more Python-like one:

```python
slurm = Slurm()
slurm.set_dependency('after:65541,afterok:34987')
slurm.set_dependency(['after:65541', 'afterok:34987'])
slurm.set_dependency(dict(after=65541, afterok=34987))
```

All the possible arguments have their own setter methods (e.g. `set_array`, `set_dependency`, `set_job_name`).
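To make the equivalence of the three dependency spellings concrete, here is a small sketch (an illustration only, not the library's implementation) that normalizes all of them to the single-string form Slurm expects:

```python
def format_dependency(dep) -> str:
    """Normalize a dependency given as a string, a dict, or an
    iterable of 'kind:job_id' strings into one comma-joined string."""
    if isinstance(dep, str):
        return dep
    if isinstance(dep, dict):
        return ','.join(f'{kind}:{job}' for kind, job in dep.items())
    return ','.join(dep)  # assume an iterable of 'kind:job_id' strings


# All three spellings collapse to the same Slurm argument:
print(format_dependency(dict(after=65541, afterok=34987)))
print(format_dependency(['after:65541', 'afterok:34987']))
print(format_dependency('after:65541,afterok:34987'))
# each prints: after:65541,afterok:34987
```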
Please note that hyphenated arguments, such as `--job-name`, need to be underscored (so as to comply with Python syntax and remain coherent):

```python
slurm = Slurm('--job_name', 'name')
slurm = Slurm(job_name='name')

# slurm = Slurm('--job-name', 'name')  # NOT VALID
# slurm = Slurm(job-name='name')       # NOT VALID
```

Moreover, boolean arguments such as `--contiguous`, `--ignore_pbs` or `--overcommit` can be activated with `True` or an empty string:

```python
slurm = Slurm('--contiguous', True)
slurm.add_arguments(ignore_pbs='')
slurm.set_wait(False)
print(slurm)
```

```bash
#!/bin/sh

#SBATCH --contiguous
#SBATCH --ignore-pbs
```

### Using configuration files

Let's define the *static* components of a job definition in a YAML file `default.slurm`:

```yaml
cpus_per_task: 15
job_name: 'name'
output: '%A_%a.out'
```

Including these options using the `yaml` package is very *simple*:

```python
import yaml

from simple_slurm import Slurm

slurm = Slurm(**yaml.safe_load(open('default.slurm')))

...

slurm.set_array(range(NUMBER_OF_SIMULATIONS))
```

The job can then be updated according to the *dynamic* needs of the project (e.g. `NUMBER_OF_SIMULATIONS`).

### Using the command line

For simpler dispatch jobs, a command line entry point is also available:

```bash
simple_slurm [OPTIONS] "COMMAND_TO_RUN_WITH_SBATCH"
```

As such, both of these `python` and `bash` calls are equivalent.
```python
slurm = Slurm(partition='compute.p', output='slurm.log', ignore_pbs=True)
slurm.sbatch('echo \$HOSTNAME')
```

```bash
simple_slurm --partition=compute.p --output slurm.log --ignore_pbs "echo \$HOSTNAME"
```

## Job dependencies

The `sbatch` call prints a message if successful and returns the corresponding `job_id`:

```python
job_id = slurm.sbatch('python demo.py ' + Slurm.SLURM_ARRAY_TASK_ID)
```

If the job submission was successful, it prints:

```
Submitted batch job 34987
```

And returns the variable `job_id = 34987`, which can be used for setting dependencies on subsequent jobs:

```python
slurm_after = Slurm(dependency=dict(afterok=job_id))
```

## Additional features

For convenience, Filename Patterns and Output Environment Variables are available as attributes of the Simple Slurm object. See [https://slurm.schedmd.com/sbatch.html](https://slurm.schedmd.com/sbatch.html#lbAH) for details on these options.

```python
from simple_slurm import Slurm

slurm = Slurm(output='{}_{}.out'.format(
    Slurm.JOB_ARRAY_MASTER_ID, Slurm.JOB_ARRAY_ID))
slurm.sbatch('python demo.py ' + slurm.SLURM_ARRAY_JOB_ID)
```

This example would result in output files of the form `65541_15.out`. Here the job submission ID is `65541`, and this output file corresponds to submission number `15` in the job array. Moreover, this index is passed to the Python code `demo.py` as an argument.

> Note that these attributes can be accessed either as `Slurm.<name>` or `slurm.<name>`, as the example above shows.
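Returning to the `job_id` from the Job dependencies section above: `sbatch`'s confirmation line carries the id, and extracting it from such a message can be sketched as follows (a hypothetical helper, not the library's actual code):

```python
import re


def parse_job_id(sbatch_output: str) -> int:
    """Extract the job id from sbatch's 'Submitted batch job N' line."""
    match = re.search(r'Submitted batch job (\d+)', sbatch_output)
    if match is None:
        raise ValueError(f'unexpected sbatch output: {sbatch_output!r}')
    return int(match.group(1))


print(parse_job_id('Submitted batch job 34987'))
# prints: 34987
```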