How to run multiple Python-based Slurm jobs together on an HPC cluster


I need to submit 100 Slurm jobs. They all perform the same computation with one slight change: the only difference is the year (each file processes a different year). Is there a way to submit them all together without writing a Slurm file for each one and running them separately?

For example, I have 100 Python files named process1.py, process2.py, process3.py, and so on. I am looking for a way to assign the HPC resources for all of them together, something like the script below:

#!/bin/bash
#SBATCH -n 2
#SBATCH -p main
#SBATCH --qos main 
#SBATCH -N 1                             
#SBATCH -J name
# ... other SBATCH directives ...
python process1.py
python process2.py
python process3.py
python process4.py....


1 Answer

Annie K. Lamar (best answer)

Are you looking to submit a separate job for each Python file or have all 100 files run concurrently as part of the same job?

You can always create a loop in your batch script and run each Python file inside it:

for ((i = 1; i <= 100; i++)); do
    curr_file="process$i.py"   # no spaces around = in a bash assignment
    python "$curr_file"        # use the variable's value, not the literal name
done
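
If instead you want all of the files to run concurrently inside a single job, a minimal sketch (assuming the node you request has enough cores and memory for that many simultaneous processes) is to launch each file in the background and then wait for all of them to finish:

for ((i = 1; i <= 100; i++)); do
    python "process$i.py" &   # start each file as a background process
done
wait                          # do not let the job exit until every run has finished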

If you want to run the files in parallel and your system supports that, you can also use a Slurm job array. Here's what that directive looks like:

#SBATCH --array=1-2 # where 2 is the highest array task index
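
For the asker's setup, a job-array submission script might look like the sketch below. The partition, QOS, and processN.py file names are taken from the question, the job name is arbitrary, and --array=1-100 creates one array task per file; Slurm exposes each task's index in the SLURM_ARRAY_TASK_ID environment variable. The %10 suffix, which caps how many array tasks run at the same time, is optional.

#!/bin/bash
#SBATCH -J process_years
#SBATCH -p main
#SBATCH --qos main
#SBATCH -N 1
#SBATCH -n 2
#SBATCH --array=1-100%10   # indices 1..100, at most 10 tasks running at once

# Each array task runs the Python file matching its own index.
python "process${SLURM_ARRAY_TASK_ID}.py"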