How to submit parallel job steps with SLURM?

I have the following SLURM job script named gzip2zipslurm.sh:
#!/bin/bash
#SBATCH --mem 70G
#SBATCH --ntasks 4
echo "Task 1"
srun -n1 java -Xmx10g -jar tar2zip-1.0.0-jar-with-dependencies.jar articles.A-B.xml.tar.gz &
echo "Task 2"
srun -n1 java -Xmx10g -jar tar2zip-1.0.0-jar-with-dependencies.jar articles.C-H.xml.tar.gz &
echo "Task 3"
srun -n1 java -Xmx10g -jar tar2zip-1.0.0-jar-with-dependencies.jar articles.I-N.xml.tar.gz &
echo "Task 4"
srun -n1 java -Xmx10g -jar tar2zip-1.0.0-jar-with-dependencies.jar articles.O-Z.xml.tar.gz &
echo "Waiting for job steps to end"
wait
echo "Script complete"
I submit it to SLURM by sbatch gzip2zipslurm.sh.
When I do, the output of the SLURM log file is
Task 1
Task 2
Task 3
Task 4
Waiting for job steps to end
The tar2zip program reads the given tar.gz file and re-packages it as a ZIP file.
The problem: only one CPU (out of 16 available on an idle node) is doing any work. With top I can see that five srun commands have been started in total (four for my tasks and, I guess, one implicit one for the sbatch job), but there is only one Java process. The output files confirm this: only one of them is being written.
How do I make all 4 tasks actually execute in parallel?
Thanks for any hints!

The issue might be with the memory reservation. In the submission script you set --mem 70G, which is the memory request for the whole job.
When srun is used within a submission script, it inherits the parameters given to sbatch, including --mem 70G. So each job step implicitly runs the following:
srun --mem 70G -n1 java -Xmx10g -jar ...
Try explicitly setting the memory of each step to roughly a quarter of the job allocation (70G/4, i.e. about 17G):
srun --mem 17G -n1 java -Xmx10g -jar ...
Also, as per the documentation, you should use --exclusive with srun in such a context.
srun --exclusive --mem 17G -n1 java -Xmx10g -jar ...
This option can also be used when initiating more than one job step
within an existing resource allocation, where you want separate
processors to be dedicated to each job step. If sufficient processors
are not available to initiate the job step, it will be deferred. This
can be thought of as providing a mechanism for resource management to
the job within its allocation.
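Putting the two changes together, the submission script from the question could look like the sketch below (the 17G figure is just the 70G job allocation split over four steps; adjust it to the real needs of the JVM):
#!/bin/bash
#SBATCH --mem 70G
#SBATCH --ntasks 4
# Each step claims its own CPU (--exclusive) and a quarter of the job's memory,
# so all four steps can start at once instead of queuing behind the first one.
srun --exclusive --mem 17G -n1 java -Xmx10g -jar tar2zip-1.0.0-jar-with-dependencies.jar articles.A-B.xml.tar.gz &
srun --exclusive --mem 17G -n1 java -Xmx10g -jar tar2zip-1.0.0-jar-with-dependencies.jar articles.C-H.xml.tar.gz &
srun --exclusive --mem 17G -n1 java -Xmx10g -jar tar2zip-1.0.0-jar-with-dependencies.jar articles.I-N.xml.tar.gz &
srun --exclusive --mem 17G -n1 java -Xmx10g -jar tar2zip-1.0.0-jar-with-dependencies.jar articles.O-Z.xml.tar.gz &
wait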

Related

Slurm: srun inside sbatch is ignored / skipped. Can anyone explain why?

I'm still exploring how to work with the Slurm scheduler and this time I really got stuck. The following batch script somehow doesn't work:
#!/usr/bin/env bash
#SBATCH --job-name=parallel-plink
#SBATCH --mem=400GB
#SBATCH --ntasks=4
cd ~/RS1
for n in {1..4};
do
echo "Starting ${n}"
srun --input none --exclusive --ntasks=1 -c 1 --mem-per-cpu=100G plink --memory 100000 --bfile RS1 --distance triangle bin --parallel ${n} 4 --out dt-output &
done
Since most of the SBATCH options are inside the batch script, the invocation is simply: sbatch script.sh
The slurm-20466.out file only contains the four echoed lines: cat slurm-20466.out
Starting 1
Starting 2
Starting 3
Starting 4
I double checked the command without srun and that works without errors.
I must confess I am also responsible for the Slurm scheduler configuration itself. Let me know if I could try to change anything or when more information is needed.
You start your srun commands in the background so that they run in parallel, but you never wait for them to finish.
So the loop runs through very quickly, echoes the "Starting ..." lines, starts the srun commands in the background and then finishes. At that point your sbatch script is done and exits successfully, meaning that your job is done. With that, your allocation is revoked and your srun commands are terminated as well. You might still be able to see with sacct that they were started.
You need to instruct the batch script to wait for the background processes to finish before it terminates. To do that, simply add a wait command at the end of the script:
#!/usr/bin/env bash
#SBATCH --job-name=parallel-plink
#SBATCH --mem=400GB
#SBATCH --ntasks=4
cd ~/RS1
for n in {1..4};
do
echo "Starting ${n}"
srun --input none --exclusive --ntasks=1 -c 1 --mem-per-cpu=100G plink --memory 100000 --bfile RS1 --distance triangle bin --parallel ${n} 4 --out dt-output &
done
wait
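As a side note, sacct can confirm that the job steps were at least created before the allocation was revoked, for example (using the job ID 20466 from the question):
sacct -j 20466 --format=JobID,JobName,State,Elapsed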

Slurm: Why do we need Srun in Sbatch script file?

I am new to Slurm and I have read the related questions on this topic, but I am still confused about several points of how to use srun. According to the official documentation, srun will typically first allocate resources and then run the parallel job. For example, if I want to run 20 tasks and submit my job with the following script, I am not sure how many tasks are created, because sbatch only takes care of allocating resources rather than executing the program.
#!/bin/sh
#SBATCH -n 20
#SBATCH --mpi=pmi2
#SBATCH -o myoutputfile.txt
module load mpi/mpich-x86_64
mpirun mpiprogram < inputfile.txt
If I am trying to run a sequential program like the following, I am not sure whether there will be a difference or not. For example, I could simply remove the srun commands from this script. What would happen?
#!/bin/sh
#SBATCH -n 1
#SBATCH -N 1
srun tar zxf julia-0.3.11.tar.gz
echo "prefix=/software/julia-0.3.11" > julia/Make.user
cd julia
srun make
The first example will spawn 20 tasks; sbatch will request 20 CPUs and also set up the environment so that mpirun knows how many CPUs were requested for the job. mpirun will then spawn as many processes as were allocated (provided that the MPI library was compiled with Slurm support).
The #SBATCH --mpi=pmi2 part is meant for srun so it will have no effect if srun is not called in the submission script.
In the second example, there will be no difference in the number of processes spawned as only one is needed. But, with srun, the output of sstat will be more reliable, the management of signals will be more precise, and the buffering of the output will be more controlled (via the srun command line options).
If you request multiple tasks, srun will instantiate that many processes. It can be an MPI program, or a sequential program that adapts its behaviour based on the SLURM_PROCID environment variable.
Also, you can run multiple srun commands in the same submission script. Each invocation of srun (called a "job step") is then accounted for separately in the accounting (sacct).
Finally, srun can use a subset of the allocation and organise the micro-scheduling of many small tasks in a single job (see the example in the srun manpage).
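As a small illustration of the SLURM_PROCID point, here is a sketch of a submission script in which each of four copies of an otherwise sequential program works on its own input (process_chunk.sh and the input.* files are hypothetical):
#!/bin/sh
#SBATCH -n 4
# srun launches 4 copies of the command below; each copy sees its own
# SLURM_PROCID (0..3) and picks a different input file accordingly.
srun bash -c 'echo "task $SLURM_PROCID on $(hostname)"; ./process_chunk.sh input.$SLURM_PROCID'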

GPU allocation within a SBATCH

I have access to a large GPU cluster (20+ nodes, 8 GPUs per node) and I want to launch a task several times on n GPUs (1 per GPU, n > 8) within one single batch without booking full nodes with the --exclusive flag.
I managed to pre-allocate the resources (see below), but I struggle very hard with launching the task several times within the job. Specifically, my log shows no value for the CUDA_VISIBLE_DEVICES variable.
I know how to do this operation on fully booked nodes with the --nodes and --gres flags. In this situation, I use --nodes=1 --gres=gpu:1 for each srun. However, this solution does not work for the present question; the job hangs indefinitely.
In the MWE below, I have a job asking for 16 GPUs (--ntasks and --gpus-per-task). The job is composed of 28 tasks which are launched with the srun command.
#!/usr/bin/env bash
#SBATCH --job-name=somename
#SBATCH --partition=gpu
#SBATCH --nodes=1-10
#SBATCH --ntasks=16
#SBATCH --gpus-per-task=1
for i in {1..28}
do
srun echo $(hostname) $CUDA_VISIBLE_DEVICES &
done
wait
The output of this script should look like this:
nodeA 1
nodeR 2
...
However, this is what I got:
nodeA
nodeR
...
When you write
srun echo $(hostname) $CUDA_VISIBLE_DEVICES &
both the $(hostname) command substitution and the $CUDA_VISIBLE_DEVICES expansion are performed by the shell running the batch script (the first node of the allocation) rather than on the node targeted by srun, so the variable is empty at that point. Since srun does not start a shell on the compute node, simply escaping the $ would only print the literal string; instead, defer the expansion by wrapping the command in a shell:
srun bash -c 'echo $(hostname) $CUDA_VISIBLE_DEVICES' &
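A minimal way to see the difference (a sketch to try inside the same allocation):
# Expanded by the shell running the batch script: prints the batch host and
# the batch step's value of the variable (empty in the question's setup).
srun -n1 echo "$(hostname) $CUDA_VISIBLE_DEVICES"
# Expanded on the compute node: prints the allocated host and the GPU index
# bound to that job step.
srun -n1 bash -c 'echo "$(hostname) $CUDA_VISIBLE_DEVICES"'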
By the way, the --gpus-per-task option appeared in the sbatch manpage in version 19.05. With an earlier version, I am not sure how it behaves.

How to distribute slurm tasks evenly over the nodes?

I want to run a script on a cluster ~200 times using srun commands in one sbatch script. Since executing the script takes some time it would be great to distribute the tasks evenly over the nodes in the cluster. Sadly, I have issues with that.
Now, I created an example script ("hostname.sh") to test different parameters in the sbatch script:
echo `date +%s` `hostname`
sleep 10
This is my sbatch script:
#!/bin/bash
#SBATCH --ntasks=15
#SBATCH --cpus-per-task=16
for i in `seq 200`; do
srun -n1 -N1 bash hostname.sh &
done
wait
I would expect hostname.sh to be executed 200 times (the for loop), but with only 15 tasks running at the same time (--ntasks=15). Since my biggest node has 56 cores, only three of them should be able to run on that node at the same time (--cpus-per-task=16).
From the output of the script I can see that the first nine tasks are distributed over nine nodes of the cluster, but all the other tasks (191!) are executed on one node at the same time. The whole sbatch script execution only took about 15 seconds.
I think I misunderstand some of slurm's parameters but looking at the official documentation did not help me.
You need to use the --exclusive option of srun in that context:
srun -n1 -N1 --exclusive bash hostname.sh &
From the srun manpage:
By default, a job step has access to every CPU allocated to the job.
To ensure that distinct CPUs are allocated to each job step, use the
--exclusive option.
See also the last-but-one example in said documentation.
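With that change, the sbatch script from the question becomes the following sketch (same resource request as before):
#!/bin/bash
#SBATCH --ntasks=15
#SBATCH --cpus-per-task=16
for i in `seq 200`; do
    # --exclusive dedicates CPUs to each step, so at most 15 steps run at a
    # time and the remaining invocations are deferred until a slot frees up.
    srun -n1 -N1 --exclusive bash hostname.sh &
done
wait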

parallel but different Slurm srun job step invocations not working

I'd like to run the same program on a large number of different input files. I could just submit each as a separate Slurm submission, but I don't want to swamp the queue by dumping 1000s of jobs on it at once. I've been trying to figure out how to process the same number of files by instead creating an allocation first, then within that allocation looping over all the files with srun, giving each invocation a single core from the allocation. The problem is that no matter what I do, only one job step runs at a time. The simplest test case I could come up with is:
#!/usr/bin/env bash
srun --exclusive --ntasks 1 -c 1 sleep 1 &
srun --exclusive --ntasks 1 -c 1 sleep 1 &
srun --exclusive --ntasks 1 -c 1 sleep 1 &
srun --exclusive --ntasks 1 -c 1 sleep 1 &
wait
It doesn't matter how many cores I assign to the allocation:
time salloc -n 1 test
time salloc -n 2 test
time salloc -n 4 test
it always takes 4 seconds. Is it not possible to have multiple job steps execute in parallel?
It turned out that the default memory per CPU was not defined, so even single-core job steps were reserving all of the node's RAM.
Setting DefMemPerCPU, or specifying an explicit RAM reservation for each step, did the trick.
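For reference, the cluster-side fix is a single line in slurm.conf (the value below is only an example default, in megabytes per allocated CPU):
# slurm.conf
DefMemPerCPU=2048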
Beware that in that scenario, you measure both the running time and the waiting time. Your submission script should look like this:
#!/usr/bin/env bash
time {
srun --exclusive --ntasks 1 -c 1 sleep 1 &
srun --exclusive --ntasks 1 -c 1 sleep 1 &
srun --exclusive --ntasks 1 -c 1 sleep 1 &
srun --exclusive --ntasks 1 -c 1 sleep 1 &
wait
}
and simply submit with
salloc -n 1 test
salloc -n 2 test
salloc -n 4 test
You should then observe the difference, along with messages such as srun: Job step creation temporarily disabled, retrying when using n<4.
Since the OP solved his issue but didn't provide the code, I'll share my take on this problem below.
In my case, I encountered the error/warning step creation temporarily disabled, retrying (Requested nodes are busy). This happened because the srun command that executed first allocated all the memory, the same cause the OP ran into. To solve this, first (optionally) specify the total memory allocation for sbatch (if you are using an sbatch script):
#SBATCH --ntasks=4
#SBATCH --mem=[XXXX]MB
And then specify the memory use per srun task:
srun --exclusive --ntasks=1 --mem-per-cpu [XXXX/4]MB sleep 1 &
srun --exclusive --ntasks=1 --mem-per-cpu [XXXX/4]MB sleep 1 &
srun --exclusive --ntasks=1 --mem-per-cpu [XXXX/4]MB sleep 1 &
srun --exclusive --ntasks=1 --mem-per-cpu [XXXX/4]MB sleep 1 &
wait
I didn't specify a CPU count for srun because my sbatch script includes #SBATCH --cpus-per-task=1. For the same reason, I suspect you could use --mem instead of --mem-per-cpu in the srun command, but I haven't tested that configuration.
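Put together, a sketch of such a submission script might look like this (the 4000 MB total is a placeholder; replace it and the per-step quarter with your real requirements):
#!/usr/bin/env bash
#SBATCH --ntasks=4
#SBATCH --cpus-per-task=1
#SBATCH --mem=4000MB
# Each step takes one task and a quarter of the job's memory, so all four
# steps can start immediately instead of the first one grabbing everything.
srun --exclusive --ntasks=1 --mem-per-cpu=1000MB sleep 1 &
srun --exclusive --ntasks=1 --mem-per-cpu=1000MB sleep 1 &
srun --exclusive --ntasks=1 --mem-per-cpu=1000MB sleep 1 &
srun --exclusive --ntasks=1 --mem-per-cpu=1000MB sleep 1 &
wait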
