I am new to Slurm and I have already found the related questions about this topic. However, I am still confused about several points of how to use srun. According to the official documentation, srun will typically first allocate resources and then run the parallel job. For example, I want to run 20 tasks, and if I submit my job based on the following script, I am not sure how many tasks are created, because sbatch only takes care of allocating resources instead of executing programs.
#!/bin/sh
#SBATCH -n 20
#SBATCH --mpi=pmi2
#SBATCH -o myoutputfile.txt
module load mpi/mpich-x86_64
mpirun mpiprogram < inputfile.txt
If I am trying to run a sequential program like the following, I am not sure whether there will be a difference or not. For example, I could simply remove the srun command in this script. What would happen?
#!/bin/sh
#SBATCH -n 1
#SBATCH -N 1
srun tar zxf julia-0.3.11.tar.gz
echo "prefix=/software/julia-0.3.11" > julia/Make.user
cd julia
srun make
The first example will spawn 20 tasks: sbatch will request 20 CPUs and also set up the environment so that mpirun knows how many CPUs were requested for the job. mpirun will then spawn as many processes as were allocated (provided that OpenMPI was compiled with Slurm support).
The #SBATCH --mpi=pmi2 part is meant for srun, so it will have no effect if srun is not called in the submission script.
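For instance, a minimal sketch of the first script launched with srun instead of mpirun (assuming MPICH was built with PMI2 support), in which case the --mpi=pmi2 option does take effect:
#!/bin/sh
#SBATCH -n 20
#SBATCH -o myoutputfile.txt
module load mpi/mpich-x86_64
# srun itself starts the 20 tasks, so the --mpi option belongs on its command line
srun --mpi=pmi2 mpiprogram < inputfile.txt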
In the second example, there will be no difference in the number of processes spawned as only one is needed. But, with srun, the output of sstat will be more reliable, the management of signals will be more precise, and the buffering of the output will be more controlled (via the srun command line options).
If you request multiple tasks, srun will instantiate that many processes. It can be an MPI program, or a sequential program that adapts its behaviour based on the SLURM_PROCID environment variable.
You can also run multiple srun commands in the same submission script. Each invocation of srun (called a "step") is then accounted separately in the accounting (sacct).
Finally, srun can use a subset of the allocation and organise the micro-scheduling of many small tasks in a single job (see the example in the srun manpage).
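As a minimal sketch of that micro-scheduling pattern (assuming a hypothetical ./work.sh script that processes one chunk per invocation):
#!/bin/bash
#SBATCH --ntasks=4
# launch 20 small steps; each uses a single task of the allocation,
# and --exclusive keeps at most 4 of them running concurrently
for i in $(seq 1 20); do
srun -n1 -N1 --exclusive ./work.sh "$i" &
done
wait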
Related
I want to run two programs using mpi in parallel in the same job script. In SLURM I would usually just write a script for sbatch (shortened):
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
mpirun program1 &
mpirun program2
This works fine.
The two programs will internally communicate with each other and coordinate execution. So overcommitting is fine. Moreover, they require each other and cannot run as stand-alone in the present configuration.
However, if I want to extend this to several nodes, e.g.
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=2
SLURM does not start the first job in the background. Instead, it starts in the foreground and fails because it does not find the second step, and the second one then also fails because it does not find the first.
I am a bit at a loss here because that is the suggested solution (e.g. Run a "monitor" task alongside mpi task in SLURM) to similar problems and I do not see a reason why this should not work over several nodes. Indeed it does, for instance on PBS.
You can run your Multiple Program Multiple Data (MPMD) application like this:
mpirun -np x program1 : -np y program2
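As a minimal sketch (assuming the two rank counts should add up to the four allocated tasks):
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=2
# MPMD launch: 2 ranks run program1 and 2 ranks run program2,
# all sharing the same MPI_COMM_WORLD so they can communicate
mpirun -np 2 program1 : -np 2 program2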
How do I run setup code in a SLURM sbatch script? Can I just use two srun lines?
Are these two srun lines guaranteed to run on the same node, without cleanup in between?
#!/bin/bash
# Parameters
#SBATCH ...
# setup
srun cp /nfs/data $TMPDIR
# job
srun a.out $TMPDIR
The srun command will start as many instances of the command as specified with the --ntasks parameter. It is typically used with MPI programs and programs that run embarrassingly parallel workloads.
A command like srun cp ... only makes sense when multiple nodes are requested and only one task runs per node, for instance with --nodes=N, or --ntasks=N --ntasks-per-node=1, or a similar combination. It can be used to copy files from a network filesystem to a local filesystem.
If there is only one node and multiple tasks, the srun could cause problems by concurrently trying to write to the same file.
If there is only one task, then the srun commands are not really needed (except if you want to use sstat to monitor them).
In any case, consecutive srun commands run on the same set of nodes, without any cleanup in between.
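As a minimal sketch of that pattern (assuming $TMPDIR points to a node-local directory on every node):
#!/bin/bash
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=1
# one task per node: each node copies the data from the shared filesystem
# to its own local $TMPDIR
srun cp /nfs/data $TMPDIR
# this step runs on the same set of nodes, so the local copies are still in place
srun a.out $TMPDIR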
I have access to a large GPU cluster (20+ nodes, 8 GPUs per node) and I want to launch a task several times on n GPUs (1 per GPU, n > 8) within one single batch without booking full nodes with the --exclusive flag.
I managed to pre-allocate the resources (see below), but I struggle very hard with launching the task several times within the job. Specifically, my log shows no value for the CUDA_VISIBLE_DEVICES variable.
I know how to do this operation on fully booked nodes with the --nodes and --gres flags. In this situation, I use --nodes=1 --gres=gpu:1 for each srun. However, this solution does not work for the present question: the job hangs indefinitely.
In the MWE below, I have a job asking for 16 GPUs (--ntasks and --gpus-per-task). The job is composed of 28 tasks which are launched with the srun command.
#!/usr/bin/env bash
#SBATCH --job-name=somename
#SBATCH --partition=gpu
#SBATCH --nodes=1-10
#SBATCH --ntasks=16
#SBATCH --gpus-per-task=1
for i in {1..28}
do
srun echo $(hostname) $CUDA_VISIBLE_DEVICES &
done
wait
The output of this script should look like this:
nodeA 1
nodeR 2
...
However, this is what I got:
nodeA
nodeR
...
When you write
srun echo $(hostname) $CUDA_VISIBLE_DEVICES &
the expansion of the $(hostname) command substitution and of the $CUDA_VISIBLE_DEVICES variable is performed by the shell on the master node of the allocation (where the batch script runs) rather than on the node targeted by srun. Simply escaping the $ is not enough either, because echo does not expand variables itself; you need a shell on the target node to do the expansion:
srun bash -c 'echo $(hostname) $CUDA_VISIBLE_DEVICES' &
By the way, the --gpus-per-task= option appeared in the sbatch manpage in version 19.05. If you are using an earlier version, I am not sure how it behaves.
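As a minimal sketch, the loop from the question with only that quoting change:
for i in {1..28}
do
# the command line is evaluated by a shell on the node running the step,
# so both the hostname and the GPU list are the remote values
srun bash -c 'echo $(hostname) $CUDA_VISIBLE_DEVICES' &
done
wait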
I want to run a script on a cluster ~200 times using srun commands in one sbatch script. Since executing the script takes some time it would be great to distribute the tasks evenly over the nodes in the cluster. Sadly, I have issues with that.
Now, I created an example script ("hostname.sh") to test different parameters in the sbatch script:
echo `date +%s` `hostname`
sleep 10
This is my sbatch script:
#SBATCH --ntasks=15
#SBATCH --cpus-per-task=16
for i in `seq 200`; do
srun -n1 -N1 bash hostname.sh &
done
wait
I would expect hostname.sh to be executed 200 times (for loop) but with only 15 tasks running at the same time (--ntasks=15). Since my biggest node has 56 cores, only three tasks should be able to run on this node at the same time (--cpus-per-task=16).
From the output of the script I can see that the first nine tasks are distributed over nine nodes of the cluster, but all the other tasks (191!) are executed on one node at the same time. The whole sbatch script execution took only about 15 seconds.
I think I misunderstand some of slurm's parameters but looking at the official documentation did not help me.
You need to use the --exclusive option of srun in that context:
srun -n1 -N1 --exclusive bash hostname.sh &
From the srun manpage:
By default, a job step has access to every CPU allocated to the job.
To ensure that distinct CPUs are allocated to each job step, use the
--exclusive option.
See also the last-but-one example in said documentation.
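A minimal sketch of the submission script with that change applied:
#!/bin/bash
#SBATCH --ntasks=15
#SBATCH --cpus-per-task=16
# 200 steps are submitted, but --exclusive gives each step dedicated CPUs,
# so excess steps wait instead of piling onto the same node
for i in `seq 200`; do
srun -n1 -N1 --exclusive bash hostname.sh &
done
wait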
I've got an mpi job I run in slurm using an sbatch script which looks something like:
# request 384 processors across 16 nodes for exclusive use:
#SBATCH --exclusive
#SBATCH --ntasks-per-node=24
#SBATCH -n 384
#SBATCH -N 16
#SBATCH --time 3-00:00:00
mpirun myprog
I want to monitor the memory/cpu usage and some other behaviour of the "myprog" processes. I've written a simple script (call it "monitor") which can do this, but I'm stumped on how to use sbatch to run ONE copy of it on each allocated node, at the same time as "myprog".
I think I need to modify the above to something like:
...
srun monitor
mpirun myprog
But I'm confused about whether a) that means "monitor" will run in the background and b) how I can control where "monitor" runs.
To have monitor run 'in the background', so that the srun is non-blocking and the subsequent mpirun command can start, you simply need to add an ampersand (&) at the end of the line.
To make sure that program runs on the 'master node' of the allocation, just remove the srun command.
If you need that program to run on a specific node, use the -n1 --nodelist options (you probably need to get the list of all allocated nodes first). You should also consider using the --overcommit option of srun to avoid dedicating a full CPU to your monitoring program, which I assume is not CPU-bound.
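For instance, a minimal sketch that starts one copy of monitor per allocated node in the background before the MPI program (assuming monitor is reachable on every node):
#!/bin/bash
#SBATCH --exclusive
#SBATCH --ntasks-per-node=24
#SBATCH -n 384
#SBATCH -N 16
#SBATCH --time 3-00:00:00
# one monitor process per node; --overcommit avoids reserving a full CPU for it,
# and the ampersand keeps the step from blocking the mpirun below
srun --ntasks-per-node=1 --overcommit monitor &
mpirun myprog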