I'm completely new to Slurm and Debian.
I have a cluster of 27 Debian nodes, each with a 24-core processor. On my main node, which runs slurmctld, I have a script that takes as many resources as it can get. The question is: can I run it with all the CPUs in the cluster, and if so, how? I know how to run a script that uses the stress package and uses all CPUs on 2 nodes:
#!/bin/bash
#
#SBATCH --job-name=test
#
#SBATCH --nodes=2
#SBATCH --ntasks=48
#SBATCH --time=00:00:20
srun sudo stress --cpu 48 --timeout 20
but I have to install the stress package on every node I'm using.
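For reference, here is a minimal sketch of how the same pattern could scale to the whole cluster, assuming 27 nodes with 24 cores each (648 CPUs in total) and assuming the stress package is installed on every compute node (srun runs the command on the compute nodes, so it has to exist there):
#!/bin/bash
#
#SBATCH --job-name=test-all
#
#SBATCH --nodes=27
#SBATCH --ntasks=648
#SBATCH --ntasks-per-node=24
#SBATCH --time=00:00:20
# one task per core; each task runs a single stress worker
srun stress --cpu 1 --timeout 20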
I want to run three instances of GROMACS mdrun on three different nodes.
I have three temperatures, 200, 220 and 240 K, and I want to run the 200 K simulation on node 1, the 220 K simulation on node 2 and the 240 K simulation on node 3. I need to do all this in one script because I have a job-number limit.
How can I do that in Slurm?
Currently I have:
#!/bin/bash
#SBATCH --nodes=3
#SBATCH --ntasks=3
#SBATCH --ntasks-per-node=1
#SBATCH --time=01:00:00
#SBATCH --job-name=1us
#SBATCH --error=h.err
#SBATCH --output=h.out
#SBATCH --partition=standard
as my sbatch parameters and
for i in 1 2 3
do
    T=$(($Ti+($i-1)*20))   # Ti is the starting temperature (200 K)
    cd T_$T/1000
    gmx_mpi grompp -f heating.mdp -c init_conf.gro -p topol.top -o quench.tpr -maxwarn 1
    gmx_mpi mdrun -s quench.tpr -deffnm heatingLDA -v &
    cd ../../
done
wait
This is how I am running mdrun, but it is not running as fast as I want it to. Firstly, the mdrun instances do not start simultaneously: the 200 K run starts first, and only after 2-3 minutes does the 220 K run start. Secondly, the speed is much slower than expected.
Could you tell me how I can achieve that?
Thank you in advance.
Best regards,
Ved
You need to add a line to the Slurm script:
#SBATCH --nodelist=${NODENAME}
where ${NODENAME} is the name of any of nodes 1, 2 or 3.
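For instance, assuming hypothetical node names node1, node2 and node3, the script for the 200 K run would then look something like this sketch (one submission per temperature, reusing your own grompp/mdrun commands):
#!/bin/bash
#SBATCH --job-name=1us_T200
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --time=01:00:00
#SBATCH --partition=standard
#SBATCH --nodelist=node1   # hypothetical node name; pins this job to node 1
cd T_200/1000
gmx_mpi grompp -f heating.mdp -c init_conf.gro -p topol.top -o quench.tpr -maxwarn 1
gmx_mpi mdrun -s quench.tpr -deffnm heatingLDA -v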
I have access to a large GPU cluster (20+ nodes, 8 GPUs per node) and I want to launch a task several times on n GPUs (one task per GPU, n > 8) within a single batch job, without booking full nodes with the --exclusive flag.
I managed to pre-allocate the resources (see below), but I am struggling to launch the task several times within the job. Specifically, my log shows no value for the CUDA_VISIBLE_DEVICES variable.
I know how to do this operation on fully booked nodes with the --nodes and --gres flags; in that situation I use --nodes=1 --gres=gpu:1 for each srun. However, this solution does not work for the present question: the job hangs indefinitely.
In the MWE below, the job asks for 16 GPUs (--ntasks and --gpus-per-task), and 28 tasks are launched with the srun command.
#!/usr/bin/env bash
#SBATCH --job-name=somename
#SBATCH --partition=gpu
#SBATCH --nodes=1-10
#SBATCH --ntasks=16
#SBATCH --gpus-per-task=1
for i in {1..28}
do
srun echo $(hostname) $CUDA_VISIBLE_DEVICES &
done
wait
The output of this script should look like this:
nodeA 1
nodeR 2
...
However, this is what I got:
nodeA
nodeR
...
When you write
srun echo $(hostname) $CUDA_VISIBLE_DEVICES &
the expansion of the $CUDA_VISIBLE_DEVICES variable is performed on the master node of the allocation (where the batch script runs) rather than on the node targeted by srun. You should escape the $:
srun echo $(hostname) \$CUDA_VISIBLE_DEVICES &
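Note that srun launches the command directly rather than through a shell on the compute node, so even with the escaped $ the variable may end up printed literally; a sketch of a workaround, assuming bash is available on the compute nodes, is to run the command through an explicit shell so that both hostname and the variable are evaluated remotely:
srun bash -c 'echo $(hostname) $CUDA_VISIBLE_DEVICES' &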
By the way, the --gpus-per-task option appeared in the sbatch manpage in version 19.05. When you use it with an earlier version, I am not sure how it behaves.
On newly installed and configured compute nodes in our small cluster, I am unable to submit Slurm jobs using a batch script and the 'sbatch' command. After submitting, the requested node changes to the 'drained' state. However, I can run the same command interactively using 'srun'.
Works:
srun -p debug --ntasks=1 --nodes=1 --job-name=test --nodelist=node6 -l echo 'test'
Does not work:
sbatch test.slurm
with test.slurm:
#!/bin/sh
#SBATCH --job-name=test
#SBATCH --ntasks=1
#SBATCH --nodes=1
#SBATCH --nodelist=node6
#SBATCH --partition=debug
echo 'test'
It gives me:
PARTITION AVAIL TIMELIMIT NODES STATE NODELIST
debug up 1:00:00 1 drain node6
and I have to resume the node.
All nodes run Debian 9.8, use Infiniband and NIS.
I have made sure that all nodes have the same config, version of packages and daemons running. So, I don't see what I am missing.
It turned out the issue was connected to NIS. I just needed to add the following line to the end of /etc/passwd:
+::::::
and restart slurmd on the node:
/etc/init.d/slurmd restart
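Once the cause is fixed, a node that is stuck in the drained state can be returned to service with scontrol, for example:
scontrol update NodeName=node6 State=RESUME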
I've installed Slurm on a 2-node cluster. Both nodes are compute nodes, and one is also the controller. I can successfully run srun with multiple jobs at once. I am running GPU jobs and have confirmed that I can get multiple jobs running on multiple GPUs with srun, up to the number of GPUs in the systems.
However, when I try running sbatch with the same test file, it will only run one batch job, and only on the compute node that is also the controller. The others fail, with an ExitCode of 1:0 in the sacct summary. If I try forcing it to run on the compute node that is not the controller, it won't run and shows the 1:0 exit code. However, just using srun will run on any compute node.
I've made sure the /etc/slurm/slurm.conf files are correct with the specs of the machines. Here is the sbatch .job file I am using:
#!/bin/bash
#SBATCH --job-name=tf_test1
#SBATCH --output=/storage/test.out
#SBATCH --error=/storage/test.err
#SBATCH --ntasks=2
#SBATCH --cpus-per-task=1
#SBATCH --mem-per-cpu=2000
##SBATCH --mem=10gb
#SBATCH --gres=gpu:1
~/anaconda3/bin/python /storage/tf_test.py
Maybe there is some limitation with sbatch I don't know about?
sbatch creates a job allocation and launches what is called the 'batch step'.
If you aren't familiar with what a job step is, I recommend this page: https://slurm.schedmd.com/quickstart.html
The batch step runs the script passed to it from sbatch. The only way to launch additional job steps is to invoke srun inside the batch step. In your case, it would be
srun ~/anaconda3/bin/python /storage/tf_test.py
This will create a job step running tf_test.py on each task in the allocation. Note that while the command is the same as when you run srun directly, it detects that it is inside an allocation via the environment variables set by sbatch. You can split the allocation into multiple job steps by running srun with flags like -n [num tasks] instead, e.g.:
#!/bin/bash
#SBATCH --ntasks=2
srun --ntasks=1 something.py
srun --ntasks=1 somethingelse.py
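If the two steps are meant to run at the same time rather than one after the other, they can be backgrounded and waited on; a sketch, with the caveat that depending on the Slurm version the steps may also need --exclusive (or --exact on newer releases) so that they do not each claim the whole allocation:
#!/bin/bash
#SBATCH --ntasks=2
# launch both steps concurrently, one task each
srun --ntasks=1 --exclusive something.py &
srun --ntasks=1 --exclusive somethingelse.py &
wait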
I don't know if you're having any other problems because you didn't post any other error messages or logs.
If srun on the second node works and sbatch with the submission script you mention fails without writing any output, the most probable reason is that /storage does not exist, or is not writable by the user, on the second node.
The slurmd log on the second node should be explicit about this. The default location is /var/log/slurm/slurmd.log, but check the output of scontrol show config | grep Log for definitive information.
Another probable cause that would lead to the same behaviour is that the user is not defined, or has a different UID, on the second node (but then srun would fail too).
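A quick way to check that second hypothesis is to compare the account on both machines, with <username> standing in for the submitting user:
id <username>   # run on the controller and on the second compute node; UID and GID should match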
@damienfrancois' answer was closest and maybe even correct. After making sure the /storage location was available on all nodes, things ran with sbatch. The biggest issue was that the /storage location is shared via NFS but was read-only for the compute nodes. This had to be changed in /etc/exports to look more like:
/storage *(rw,sync,no_root_squash)
Before, it was ro.
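As a side note, after editing /etc/exports the export list typically needs to be re-read on the NFS server (and possibly the share remounted on the clients) before the change takes effect, e.g.:
exportfs -ra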
The job file I have that works is also a bit different. Here is the current .job file:
#!/bin/bash
#SBATCH -N 1 # nodes requested
#SBATCH --job-name=test
#SBATCH --output=/storage/test.out
#SBATCH --error=/storage/test.err
#SBATCH --time=2-00:00
#SBATCH --mem=36000
#SBATCH --qos=normal
#SBATCH --mail-type=ALL
#SBATCH --mail-user=$USER@nothing.com
#SBATCH --gres=gpu
srun ~/anaconda3/bin/python /storage/tf_test.py
I'm using Slurm on a cluster where a single partition can contain dissimilar nodes. Specifically, the nodes have varying numbers of CPUs. My code is a single-core application used for a parameter sweep, so I want to fully use a (e.g.) 32-CPU node by sending it 32 jobs.
How can I select nodes (within a named partition) that have a specified number of CPUs?
I know my Partition configuration via
sinfo -e -p <partition_name> -o "%9P %3c %.5D %6t " -t idle,mix
PARTITION CPU NODES STATE
<partition_name> 16 63 mix
<partition_name> 32 164 mix
But if I use a submission script like
[snip preamble]
#SBATCH --partition <partition_name> # resource to be used
#SBATCH --nodes 1 # Num nodes
#SBATCH -N 1 # Num cores per job
#SBATCH --cores-per-socket=32 # Cores per node
the slurm scheduler says
sbatch: error: Socket, core and/or thread specification can not be satisfied
P.S. A minor correction: my command for getting partition info isn't the best. Just in case anyone looks up this question later, here is a better query (using X and Y for the socket and core counts) that helps identify the problem that damien's excellent answer solved:
sinfo -e -p <partition_name> -o "%9P %3c %.3D %6t %2X %2Y %N" -t idle,mix
To strictly answer your question: with
#SBATCH --cores-per-socket=32
you request 32 cores per socket, that is, per physical CPU. I guess those machines have two CPUs, so you should request something like
#SBATCH --sockets-per-node=2
#SBATCH --cores-per-socket=16
Another way of requesting the same is to ask for
#SBATCH --nodes 1
#SBATCH --tasks-per-node 32
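If the goal is then to fill such a node with 32 copies of the single-core code inside one job, a hedged sketch could look like the following (the program name ./sweep_point is a placeholder, and depending on the Slurm version the steps may need --exclusive or --exact to get one CPU each):
#!/bin/bash
#SBATCH --partition=<partition_name>
#SBATCH --nodes=1
#SBATCH --tasks-per-node=32
# 32 single-core job steps, one per allocated task
for i in $(seq 1 32); do
    srun --ntasks=1 --exclusive ./sweep_point $i &
done
wait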
But please note that, if your cluster allows node sharing, what you are doing seems better suited to a job array:
#SBATCH --ntasks 1
#SBATCH --array=1-32
IDS=($(seq RUN_ID_FIRST RUN_ID_LAST))   # RUN_ID_FIRST and RUN_ID_LAST are placeholders for the sweep bounds
RUN_ID=${IDS[$SLURM_ARRAY_TASK_ID-1]}   # array task IDs start at 1, bash array indices at 0
matlab -nojvm -singleCompThread -r "try myscript(${RUN_ID}); catch me; disp(' *** error'); end; exit" > ./result_${RUN_ID}
This will launch 32 independent jobs, each taking care of running the Matlab script for one value of the parameter sweep.
To answer your additional question: if a 32-process job is scheduled on a 16-CPU node, the node will be overloaded, and depending on the containment solution set up by the administrators, your processes might impact other users' jobs and slow them down.