Setting sbatch environment variables with srun - linux

In a blog post by Pierre Lindenbaum, srun is called within a Makefile to run jobs. I rely on this technique, but it makes no use of sbatch at all, so I am missing the chance to set the sbatch-style options below. Where can I put the following so Slurm knows what to do?
#SBATCH -J testing
#SBATCH -A account
#SBATCH --time=1:00:00
#SBATCH --cpus-per-task=1
#SBATCH --begin=now
#SBATCH --mem=1G
#SBATCH -C sb

The srun command accepts nearly all of the sbatch parameters (with the notable exception of --array). In the blog post you refer to, these arguments are set on the line:
.SHELLFLAGS= -N1 -n1 bash -c
so you would write
.SHELLFLAGS= -J testing -A account --time=1:00:00 --cpus-per-task=1 --begin=now --mem=1G -C sb bash -c
Note that if you specify --cpus-per-task=1 and keep the default of one task, it probably means that nodes are shared in your setup; in that case, --mem-per-cpu=1G makes more sense than --mem=1G.
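For context, here is a minimal sketch of the Makefile pattern from the blog post with the options folded into .SHELLFLAGS; the target and recipe below are placeholders, but every recipe line would then run through srun with those options:
SHELL=srun
.SHELLFLAGS= -J testing -A account --time=1:00:00 --cpus-per-task=1 --begin=now --mem=1G -C sb bash -c
# placeholder target; the recipe line must start with a tab
result.txt:
	hostname > $@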

Related

linux slurm - separate .out files for tasks run in parallel on 1 node

I am running jobs in parallel on Linux using Slurm by requesting a node and running one task per CPU.
However, the output as specified joins both streams into a single .out file. I tried the %t flag on the expectation that it would separate the tasks, but it just logs everything in the output file with _0 appended (e.g. sample_output__XXX_XX_0.out).
Any advice on how to best generate a separate .out log per task would be much appreciated.
#!/bin/bash
#SBATCH --job-name=recon_all_06172021_1829
#SBATCH --output=/path/recon_all_06172021_1829_%A_%a_%t.out
#SBATCH --error=/path/recon_all_06172021_1829_%A_%a.err
#SBATCH --ntasks=2
#SBATCH --ntasks-per-node=2
#SBATCH --nodes=1
#SBATCH --cpus-per-task=1
#SBATCH --time=23:59:00
#! Always keep the following echo commands to monitor CPU, memory usage
echo "SLURM_MEM_PER_CPU: $SLURM_MEM_PER_CPU"
echo "SLURM_MEM_PER_NODE: $SLURM_MEM_PER_NODE"
echo "SLURM_JOB_NUM_NODES: $SLURM_JOB_NUM_NODES"
echo "SLURM_NNODES: $SLURM_NNODES"
echo "SLURM_NTASKS: $SLURM_NTASKS"
echo "SLURM_CPUS_PER_TASK: $SLURM_CPUS_PER_TASK"
echo "SLURM_JOB_CPUS_PER_NODE: $SLURM_JOB_CPUS_PER_NODE"
command 1 &
command 2
wait
You can redirect the standard output from the command itself, for example:
command 1 > file1 2>&1
command 2 > file2 2>&1
Not as neat as using the sbatch filename patterns, but it will separate the output from each command.
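If you prefer to stay with Slurm's filename patterns, a hedged alternative (not part of the answer above) is to launch each command as its own job step and give each step its own --output/--error; the file names below are placeholders, and depending on your Slurm version you may need --exclusive or --exact on each srun for the steps to run concurrently:
# one step per command, each with its own output/error files; %j is the job ID
srun --ntasks=1 --output=recon_%j_task0.out --error=recon_%j_task0.err command 1 &
srun --ntasks=1 --output=recon_%j_task1.out --error=recon_%j_task1.err command 2 &
wait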

passing a string argument to name a job in SLURM

I want to pass a parameter to a bash script in a cluster in order to name the job. I tried this:
#!/bin/bash
#SBATCH -J "$1" #<--- to name the job with the first parameter
#SBATCH --partition=shortq
#SBATCH -o %x-%j.out
#SBATCH -e %x-%j.err
echo "this is a test job named" $1
Gate main.mac
When I launch the job with
sbatch my_script.sh test_sript
I'm getting a file named $1-23472.out. It appears that "$1" was not interpreted. How can I get a file named "test_script-23472.out"?
Also, is the line Gate main.mac mandatory? Can anyone explain why we should put it?
Many thanks
You probably can't do it exactly as you want to, but here's a solution that comes pretty close:
Batch script:
#!/bin/bash
#SBATCH --partition=shortq
#SBATCH -o %x-%j.out
#SBATCH -e %x-%j.err
echo "this is a test job named" $SLURM_JOB_NAME
(rest of your script here)
Submit with:
$ sbatch -J jobname my_script.sh
Slurm does not interpret Bash variables in the #SBATCH lines, and neither does Bash, since to Bash they are just comments.
One solution is a construct like this for submission:
ARG="<something>"
sbatch -J "$ARG" my_script.sh "$ARG"
As for the Gate main.mac line, it is used to start the Gate program with main.mac as argument.
This is how I've been formatting Slurm scripts to pass Bash variables as job names.
#!/bin/bash
MYVAR=$1
sbatch --export=ALL -J "${MYVAR}" --wrap="run something"
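A hedged usage sketch, assuming the wrapper above is saved as name_job.sh (a hypothetical file name): the first argument becomes the Slurm job name, and --wrap turns the quoted command into a minimal batch script so no separate job file is needed.
$ bash name_job.sh test_script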

set values for job variables from another file

I wanted to set the name and the values of some other variables of a job from another file, but I get an error.
sbatch: error: Unable to open file 10:12:35
file.sh
#!/bin/bash
DATE=`date '+%Y-%m-%d %H:%M:%S'`
name='test__'$DATE
sbatch -J $name -o $name'.out' -e $name'.err' job.sh
job.sh
#!/bin/bash
#SBATCH --job-name=test
#SBATCH --nodes=1 # number of nodes
#SBATCH --ntasks-per-node=2 # number of cores
#SBATCH --output=.out
#SBATCH --error=.err
#module load R
Rscript script.R
script.R
for(i in 1:1e6){print(i)}
You are quoting the variables incorrectly, and the space in the requested date format creates two arguments to sbatch, hence the complaint about the wrong parameter.
If I were you, I would avoid the space (as a general rule, because it is more error-prone and always requires quoting):
file.sh:
#!/bin/bash
DATE=$(date '+%Y-%m-%dT%H:%M:%S')
name="test__$DATE"
sbatch -J "$name" -o "${name}.out" -e "${name}.err" job.sh
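Note also that options given on the sbatch command line take precedence over #SBATCH directives inside the script, so the --output=.out and --error=.err placeholders in job.sh can simply be dropped; a hedged sketch of the corresponding job.sh:
#!/bin/bash
#SBATCH --nodes=1 # number of nodes
#SBATCH --ntasks-per-node=2 # number of cores
# job name and output/error files are supplied by file.sh on the command line
#module load R
Rscript script.R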

SLURM job script for multiple nodes

I would like to request two nodes in the same cluster, and it is necessary that both nodes be allocated before the script begins.
In the slurm script, I was wondering if there is a way to launch job-A on a given node and the job-B on the second node with a small delay or simultaneously.
Do you have suggestions on how this could be possible? This is how my script is right now.
#!/bin/bash
#SBATCH --job-name="test"
#SBATCH -D .
#SBATCH --output=./logs_%j.out
#SBATCH --error=./logs_%j.err
#SBATCH --nodelist=nodes[19,23]
#SBATCH --time=120:30:00
#SBATCH --partition=AWESOME
#SBATCH --wait-all-nodes=1
#launched on Node 1
ifconfig > node19.txt
#Launched on Node2
ifconfig >> node23.txt
In other words, if I request two nodes, how do I run two different jobs on the two nodes simultaneously? Could we deploy them as job steps, as described in the last part of the srun manual (MULTIPLE PROGRAM CONFIGURATION)? In that context, "-l" isn't defined.
I'm assuming that when you say job-A and job-B you are referring to the two commands in the script. I'm also assuming that the setup you show us works, but without starting the jobs on the proper nodes and with the execution serialized (the requested resources are not entirely clear to me, but if Slurm does not complain, then everything is OK). You should also be careful with the redirected output: if the first job opens the redirection after the second one, it will truncate the file and you will lose the second job's output.
For them to be started in the appropriate nodes, run the commands through srun:
#!/bin/bash
#SBATCH --job-name="test"
#SBATCH -D .
#SBATCH --output=./logs_%j.out
#SBATCH --error=./logs_%j.err
#SBATCH --nodelist=nodes[19,23]
#SBATCH --time=120:30:00
#SBATCH --partition=AWESOME
#SBATCH --wait-all-nodes=1
# Launched on node 1
srun --nodes=1 echo 'hello from node 1' > test.txt &
# Launched on node 2
srun --nodes=1 echo 'hello from node 2' >> test.txt &
# wait for both background steps, otherwise the job ends when the script does
wait
That did the job! The files ./com_19.bash and ./com_23.bash stand in for the actual binaries.
#!/bin/bash
#SBATCH --job-name="test"
#SBATCH -D .
#SBATCH --output=./logs_%j.out
#SBATCH --error=./logs_%j.err
#SBATCH --nodelist=nodes[19,23]
#SBATCH --time=120:30:00
#SBATCH --partition=AWESOME
#SBATCH --wait-all-nodes=1
# -l prefixes each output line with its task number; -N1 -n1 runs one task
# on one node; -r selects the node within the allocation (zero-based)
srun -lN1 -n1 -r 1 ./com_19.bash &
srun -lN1 -n1 -r 0 ./com_23.bash &
sleep 1
squeue
squeue -s
wait
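For reference, a hedged sketch of what a placeholder like ./com_19.bash (or ./com_23.bash) might contain; any executable script or binary works here, this one just records the node the step landed on, mirroring the ifconfig redirection from the original script:
#!/bin/bash
# hypothetical stand-in for ./com_19.bash
hostname
ifconfig > "node_$(hostname -s).txt"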

Insert system variables to SBATCH

I would like to ask you if it is possible to pass global system variables to the #SBATCH tags.
I would like to do something like this:
SBATCH FILE
#!/bin/bash -l
ARG=64.dat
NODES=4
TASK_PER_NODE=8
NP=$((NODES*TASK_PER_NODE))
#SBATCH -J 'MPI'+'_'+$NODES+'_'+$TASK_PER_NODE
#SBATCH -N $NODES
#SBATCH --ntasks-per-node=$TASK_PER_NODE
It isn't working, so that is why I ask.
Remember that #SBATCH parameter lines are viewed by Bash as comments, so Bash will not try to interpret them at all.
Furthermore, the #SBATCH directives must be before any other Bash command for Slurm to handle them.
Alternatives include setting the parameters in the command line:
NODES=4
sbatch --nodes=$NODES ... submitscript.sh
or passing the submission script through stdin:
#!/bin/bash -l
ARG=64.dat
NODES=4
TASK_PER_NODE=8
NP=$((NODES*TASK_PER_NODE))
sbatch <<EOT
#!/bin/bash
#SBATCH -J "MPI_${NODES}_${TASK_PER_NODE}"
#SBATCH -N $NODES
#SBATCH --ntasks-per-node=$TASK_PER_NODE
srun ...
EOT
In this latter case, you will need to run the submission script directly rather than handing it to sbatch, since it runs sbatch itself. Note also that string concatenation in Bash is not achieved with the + sign.
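A hedged usage sketch, assuming the stdin-based script above is saved as submit.sh (a hypothetical name): you run it directly and it calls sbatch itself; with NODES=4 and TASK_PER_NODE=8 as above, it submits a job named MPI_4_8 on 4 nodes with 8 tasks per node.
$ bash submit.sh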
