How to see the command file in Slurm

If I submitted a job using Slurm, how can I see the full command file that was submitted, usually a bash script that starts with:
#SBATCH --job-name=
How can I use a Slurm command to see the content of this script file? (I know I can do scontrol show job <jobid> and see where the script is, but assuming it was deleted or changed, I still want to see the original script that was submitted to Slurm.)
Thanks.

On older versions of Slurm (17.02 and before), you can do scontrol show -dd job <jobid> to have Slurm display the content of the submission script it stored, and in newer versions of Slurm (17.11 and after), you would use scontrol write batch_script <jobid> -.
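For example, to dump the stored script with the newer command (the job ID and output file name here are only illustrative):
scontrol write batch_script 12345 -                        # "-" writes the stored script to stdout
scontrol write batch_script 12345 original_submission.sh   # or save it to a file of your choice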

Related

How would you check if SLURM or MOAB/Torque is available on an environment?

The title kind of says it all. I'm looking for a command line test to check if either SLURM or MOAB/Torque is available for submitting jobs to.
My thought is to check whether the command qstat finishes with exit code zero, or whether squeue finishes with exit code zero. Would this be the best way of doing this?
One of the most lightweight ways to do that is to test for the presence of sbatch, for instance with
which sbatch
The which command exits with code 0 if the command is found in the PATH.
Make sure to test in order, as, for instance, a Slurm cluster could have a qsub command available to emulate PBS or Torque.
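A minimal sketch of that ordering (the echoed labels are just illustrative):
if which sbatch > /dev/null 2>&1; then
    echo "Slurm detected"                  # check Slurm first: it may also provide a qsub wrapper
elif which qsub > /dev/null 2>&1; then
    echo "PBS/Torque (or MOAB) detected"
else
    echo "no supported scheduler found"
fi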

Snakemake and sbatch

I have a Snakefile that has a rule which sends 7 different shell commands.
I want to run each of these shell commands in sbatch and I want them to run in different slurm nodes.
Right now, when I include sbatch inside the shell command in the Snakemake rule, I do not get the desired output file: the job takes a while to run, and sbatch returns while the job is still running. I think Snakemake concludes that the required output file is missing because it thinks the command "finished executing" before the submitted job completed.
What can I do to submit each rule to one Slurm node using the sbatch command in the Snakemake file?
I suspect that what you are doing is:
rule one:
    input:
        ...
    output:
        ...
    shell:
        """
        sbatch [sbatch-options] "some-command-or-script"
        """
What you probably want is:
rule one:
    input:
        ...
    output:
        ...
    shell:
        """
        some-command-or-script
        """
To be executed as
snakemake --cluster "sbatch [sbatch-options]"
In this way every rule will send its jobs to the cluster and Snakemake will handle them. If you want a rule to execute its jobs locally (not via sbatch), mark that rule with the localrules directive (check the documentation for more detail).
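A concrete invocation might look like this (the job count and resource values are only illustrative; adjust them to your cluster):
snakemake --jobs 7 --cluster "sbatch --mem=4G --cpus-per-task=1 --time=01:00:00"
Here --jobs caps how many cluster jobs Snakemake keeps in flight at once, while --cluster tells it how to wrap each rule's command in a submission.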

Import bash variables into slurm script

I have seen similar questions, such as Use Bash variable within SLURM sbatch script, but they are not exactly the same as mine, because I am not talking about Slurm parameters.
I want to launch a Slurm job for each of my sample files; imagine I have 3 VCFs and I want to run a job for each of them.
I created a script that loops through a file in which I wrote the sample IDs and runs another script for each sample. It would work perfectly if I wanted to run it directly with bash:
while read line
do
    sampleID="${line[0]}"
    myscript.sh $sampleID
done < samples.txt   # samples.txt (name is illustrative) lists one sample ID per line
The problem is that I need to run the script with Slurm, so is there any way to tell Slurm which bash variable it should include?
I was trying this, but it is not working:
sbatch myscript.sh --export=$sampleID
Okay, I've solved it:
sbatch --export=sampleID=$sampleID myscript.sh
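Putting it together, a sketch of the full loop (samples.txt is an assumed file name with one sample ID per line):
while read sampleID
do
    sbatch --export=sampleID=$sampleID myscript.sh   # submit one job per sample
done < samples.txt
Inside myscript.sh, $sampleID is then available as an ordinary environment variable. Note that on many Slurm versions, naming specific variables in --export means only those (plus SLURM_* variables) are propagated to the job; --export=ALL,sampleID=$sampleID also keeps the rest of the submission environment.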

Bash Script for Submitting Job on Cluster

I am trying to write a script so I can use the 'qsub' command to submit a job to the cluster.
Pretty much, once I get into the cluster, I go to the directory with my files and I do these steps:
export PATH=$PATH:$HOME/program/bin
Then,
program > run.log&
Is there any way to make this into a script so I am able to submit the job to the queue?
Thanks!
Putting the lines into a bash script and then running qsub myscript.sh should do it.
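A minimal sketch of such a script, assuming a PBS/Torque-style cluster (the job name is arbitrary, and the trailing & is dropped because the batch system already runs the script non-interactively):
#!/bin/bash
#PBS -N myrun                         # job name (illustrative)
export PATH=$PATH:$HOME/program/bin
cd $PBS_O_WORKDIR                     # start in the directory qsub was called from
program > run.log
Save it as myscript.sh in the directory with your files and submit it from there with qsub myscript.sh.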

Stop slurm sbatch from copying script to compute node

Is there a way to stop sbatch from copying the script to the compute node? For example, when I run:
sbatch --mem=300 /shared_between_all_nodes/test.sh
test.sh is copied to /var/lib/slurm-llnl/slurmd/etc/ on the executing compute node. The trouble with this is that there are other scripts in /shared_between_all_nodes/ that test.sh needs to use, and I would like to avoid hard-coding the path.
In SGE I could use qsub -b y to stop it from copying the script to the compute node. Is there a similar option or config in Slurm?
Using sbatch --wrap is a nice solution for this:
sbatch --wrap /shared_between_all_nodes/test.sh
Quotes are required if the script has parameters:
sbatch --wrap "/shared_between_all_nodes/test.sh param1 param2"
From the sbatch docs (http://slurm.schedmd.com/sbatch.html):
--wrap=
Sbatch will wrap the specified command string in a simple "sh" shell script, and submit that script to the slurm controller. When --wrap is used, a script name and arguments may not be specified on the command line; instead the sbatch-generated wrapper script is used.
The script might be copied there, but the working directory will be the directory in which the sbatch command is launched. So if the command is launched from /shared_between_all_nodes/ it should work.
To be able to launch sbatch from anywhere, use this option:
-D, --workdir=<directory>
    Set the working directory of the batch script to directory before it is executed.
like this:
sbatch --mem=300 -D /shared_between_all_nodes /shared_between_all_nodes/test.sh