What should I write in qsub for checkpointing automatically? - linux

checkpoint.mat is my checkpoint file and job.m is my MATLAB script.
When my job exceeds 24 hours, it is terminated by the server. I have implemented checkpointing inside my MATLAB code itself, but what should I write in my qsub submission script?
Here is what this page, https://wikis.nyu.edu/display/NYUHPC/Tutorial+-+Submitting+a+job+using+qsub, says:
[-c checkpoint_options]
    n          No checkpointing is to be performed.
    s          Checkpointing is to be performed only when the server executing the job is shutdown.
    c          Checkpointing is to be performed at the default minimum time for the server executing the job.
    c=minutes  Checkpointing is to be performed at an interval of minutes, which is the integer number of minutes of CPU time used by the job. This value must be greater than zero.
[-C directive_prefix] [-d path] [-D path] [-e path] [-f] [-h]
But from this I still cannot figure out how to set up checkpointing for when the job exceeds the maximum allowed time, which is 24 hours in my case. I want the job to be resubmitted every 24 hours, restarting each time from the checkpointed state. I am also not from NYU, so is there a different qsub syntax for specifying checkpointing?
This is what I wrote in my PBS script:
.....
#PBS -c c=1440 minutes
....
But it does not work.
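For what it's worth, the documentation quoted above suggests the interval form takes only the number of minutes, with no unit word, i.e. #PBS -c c=1440 rather than #PBS -c c=1440 minutes. And since the checkpointing itself is done inside job.m, a site-independent pattern is a wrapper script that keeps resubmitting itself until the work is finished. This is only a rough sketch: the script name job.pbs and the job.done marker (which job.m would write once it has truly finished) are made up.

#!/bin/bash
#PBS -N job
#PBS -l walltime=24:00:00
# checkpoint interval in minutes of CPU time, per the quoted docs (no "minutes" word)
#PBS -c c=1440

# run from the directory the job was submitted from
cd $PBS_O_WORKDIR

# job.m is assumed to resume from checkpoint.mat if it exists, save new checkpoints
# regularly, and exit cleanly before the 24-hour limit is hit
matlab -nodisplay -nosplash -r "job; exit"

# resubmit this script (assumed to live in the submission directory) until
# job.m signals completion by writing job.done
if [ ! -f job.done ]; then
    qsub job.pbs
fi

Note that the -c directive drives server-side checkpointing (if the site supports it at all), which is separate from the checkpoint.mat your MATLAB code writes, so the resubmission logic is what actually gets the work past the 24-hour limit.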

Related

How can I set the time of a grouped job in the Snakemake file?

I have a rule that I want to run many times, e.g.,
rule my_rule:
    input:
        expand(my_input, x=xs)
    output:
        expand(my_output, x=xs)
    threads: 1
    resources:
        time = T
    shell:
        '''
        {my_command}
        '''

rule run_all:
    input:
        expand(my_output, x=xs)
where T is an integer specifying the maximum number of minutes to allocate to this rule.
I want to parallelize this on SLURM and so run a command like
snakemake run_all --groups my_rule=my_rule --group-components my_rule=N --jobs 1 --profile SLURM
where N is an integer specifying the number of jobs to run in parallel.
Doing so asks SLURM for T*N minutes, but since the jobs run in parallel, all I want is T minutes.
At the moment I work around this by removing the time = T line and editing my ~/.config/snakemake/slurm/cluster_config.yaml file (setting time: T). But this is a pain to do and makes my pipeline less repeatable.
Is there a way to set the resources in the Snakemake file so that when grouping N copies of a rule that takes time T we only ask SLURM for T time? Or am I going about this the wrong way?
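One way to make that workaround more repeatable (just a suggestion, untested against your setup) is to keep the SLURM profile, including the cluster_config.yaml that carries the time: T setting, in a directory inside the pipeline's repository and point Snakemake at it by path, rather than relying on the global ~/.config/snakemake location. The directory name ./profiles/slurm below is made up:

snakemake run_all \
    --groups my_rule=my_rule --group-components my_rule=N \
    --jobs 1 \
    --profile ./profiles/slurm

This does not change how grouped resources are combined; it only lets the existing workaround travel with the Snakefile.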

How to generate batch files in a loop in Python (not a loop in a batch file) with slightly changed parameters per iteration

I am a researcher who needs to run files for a set of years on a SLURM system (a high performance computing center). The nodes available for long compute times have a long queue. I have 42 years to run, and because the data is many GB and the wait times are long, the only way to get my files processed quickly is to submit them individually, one batch file per year, as separate jobs. I cannot include multiple years in a single batch file, or I have to wait a week in the queue because of the amount of time I would have to reserve per batch file. This is the fastest way my university's system lets me run my data.
To do this, there are two lines in my batch script that I have to change every time: the name of the job, and the last line, which is the Python script name plus the parameter passed to it (the year), like so: pythonscript.py 2020.
I would like to generate the batch files with a Python or other script I can run, one that loops over a list of years, changes the job name to jobNameYEAR and the last line to pythonscript.py YEAR, writes that to a file jobNameYEAR.sl, and then continues in the loop to output the next batch file. ...Even better if it can write the batch file and submit the job (sjob jobNameYEAR) before continuing in the loop, but I realize maybe this is asking too much. But separately...
Is there a way to submit jobs in a loop once these files are created? E.g. loop through the year list and submit sjob jobName2000.sl, sjob jobName2001.sl, sjob jobName2002.sl
I do not want a loop in the batch file changing the variable, this would mean reserving too many hours on the SLURM system for a single job. I want a loop outside of the batch file that generates multiple batch files I can submit as jobs.
Thank you for your help!
This is what one of my .sl files looks like, it works fine, I just want to generate these files in a loop so I can stop editing them by hand:
#!/bin/bash -l
# The -l above is required to get the full environment with modules
# Set the allocation to be charged for this job
# not required if you have set a default allocation
#SBATCH -A MYFOLDER
# The name of the job
#SBATCH -J jobNameYEAR
# Wall-clock time limit for this job
#SBATCH -t 3:00:00
# Tasks, cores and memory for this job
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=6
#SBATCH --mem=30GB
# Job partition
#SBATCH -p main
# load the anaconda module
ml PDC/21.11
ml Anaconda3/2021.05
conda activate myEnv
python pythonfilename.py YEAR
Create a script with the following content (let's call it chainsubmit.sh):
#!/bin/bash
# usage: ./chainsubmit.sh script.slurm arg1 arg2 ...
SCRIPT=${1?Usage: $0 script.slurm arg1 arg2 ...}
shift

# submit the first job and capture its job ID (--parsable prints just the ID)
ARG=$1
ID=$(sbatch --job-name=jobName$ARG --parsable $SCRIPT $ARG)
shift

# submit the remaining jobs, each one starting only after the previous one succeeds
for ARG in "$@"; do
    ID=$(sbatch --job-name=jobName$ARG --parsable --dependency=afterok:${ID%%;*} $SCRIPT $ARG)
done
Then, adapt your script so that the last line
python pythonfilename.py YEAR
is replaced with
python pythonfilename.py $1
Finally submit all the jobs with
./chainsubmit.sh jobName.sl {2000..2004}
for instance, for YEAR ranging from 2000 to 2004.
… script I can run, where it loops over a list of years and just changes the job name to jobNameYEAR and changes the last line to pythonscript.py YEAR, writes that to a file jobNameYEAR.sl… submit the job (sjob jobNameYEAR) before continuing in the loop…
It can easily be done with a few shell commands and sed. Assume you have a template file jobNameYEAR.sl as shown, which literally contains jobNameYEAR and YEAR as placeholders. Then we can substitute YEAR with each given year in the loop, e.g.
seq 2000 2002 | while read year
do
    <jobNameYEAR.sl sed "s/YEAR$/$year/" >"jobName$year.sl"
    sjob "jobName$year.sl"
done
If your years aren't in sequence, we can use e.g. echo 1962 1965 1970 instead of seq ….
Other variants are also possible on Linux, like for year in {2000..2002} instead of seq 2000 2002 | while read year, and using envsubst instead of sed.
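For illustration, the envsubst variant (envsubst ships with GNU gettext) could look like the following, assuming the placeholders in the template are changed from the literal word YEAR to the shell-style ${YEAR}, which is the only form envsubst substitutes:

for year in {2000..2002}; do
    # substitute only ${YEAR}, leaving any other $-signs in the template alone
    YEAR=$year envsubst '${YEAR}' < jobNameYEAR.sl > "jobName$year.sl"
    sjob "jobName$year.sl"
done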

How to determine job array size for lots of jobs?

What is the best way to process lots of files in parallel via Slurm?
I have a lot of files (let's say 10000) in a folder, and each one takes about 10 seconds to process. Naturally I want to submit a job array over all of them (#SBATCH --array=1-10000%100), but it seems I can't have more than a certain number of array tasks (probably 1000). How do you handle job array sizes? Since each file takes so little time, I think I should have one job process multiple files rather than a single file, right?
Thank you
If the processing time is 10 seconds, you should consider packing the tasks into a single job, both because such short jobs take longer to schedule than to run and because there is a limit on the number of jobs in an array.
Your submission script could look like this:
#!/bin/bash
#SBATCH --ntasks=16 # or any other number depending on the size of the cluster and the maximum allowed wall time
#SBATCH --mem-per-cpu=...
#SBATCH --time=... # based on the number of files and number of tasks
find . -name file_pattern -print0 | xargs -I{} -0 -P $SLURM_NTASKS srun -n1 -c1 --exclusive name_of_the_program {}
Make sure to replace all the ... and file_pattern and name_of_the_program with appropriate values.
The script will look for all files matching file_pattern in the submission directory and run the name_of_the_program program on each of them, limiting the number of concurrent instances to the number of CPUs (more precisely, the number of tasks) requested. Note the use of --exclusive here, which is specific to this use case and has been deprecated in favour of --exact in recent Slurm versions.
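If you do want to stay with a job array, the idea from the question of giving each array task a slice of the file list is also workable. This is only a sketch, reusing the file_pattern and name_of_the_program placeholders from above and assuming 100 array tasks share the roughly 10000 files:

#!/bin/bash
#SBATCH --array=0-99          # 100 array tasks instead of one per file
#SBATCH --time=...            # enough for roughly 100 files per task
#SBATCH --mem-per-cpu=...

# each array task processes a contiguous slice of the sorted file list
mapfile -t FILES < <(find . -name 'file_pattern' | sort)
PER_TASK=$(( (${#FILES[@]} + 99) / 100 ))     # ceiling division
START=$(( SLURM_ARRAY_TASK_ID * PER_TASK ))

for f in "${FILES[@]:START:PER_TASK}"; do
    name_of_the_program "$f"
done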

How to time a SLURM job array?

I am submitting a SLURM job array and want to have the total runtime (i.e. not the runtime of each task) printed to the log.
This is what I tried:
#!/bin/bash
#SBATCH --job-name=step1
#SBATCH --output=logs/step1.log
#SBATCH --error=logs/step1.log
#SBATCH --array=0-263%75
start=$SECONDS
python worker.py ${SLURM_ARRAY_TASK_ID}
echo "Completed step1 in $SECONDS seconds"
What I get in step1.log is something like this:
Completed step1 in 42 seconds
Completed step1 in 94 seconds
Completed step1 in 88 seconds
...
which appear to be the runtimes of the last group of tasks in the array. I want a single timer for the whole array, from submission to the end of the last task. Is that possible?
With job arrays, each task is an identical submission of your script, so the way you're measuring time will necessarily only be per-task, as you're seeing. To get the overall elapsed time of the entire job array, you'll need to get the submit time of the first task and subtract it from the end time of the last task.
e.g.
# get submit time for first task in array
sacct -j <job_id>_0 --format=submit
# get end time for last task in array
sacct -j <job_id>_263 --format=end
Then use date -d <timestamp from sacct> +%s to convert the timestamps to seconds since the epoch, to make them easier to subtract.
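Put together, a small helper script could look like this (a sketch; the name array_elapsed.sh is made up, and 263 is the last task index from the array above):

#!/bin/bash
# usage: ./array_elapsed.sh <job_id>
jobid=$1
# submit time of the first task and end time of the last task
submit=$(sacct -j "${jobid}_0"   --format=Submit --noheader -X | head -n1)
end=$(sacct    -j "${jobid}_263" --format=End    --noheader -X | head -n1)
# convert both timestamps to seconds since the epoch and subtract
echo "array elapsed: $(( $(date -d "$end" +%s) - $(date -d "$submit" +%s) )) seconds"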
Also note that each of your 264 tasks will overwrite step1.log with its own output. I would typically use #SBATCH --output=step1-%A_%a.out to distinguish outputs from different tasks.
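Applied to the script above, the header might then become (keeping the logs/ directory from the original):

#SBATCH --job-name=step1
#SBATCH --output=logs/step1-%A_%a.out
#SBATCH --error=logs/step1-%A_%a.out
#SBATCH --array=0-263%75

where %A expands to the array's master job ID and %a to the individual task ID.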

Slurm does not allocate the resources and keeps waiting

I'm trying to use our cluster but I have issues. I tried allocating some resources with:
salloc -N 1 --ntasks-per-node=5 bash
but it keeps waiting at:
salloc: Pending job allocation ...
salloc: job ... queued and waiting for resources
or when I try:
srun -N1 -l echo test
it also lingers in the waiting queue!
Am I making a mistake, or is there something wrong with our cluster?
It might help to set a time limit for the Slurm job using the --time option, for instance a limit of 10 minutes like this:
srun --job-name="myJob" --ntasks=4 --nodes=2 --time=00:10:00 --label echo test
Without a time limit, Slurm will use the partition's default time limit. The issue is that this is sometimes set to infinity or to several days, which can delay the start of the job. To check the partition's default time limit, use:
$ sinfo
PARTITION AVAIL TIMELIMIT NODES STATE NODELIST
prod* up infinite 198 ....
gpu* up 4-00:00:00 70 ....
From the Slurm docs:
-t, --time=<time>
Set a limit on the total run time of the job allocation. If the requested time limit exceeds the partition's time limit, the job will be left in a PENDING state (possibly indefinitely). The default time limit is the partition's default time limit. When the time limit is reached, each task in each job step is sent SIGTERM followed by SIGKILL.
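Applied to the salloc call from the question, that would be, for example:

salloc -N 1 --ntasks-per-node=5 --time=00:10:00 bash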
