Best way to automatically create different process names for qsub

I am running my program on a high-performance computer, usually with different parameters as input. Those parameters are given to the program via a parameter file, i.e. the qsub-file looks like
#!/bin/bash
#PBS -N <job-name>
#PBS -A <name>
#PBS -l select=1:ncpus=20:mpiprocs=20
#PBS -l walltime=80:00:00
#PBS -M <mail-address>
#PBS -m bea
module load foss
cd $PBS_O_WORKDIR
mpirun main parameters.prm
# Append the job statistics to the std out file
qstat -f $PBS_JOBID
Usually I run the same program multiple times, more or less simultaneously, each with a different .prm file. Nevertheless, the jobs all show up in the job list under the same name, which makes correlating a job in the list with its parameters difficult (though not impossible).
Is there a way to change the job's name in the job list dynamically, depending on the input parameters (ideally from within main)? Or is there another way to change the job name without having to edit the job file every time I run
qsub job_script.pbs
?
Would a solution be to write a shell script that reads data from the parameter file, generates the job script, and submits it? Or is there an easier way?

Simply use the -N option on the command line:
qsub -N job1 job_script.pbs
You can then use a for loop to iterate over the *.prm files, passing each file to the job script via an environment variable:
for prm in *.prm
do
    prmbase=$(basename "$prm" .prm)
    qsub -N "$prmbase" -v prmfile="$prm" job_script.pbs
done
This names each job after its parameter file, sans the .prm suffix. Inside job_script.pbs, run the program as mpirun main "$prmfile".
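A quick way to sanity-check such a loop is a dry run that only prints the generated qsub commands; a sketch (the file names and the prmfile variable are illustrative):

```shell
# Dry run: build the qsub command for each parameter file instead of
# submitting it. alpha.prm/beta.prm stand in for real files; run the
# commands directly instead of collecting them to submit for real.
cmds=""
for prm in alpha.prm beta.prm; do
    prmbase=$(basename "$prm" .prm)          # strip the .prm suffix
    cmds+="qsub -N $prmbase -v prmfile=$prm job_script.pbs"$'\n'
done
printf '%s' "$cmds"
```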

Related

Calling Matlab on a Linux-based cluster: Matlab session stops before the m-file is completely executed

I'm running a bash script that submits some PBS jobs on a Linux-based cluster multiple times. Each submission calls Matlab, reads some data, performs calculations, and writes the results back to my directory.
This process works fine, with one exception: for some calculations the m-file starts, loads everything, then performs the calculation, but while printing the results to stdout the job terminates.
The PBS log file shows no error messages, and Matlab shows no error messages.
The code runs perfectly on my computer. I am out of ideas.
If anyone has an idea what I could try, I would appreciate it.
Thanks in advance
jbug
Edit:
Is there a way to force Matlab to reach the end of the file? Might that help?
Edit (18:00):
As requested in the comment below by HBHB, here is how Matlab is called by an external *.sh file:
#PBS -l nodes=1:ppn=2
#PBS -l pmem=1gb
#PBS -l mem=1gb
#PBS -l walltime=00:05:00
module load MATLAB/R2015b
cd $PBS_O_WORKDIR
matlab -nosplash -nodisplay -nojvm -r "addpath('./data/calc'); myFunc($a,$b); quit()"
Where $a and $b come from a loop within the calling bash file, and ./data/calc points to the directory where myFunc is located.
Edit (18:34): If I perform the calculation manually, everything runs fine. So the given data is fine, and the problem seems to narrow down to PBS?
Edit (21:27): I put an until loop around the Matlab call that checks whether Matlab returned the desired data; if not, it restarts Matlab after some delay. But still: Matlab stops while printing the result (some matrices) after the calculation finishes, and the job even completes. The checking part of the restart loop is never reached.
What I don't understand: the job stays in the queue, as I planned with the small delay, so the sleep $w should be executed? But when I check the error files, they just show me the frozen Matlab in its first round, recognizable by $i. Here is that part of the code; maybe you can help me:
# $w = delay (in seconds) between retries
i=1
until [[ -e ./temp/$b/As$a && -e ./temp/$b/Bs$a && -e ./temp/$b/Cs$a && -e ./temp/$b/lamb$a ]]
do
    echo $i
    matlab -nosplash -nodisplay -nojvm -r "addpath('./data/calc'); myFunc($a,$b); quit()"
    sleep $w
    ((i=i+1))
done
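As an aside, such a retry loop can be bounded so a hung Matlab does not burn the whole walltime; a minimal sketch (the flag file and try count are illustrative, and `true` stands in for the Matlab call):

```shell
# Retry at most max_tries times instead of looping until the walltime
# kills the job. ./result.flag stands in for the real output files.
max_tries=3
i=1
until [ -e ./result.flag ] || [ "$i" -gt "$max_tries" ]; do
    # the matlab invocation would go here; `true` stands in for it
    true
    i=$((i+1))
done
attempts=$((i-1))
echo "stopped after $attempts attempts"
```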
You are most likely choking your matlab process with limited memory. Your PBS file:
#PBS -l nodes=1:ppn=2
#PBS -l pmem=1gb
#PBS -l mem=1gb
#PBS -l walltime=00:05:00
You are setting your physical memory limit to 1GB. Matlab on its own, without loading any files, uses around 900MB of virtual memory. Try:
#PBS -l nodes=1:ppn=1
#PBS -l pvmem=5gb
#PBS -l walltime=00:05:00
Additionally, this is something you should contact your local system administrators about. Without system logs, I can't tell you for sure why your job is cutting short (my guess is resource limits). As the SA of an HPC center, I can tell you that your administrators would be able to tell you in about five minutes why your job is not working correctly. Different HPC centers also use different PBS configurations, so mem might not even be recognized; this is something your local administrators can help you with much better than Stack Overflow.

Redirect output of my java program under qsub

I am currently running multiple Java executable program using qsub.
I wrote two scripts: 1) qsub.sh, 2) run.sh
qsub.sh
#! /bin/bash
echo cd `pwd` \; "$@" | qsub
run.sh
#! /bin/bash
for param in 1 2 3
do
./qsub.sh java -jar myProgram.jar -param ${param}
done
Given the two scripts above, I submit jobs by
sh run.sh
I want to redirect the messages generated by myProgram.jar -param ${param}.
So in run.sh, I replaced the 4th line with the following:
./qsub.sh java -jar myProgram.jar -param ${param} > output-${param}.txt
but the message stored in output-${param}.txt is "Your job 730 ("STDIN") has been submitted", which is not what I intended.
I know that qsub has an option -o for specifying the location of output, but I cannot figure out how to use this option for my case.
Can anyone help me?
Thanks in advance.
The issue is that qsub doesn't return the output of your job, it returns the output of the qsub command itself, which is simply informing your resource manager / scheduler that you want that job to run.
You want to use the qsub -o option, but you need to remember that the output won't appear there until the job has run to completion. For Torque, you'd use qstat to check the status of your job, and all other resource managers / schedulers have similar commands.

Torque nested/successive qsub call

I have a job script compile.pbs which runs on a single CPU and compiles source code to create an executable. I then have a second job script jobscript.pbs which I submit with 32 CPUs to run that newly created executable with MPI. They both work perfectly when I call them manually in succession, but I would like to automate the process by having the first script submit the second just before it ends. Is there a way to properly nest qsub calls, or to have them be called in succession?
Currently my attempt is to have the first script call the 2nd script right before it ends, but when I try that I get a strange error message from the 2nd (nested) qsub:
qsub: Bad UID for job execution MSG=ruserok failed validating masterhd/masterhd from s59-16.local
I think the 2nd script is being called properly, but maybe the permissions are not the same as when I called the original one. Obviously my user name masterhd is allowed to run the jobscripts because it works fine when I call the jobscript manually. Is there a way to accomplish what I am trying to do?
Here is a more detailed example of the procedure. First I call the first jobscript and specify a variable with -v:
qsub -v outpath='/home/dest_folder/' compile.pbs
That outpath variable specifies where to copy the new executable; the first job script then changes to that output directory and attempts to submit jobscript.pbs.
compile.pbs:
#!/bin/bash
#PBS -N compile
#PBS -l walltime=0:05:00
#PBS -j oe
#PBS -o ocompile.txt
#Perform compiling stuff:
module load gcc-openmpi-1.2.7
rm *.o
make -f Makefile
#Copy the executable to the destination:
cp visct ${outpath}/visct
#Change to the output path before calling the next jobscript:
cd ${outpath}
qsub jobscript.pbs
jobscript.pbs:
#!/bin/bash
#PBS -N run_exe
#PBS -l nodes=32
#PBS -l walltime=96:00:00
#PBS -j oe
#PBS -o results.txt
cd $PBS_O_WORKDIR
module load gcc-openmpi-1.2.7
time mpiexec visct
You could make a submission script that qsubs both jobs, but makes the second execute only if, and after, the first completes without errors:
JOB1OUT=$(qsub -v outpath='/home/dest_folder/' compile.pbs)
JOB1ID=${JOB1OUT%%.*} # parse to get the job id; adjust for your site's id format
qsub -W depend=afterok:$JOB1ID jobscript.pbs
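The job-id parsing is worth a note: Torque's qsub prints an id such as 12345.server.domain, while -W depend= wants the id itself. A sketch of the parameter expansion (the id string here is made up):

```shell
# ${var%%pattern} strips the longest suffix matching the pattern, so
# everything from the first dot onward is removed.
out="12345.master.example.com"   # stands in for real qsub output
jobid=${out%%.*}
echo "$jobid"                    # prints 12345
```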
There may also be restrictions on your system against running scripts inside scripts. Note that your first job runs for only 5 minutes while the second needs 96 hours; if the second job's work were done inside the first job, it would violate the first job's time limit.
Why can't you just put the compile part at the beginning of the second script?

Can I delete a shell script after it has been submitted using qsub without affecting the job?

I want to submit a bunch of jobs using qsub - the jobs are all very similar. I have a script with a loop; in each iteration it overwrites a file tmpjob.sh and then runs qsub tmpjob.sh. Before the job has had a chance to run, tmpjob.sh may have been overwritten by the next iteration of the loop. Is another copy of tmpjob.sh stored while the job is waiting to run, or do I need to be careful not to change tmpjob.sh before the job has begun?
Assuming you're talking about Torque, then yes: Torque reads in the script at submission time. In fact, the submission script need never exist as a file at all; as shown in the Torque documentation, you can pipe commands into qsub (from the docs: cat pbs.cmd | qsub).
But several other batch systems (SGE/OGE, PBS PRO) use qsub as a queue submission command, so you'll have to tell us what queuing system you're using to be sure.
Yes. You can even create jobs and sub-jobs with HERE Documents. Below is an example of a test I was doing with a script initiated by a cron job:
#!/bin/env bash
printenv
qsub -N testCron -l nodes=1:vortex:compute -l walltime=1:00:00 <<QSUB
cd \$PBS_O_WORKDIR
printenv
qsub -N testsubCron -l nodes=1:vortex:compute -l walltime=1:00:00 <<QSUBNEST
cd \$PBS_O_WORKDIR
pwd
date -Isec
QSUBNEST
QSUB

R programming - submitting jobs on a multiple node linux cluster using PBS

I am running R on a multiple node Linux cluster. I would like to run my analysis on R using scripts or batch mode without using parallel computing software such as MPI or snow.
I know this can be done by dividing the input data such that each node runs different parts of the data.
My question is how do I go about this exactly? I am not sure how I should code my scripts. An example would be very helpful!
I have been running my scripts so far using PBS but it only seems to run on one node as R is a single thread program. Hence, I need to figure out how to adjust my code so it distributes labor to all of the nodes.
Here is what I have been doing so far:
1) Command line:
qsub myjobs.pbs
2) myjobs.pbs:
#!/bin/sh
#PBS -l nodes=6:ppn=2
#PBS -l walltime=00:05:00
#PBS -l arch=x86_64

pbsdsh -v $PBS_O_WORKDIR/myscript.sh
3) myscript.sh:
#!/bin/sh
cd $PBS_O_WORKDIR
R CMD BATCH --no-save my_script.R
4) my_script.R:
library(survival)
...
write.table(test, "TESTER.csv", sep=",", row.names=F, quote=F)
Any suggestions will be appreciated! Thank you!
-CC
This is really more of a PBS question. What I usually do is make an R script (with the Rscript path after #!) and have it read a parameter (via the commandArgs function) that controls which part of the job the current instance should handle. Because I use multicore a lot, I usually need only 3-4 nodes, so I just submit a few jobs calling this R script, each with one of the possible control-argument values.
On the other hand, your use of pbsdsh should do the job; the value of PBS_TASKNUM can then be used as the control parameter.
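A sketch of how myscript.sh might turn that task number into its slice of the input (the file-naming scheme is purely illustrative):

```shell
# pbsdsh sets PBS_TASKNUM in each spawned copy; default to 0 so the
# script can also be tested outside pbsdsh.
task=${PBS_TASKNUM:-0}
infile=$(printf 'inputdata%02d.dat' "$task")
echo "$infile"
# R CMD BATCH --no-save "--args $infile" my_script.R   # the real work
```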
This was an answer to a related question - but it's an answer to the comment above (as well).
For most of our work we do run multiple R sessions in parallel using qsub (instead).
If it is for multiple files I normally do:
while read infile rest
do
qsub -v infile=$infile call_r.pbs
done < list_of_infiles.txt
call_r.pbs:
...
R --vanilla -f analyse_file.R $infile
...
analyse_file.R:
args <- commandArgs()
infile <- args[5]
outfile <- paste(infile, ".out", sep="")
...
Then I combine all the output afterwards...
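The final combine step can be as simple as concatenating the per-file outputs in name order; a sketch with illustrative names:

```shell
# Each job wrote <infile>.out next to its input; cat merges them.
mkdir -p combine_demo && cd combine_demo
echo "result A" > a.txt.out      # stand-ins for real job outputs
echo "result B" > b.txt.out
cat *.out > combined.txt
cd ..
```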
This problem seems very well suited to GNU parallel, which has an excellent tutorial. I'm not familiar with pbsdsh, and I'm new to HPC, but it looks to me like pbsdsh serves a similar purpose to GNU parallel. I'm also not familiar with launching R from the command line with arguments, but here is my guess at how your PBS file would look:
#!/bin/sh
#PBS -l nodes=6:ppn=2
#PBS -l walltime=00:05:00
#PBS -l arch=x86_64
...
parallel -j2 --wd $PBS_O_WORKDIR --sshloginfile $PBS_NODEFILE \
    Rscript myscript.R {} :::: infilelist.txt
where infilelist.txt lists the data files you want to process, e.g.:
inputdata01.dat
inputdata02.dat
...
inputdata12.dat
Your myscript.R would access the command line argument to load and process the specified input file.
My main purpose with this answer is to point out the availability of GNU parallel, which came about after the original question was posted. Hopefully someone else can provide a more tangible example. Also, I am still wobbly in my usage of parallel; for example, I'm unsure of the -j2 option. (See my related question.)
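Generating infilelist.txt can itself be scripted; a sketch with stand-in files:

```shell
# Create dummy input files and list them for parallel to consume.
mkdir -p listdemo && cd listdemo
touch inputdata01.dat inputdata02.dat
ls inputdata*.dat > infilelist.txt
cd ..
```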
