SGE qsub: define variables using bash?

I am trying to set up several variables for the SGE system automatically, but with no luck:
#!/bin/bash
myname="test"
totaltask=10
#$ -N $myname
#$ -cwd
#$ -t 1-$totaltask
Apparently $myname is not recognized. Any solution?
Thanks a lot.

Consider making a wrapper script. The #$ lines are comments as far as bash is concerned; qsub parses them literally, so shell variables in them are never expanded. Keep the fixed directives in the job script and pass the variable parts on the command line instead:
qsub_script.sh
#!/bin/bash
#$ -V
#$ -cwd
wrapper_script.sh
#!/bin/bash
myname="test"
totaltask=10
qsub -N "${myname}" -t "1-${totaltask}" qsub_script.sh
Note that the options must come before the script name; anything after it is passed to the script as arguments. Options given on the command line also override the corresponding #$ directives.
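Another option, if your qsub accepts the job script on standard input, is a heredoc, so the shell expands the variables before qsub parses the directives (a sketch; do_work.sh is a placeholder, and note that every $ inside the heredoc is expanded by the submitting shell):
myname="test"
totaltask=10
qsub <<EOF
#!/bin/bash
#$ -N ${myname}
#$ -cwd
#$ -t 1-${totaltask}
./do_work.sh
EOF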

Bash unable to assign variable

#!/usr/bin/env bash
#SBATCH --partition=standard
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=20
#SBATCH --mem=100G
USAGE="metascript.sh <wd> <wd1>"
source ~/anaconda2/etc/profile.d/conda.sh
conda activate assembly
wd=$1
wd1=$2
cd $wd
cd $wd1
for f in SRR*/ ; do
[[ -e $f ]] || continue
SRR=${f::-1}
cd ../..
jdid=$(sbatch -J FirstQC_$SRR ./pipelines/preprocessingbowtietrinity/FirstFastqc.sh $wd $wd1 $SRR)
#echo ${jdid[0]}|grep -o '[0-9]\+'
jobid=${jdid[0]}
jobid1=${jobid[0]}|grep -o '[0-9]\+'
#echo $jobid1
Hi all, I am having issues with my bash scripting: I can print the line ${jdid[0]}|grep -o '[0-9]\+', however when I assign it to a variable it returns nothing.
If the idea is to extract just the job ID from the output of sbatch, you can also use sbatch's --parsable argument; see the documentation.
jdid=$(sbatch --parsable -J FirstQC_$SRR ./pipelines/preprocessingbowtietrinity/FirstFastqc.sh $wd $wd1 $SRR)
and jdid will then contain only the job ID, provided the cluster is not part of a federation.
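If the cluster is part of a federation, --parsable prints the job ID and the cluster name separated by a semicolon, so the ID can still be recovered with a parameter expansion:
jobid=${jdid%%;*}   # strip everything after the first ';' (a no-op when there is none)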
jobid1=${jobid[0]}|grep -o '[0-9]\+'
I can print the line ${jdid[0]}|grep -o '[0-9]\+', however when I assign it to a variable it returns nothing.
In order to assign the output of a pipeline to a variable or insert it into a command line for any other purpose, you have Command Substitution at hand:
jobid1=$(echo "${jobid[0]}" | grep -o '[0-9]\+')
Of course, with bash this is better written as:
jobid1=$(grep -o '[0-9]\+' <<< "${jobid[0]}")
If the issue is printing the line ${jdid[0]}|grep -o '[0-9]\+' literally, as in your question:
just put the line in double quotation marks and it will work out.
Here is a little test I made:
jobid1="{jobid[0]}|grep -o '[0-9]\+'"
echo $jobid1
The output is {jobid[0]}|grep -o '[0-9]\+'

How to save the job results to the same folder as the job.sh file in cluster?

To run a job on the cluster, I need to submit a job.sh file, in which one of the parameters to set is the working directory, #$ -wd /path/to/save/the/result/.
I have different job.sh files in different directories. All the job.sh files are almost identical, except that I have to change /path/to/save/the/result/ to the directory where each job.sh is located, so that the results are saved in the same place as the job.sh.
As I have many job.sh files in different directories, it takes a lot of time to define each /path/to/save/the/result/ individually.
If I use #$ -cwd, the results are saved wherever I launch job.sh from, which is also not good: I have to cd /path/to/save/the/result/ and then qsub job.sh every time.
So is there a way to replace the different /path/to/save/the/result/ paths with a single variable that always points to the directory of the job.sh file?
An example of my current job.sh is:
#!/bin/bash -l
#$ -S /bin/bash
#$ -l h_rt=00:30:0
#$ -l mem=2G
#$ -l tmpfs=15G
#$ -N md_0_1
#$ -pe mpi 4
#$ -wd /path/to/save/the/result/
module unload compilers mpi
module load compilers/intel/2015/update2
module load mpi/intel/2015/update3/intel
module load gromacs/5.1.1/intel-2015-update2
gmx mdrun -deffnm md_0_1
I typically use current_directory="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )", which gets the directory the script is located in.
Then you can write to $current_directory/output.csv or whatever your output file should be.
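Applied to the job above, that could look like the following sketch (the log file name is a placeholder; note that some clusters spool the submitted script, in which case BASH_SOURCE points at the spool copy rather than the original file):
current_directory="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
gmx mdrun -deffnm md_0_1 > "${current_directory}/md_0_1.log" 2>&1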
You can use dirname $0 to get the location of the script job.sh.
EDIT:
For the OP's job.sh, one can group the commands and send the output to a logfile, something like this:
$ cat job.sh
#!/bin/bash
scriptDir="$(dirname "$0")"
# Group the commands and send the output to a logfile
{
module unload compilers mpi
module load compilers/intel/2015/update2
module load mpi/intel/2015/update3/intel
module load gromacs/5.1.1/intel-2015-update2
gmx mdrun -deffnm md_0_1
} &>> "${scriptDir}/logfilename.log"
Run the script:
$ qsub /path/to/job.sh
This should create ${scriptDir}/logfilename.log as long as the path is accessible to the cluster.
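Alternatively, since command-line options override the embedded #$ directives, the working directory can be computed at submission time, for example (a sketch; the glob is a placeholder for wherever the job.sh files live):
for job in /path/to/experiments/*/job.sh; do
    qsub -wd "$(cd "$(dirname "$job")" && pwd)" "$job"
done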

Shared Library Error in SGE qsub script

I am trying to run a parallel job on a cluster, but every time I do, it gives me the error "error while loading shared libraries: liblammps.so: cannot open shared object file: No such file or directory". I know that I need to export the library path, i.e. export LD_LIBRARY_PATH=/path/to/library, and when I do that locally and then run the program, everything is fine. It's only when I submit the job to the cluster that I run into issues. My script looks like this:
#!/bin/bash
LD_LIBRARY_PATH=/path/to/library
for i in 1 2 3
do
qsub -l h_rt=43200 -l mem=1G -pe single 1 -cwd ./Project-Serial.sh
qsub -l h_rt=43200 -l mem=1G -pe mpi-spread 2 -cwd ./Project-MPI.sh 2
qsub -l h_rt=28800 -l mem=1G -pe mpi-spread 4 -cwd ./Project-MPI.sh 4
qsub -l h_rt=19200 -l mem=1G -pe mpi-spread 8 -cwd ./Project-MPI.sh 8
qsub -l h_rt=12800 -l mem=1G -pe mpi-spread 16 -cwd ./Project-MPI.sh 16
qsub -l h_rt=8540 -l mem=1G -pe mpi-spread 32 -cwd ./Project-MPI.sh 32
done
I'm not sure whether I am simply setting the path in the wrong place, or whether there is some other way to use the library? Any help is appreciated.
You can use qsub -v variable_list to pass specific environment variables, or qsub -V to pass the full environment. From there, you may need to use mpiexec -x if the sub-tasks rely on the library file (or -env for mpich).
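For example (paths and resource values are the ones from the question; -V forwards the whole submitting environment, -v only the named variable):
export LD_LIBRARY_PATH=/path/to/library
qsub -V -l h_rt=43200 -l mem=1G -pe single 1 -cwd ./Project-Serial.sh
# or forward just the one variable:
qsub -v LD_LIBRARY_PATH=/path/to/library -l h_rt=43200 -l mem=1G -pe mpi-spread 2 -cwd ./Project-MPI.sh 2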

Does qsub pass command line arguments to my script?

When I submit a job using
qsub script.sh
is $# set to some value inside script.sh? That is, are there any command line arguments passed to script.sh?
You can pass arguments to the job script using the -F option of qsub:
qsub -F "args to script" script.sh
or inside script.sh:
#PBS -F arguments
This is documented in the Torque qsub documentation.
On my platform -F is not available. As a substitute, -v helped:
qsub -v "var=value" script.csh
And then use the variable var in your script.
See also the documentation.
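A minimal sketch of the receiving side (written in bash; the variable arrives via the environment, so the same idea works for the csh script above):
#!/bin/bash
#$ -S /bin/bash
# var was injected at submission time with: qsub -v "var=value" script.sh
echo "var is: ${var}"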
No. I just tried to submit a script with arguments before answering, and qsub won't accept it.
This won't be as convenient as putting arguments on the command line, but you could possibly set some environment variables which you can have Torque export to the job with -v [var name] or -V.

Error while using -N option with qsub

I tried to use qsub -N "compile-$*" in a Makefile, and because the name evaluates to "compile-obj/linux/flow" in this case, it gives the following error:
qsub: ERROR! argument to -N option must not contain /
The whole command I am using is:
qsub -P bnormal -N "compile-obj/linux/flow" -cwd -now no -b y -l cputype=amd64 -sync yes -S /bin/sh -e /remote//qsub_files/ -o /remote/qsub_files/
Any idea how to include a slash in the name while running qsub?
Thanks.
I'm not familiar with qsub, but make just executes whatever command you supply it, so I suspect you have constructed an illegal qsub command.
Maybe the Automatic-Variables section of the GNU make manual can help you too.
Adding the whole rule to the question would help.
I resolved the problem by manipulating the name passed to the -N option, replacing / with -. It works for me. Thanks.
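In GNU make, the same substitution can be done inline with $(subst ...), so the rule itself produces a legal job name, for example (a sketch; build.sh and the rule layout are hypothetical):
compile-%:
	qsub -P bnormal -N "compile-$(subst /,-,$*)" -cwd -now no -b y -sync yes -S /bin/sh ./build.sh $*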
