Does qsub pass command line arguments to my script? - pbs

When I submit a job using
qsub script.sh
is $# set to some value inside script.sh? That is, are any command-line arguments passed to script.sh?

You can pass arguments to the job script using the -F option of qsub:
qsub -F "args to script" script.sh
or inside script.sh:
#PBS -F arguments
This is documented here.
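A minimal sketch of how the -F arguments arrive inside the job; qsub itself isn't needed to illustrate it, so `set --` stands in for what -F would provide (the values alpha and beta are placeholders):

```shell
#!/bin/sh
# Inside the job, arguments from: qsub -F "alpha beta" script.sh
# show up as ordinary positional parameters. Simulated here with set --.
set -- alpha beta
echo "argument count: $#"
echo "first argument: $1"
```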

On my platform the -F option is not available. As a substitute, -v helped:
qsub -v "var=value" script.csh
And then use the variable var in your script.
See also the documentation.
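A small sketch of the -v workaround (the variable name `var` and its value are placeholders): locally, `env` can stand in for the injection that qsub -v performs into the job's environment:

```shell
#!/bin/sh
# qsub -v "var=value" script.sh places var in the job's environment;
# env simulates that injection here.
env var=value sh -c 'echo "var is: $var"'
```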

No. Just tried to submit a script with arguments before I answered and qsub won't accept it.
This won't be as convenient as putting arguments on the command line, but you could possibly set some environment variables which you can have Torque export to the job with -v [var name] or -V.

Related

Executing `sh -c` in a bash script

I have a test.sh file which takes a bash command as a parameter; it does some logic, i.e. setting and checking some env vars, and then executes that input command.
#!/bin/bash
# Some other logic here
echo "Run command: $@"
eval "$@"
When I run it, here's the output
% ./test.sh echo "ok"
Run command: echo ok
ok
But the issue is, when I pass something like sh -c 'echo "ok"', I don't get the output.
% ./test.sh sh -c 'echo "ok"'
Run command: sh -c echo "ok"
%
So I tried replacing eval with exec, tried executing "$@" directly (without eval or exec), and even tried executing it and saving the output to a variable, still to no avail.
Is there any way to run the passed command in this format and get the output?
Use case:
The script is used as an entrypoint for the docker container, it receives the parameters from docker CMD and executes those to run the container.
As a quick fix I can remove the sh -c and pass the command without it, but I want to make the script reusable and avoid changing the commands.
TL;DR:
This is a typical use case (perform some business logic in a Docker entrypoint script before running a compound command, given at command line) and the recommended last line of the script is:
exec "$@"
Details
To further explain this line, some remarks and hyperlinks:
As per the Bash user manual, exec is a POSIX shell builtin that replaces the shell [with the command supplied] without creating a new process.
As a result, using exec like this in a Docker entrypoint context is important because it ensures that the CMD program that is executed will still have PID 1 and can directly handle signals, including that of docker stop (see also that other SO answer: Speed up docker-compose shutdown).
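The no-new-process behavior can be sketched outside Docker as well: exec replaces the shell in place, so nothing after it runs:

```shell
#!/bin/sh
echo "shell PID: $$"
# exec replaces this shell with echo; the line after it never runs
exec echo "replaced the shell in place"
echo "never printed"
```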
The double quotes ("$#") are also important to avoid word splitting (namely, ensure that each positional argument is passed as is, even if it contains spaces). See e.g.:
#!/usr/bin/env bash
printargs () { for arg; do echo "$arg"; done; }
test0 () {
  echo "test0:"
  printargs $@
}
test1 () {
  echo "test1:"
  printargs "$@"
}
test0 /bin/sh -c 'echo "ok"'
echo
test1 /bin/sh -c 'echo "ok"'
test0:
/bin/sh
-c
echo
"ok"
test1:
/bin/sh
-c
echo "ok"
Finally, eval is a powerful bash builtin that is (1) unneeded for your use case and (2) generally inadvisable, in particular for security reasons, e.g. if the string argument of eval relies on some user-provided input. For details on this issue, see e.g. https://mywiki.wooledge.org/BashFAQ/048 (which recaps the few situations where one would want to use this builtin, typically the command eval "$(ssh-agent -s)").

cmake add_custom_target with arguments

I would like to create a custom target in cmake that calls a bash script with arguments, i.e.
make foo, where there could be multiple arguments such as -t -v -p,
would call
source foo.sh -t -v -p
I am able to call the script and run it successfully, but it is unclear what the best way is to pass the arguments to it verbatim. Any hints?
I am currently using
add_custom_target( foo COMMAND bash -c "source foo.sh" )
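One possible pattern, as a sketch under assumptions: FOO_ARGS is a hypothetical cache variable fixed at configure time (e.g. cmake -DFOO_ARGS="-t -v -p" ..), not something that can be supplied on the `make foo` command line itself:

```cmake
# Hypothetical cache variable holding the arguments, set at configure
# time with e.g.:  cmake -DFOO_ARGS="-t -v -p" ..
set(FOO_ARGS "" CACHE STRING "Arguments forwarded to foo.sh")
add_custom_target(foo
  COMMAND bash -c "source foo.sh ${FOO_ARGS}"
  VERBATIM
)
```

Note that ${FOO_ARGS} is baked into the generated build system when CMake runs; passing arguments at make invocation time would need a different mechanism, such as reading an environment variable inside foo.sh.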

sudo command behaviour with quotes

I need your help in understanding this behaviour of sudo.
sudo -s -- 'ls -l' works, but sudo 'ls -l' throws an error saying
sudo: ls -l: command not found. I realize it treats the entire quoted string (including the spaces) as a single command name, but what I don't get is why it works fine with the -s flag and fails when -s is not there.
Without -s, the first argument is the name of the command to execute. With -s, the first argument is a string passed to the -c option of whatever shell ($SHELL or your system shell) is used to execute the argument.
That is, assuming $SHELL is sh, the following are equivalent:
sudo -s -- 'ls -l'
sudo -- sh -c 'ls -l'
From the sudo man page:
-s [command]
The -s (shell) option runs the shell specified by the SHELL environment variable if it is set, or the shell as specified in the password database. If a command is specified, it is passed to the shell for execution via the shell's -c option. If no command is specified, an interactive shell is executed.
It behaves like it does because a new shell is spawned which breaks up the words in your "quoted command" like shells do.
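The difference can be sketched without sudo at all: to the caller, a quoted string is a single word, but a shell's -c option re-splits it:

```shell
#!/bin/sh
# 'ls -l' as one argument: no program with that literal name exists
command -v 'ls -l' >/dev/null 2>&1 || echo "no command named 'ls -l'"
# the same string handed to a shell's -c gets word-split and runs fine
sh -c 'ls -l' >/dev/null 2>&1 && echo "sh -c ran it"
```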

How to pass an argument to a job and keep it unchanged in parallel fashion

I am trying to execute a series of jobs in different directories. I want to pass the directory as an input argument to the job. So far I understand that I can use an environment variable to send an argument to a job. But the problem is that since jobs run in parallel, the last value of this variable will be used for all jobs. Let's look at my code:
for i in "${arr[@]}"
do
export dir=$i
qsub myBashFile.sh
done
and in my job I used the variable dir to do some operations. I want each job to execute with its own input parameter.
Edit: this is my job
#!/bin/sh
#
#
#PBS -N Brownie
#PBS -o test.output.txt
#PBS -e test.error.txt
#PBS -l walltime=2:00:00
#PBS -m n
#PBS -V dir
cd $dir
./run_mycode.sh
I know this is not correct, but I am looking for an alternative way to keep the value of dir unchanged and unique for each job independently.
I also tried to modify a variable in job file with sed command like below:
sed "s/dir/"'$i'"/g" my_job.sh > alljobs/my_jobNew.sh
but instead of inserting the actual value of $i, dir is replaced with the literal string $i, which is meaningless in my_job.sh.
Have you tried passing the directory as command_args as explained in the manpage qsub(1)? That would be:
for i in "${arr[@]}"
do
qsub myBashFile.sh -- "$i"
done
You should be able to access it as $1 inside myBashFile.sh.
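A sketch of the receiving side (simulated locally: on the cluster the argument would come from qsub myBashFile.sh -- "$i", and the directory name here is a placeholder):

```shell
#!/bin/sh
# myBashFile.sh - the directory passed after "--" arrives as $1.
set -- /scratch/run_a    # stand-in for the qsub-provided argument
dir="$1"
echo "job would cd into: $dir"
```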
I would use $PBS_O_WORKDIR for this. Change your submission script to this:
for i in "${arr[@]}"
do
cd /path/to/$i
qsub /path/to/myBashFile.sh
done
In your job you would then change 'cd $dir' to 'cd $PBS_O_WORKDIR'.

Error while using -N option with qsub

I tried to use qsub -N "compile-$*" in a Makefile and it gives the following error,
because "compile-$*" expands to "compile-obj/linux/flow" in this case.
qsub: ERROR! argument to -N option must not contain /
The whole command which I am using is:
qsub -P bnormal -N "compile-obj/linux/flow" -cwd -now no -b y -l cputype=amd64 -sync yes -S /bin/sh -e /remote//qsub_files/ -o /remote/qsub_files/
Any idea how to include slash in naming while running qsub?
Thanks
I'm not familiar with qsub, but make just executes whatever command you supply it, so I suspect you constructed an illegal qsub command.
Maybe the Automatic Variables section of the GNU make manual can help you too.
Adding the whole rule to the question would help.
I resolved the problem by manipulating the name passed to the -N option, replacing / with -. It works for me. Thanks.
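That substitution can be done at the make level with $(subst /,-,$*); a shell-level sketch of the same cleanup (the example name mirrors the one from the question):

```shell
#!/bin/sh
# Example job name containing slashes (as produced by make's $*)
raw="compile-obj/linux/flow"
# Replace every / with - so the name is legal for qsub -N
safe=$(printf '%s' "$raw" | tr '/' '-')
echo "$safe"
```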