I want to add a string to the PATH environment variable of a qsub command. Editing the .sh file is not an option because this is generated and run within the pipeline (I can edit the qsub command options).
I am running the following qsub command:
qsub -pe slots 16 -S /bin/bash -cwd -N "pipeline" -j y -o
/home/user/log/out.log /home/user/pipeline/runthis.sh
I need to add the following PATH environment variable to the shell being run:
/home/user/jre1.8.0_66/bin
(this is because the script depends on a more recent version of java than is on the cluster).
I tried the following:
qsub -pe slots 16 -S /bin/bash -cwd -N "pipeline" -j y -o
/home/user/log/out.log /home/user/pipeline/runthis.sh -v
PATH=/home/pa354/jre1.8.0_66/bin:$PATH -V
This hasn't worked. I added 'env' to the bash file being run (to check the environment variables), and my required path has not been added.
You need to use -v in your qsub command:
qsub -v JAVA_HOME ...
This will pass along the environment variable from the calling environment into the spawned job.
Note that the -v argument to qsub must come before the arguments to the actual command you are running on the remote nodes. You seem to have tried it at the end of the entire command line which isn't going to work.
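A corrected invocation, based on the command from the question, might look like the following sketch (untested against a live SGE cluster). The key points are that -v precedes the script path and that PATH is expanded by the submitting shell:

```shell
#!/bin/bash
# Sketch of the corrected submission: -v must come before the script path,
# otherwise it is passed as an argument to runthis.sh instead of to qsub.
JRE_BIN=/home/user/jre1.8.0_66/bin   # JRE path from the question
NEW_PATH="$JRE_BIN:$PATH"            # prepend so the newer java wins

# echo'd here so the sketch can be inspected without a cluster at hand
echo qsub -pe slots 16 -S /bin/bash -cwd -N "pipeline" -j y \
     -o /home/user/log/out.log \
     -v PATH="$NEW_PATH" \
     /home/user/pipeline/runthis.sh
```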
Related
I have a script which starts like this:
#!/bin/bash
echo "Running on OSTYPE: '$OSTYPE'"
DISTRO=""
CODENAME=""
SUDO=$(command -v sudo 2>/dev/null)
echo foo
When I run it as-is or with bash -x, it stops after the assignment to SUDO; the only output I get is
+ DISTRO=
+ CODENAME=
++ command -v sudo
+ SUDO=
When I add -v to the bash invocation to get even more verbose output, the script runs normally and I see my "foo". I'm using bash 4.4.23 as shipped on https://git-scm.com/downloads
What is going wrong on my system and how can I debug this?
This happens because $(command ...) runs in a subshell.
Instead of set -x or bash -x, create a file named .bash_env in your home folder containing:
set -x
That way set -x is applied to every new bash instance and subinstance.
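One way to make such a file take effect is bash's standard BASH_ENV mechanism (a sketch; this detail is not spelled out in the answer above): bash sources the file named by the BASH_ENV variable before running any non-interactive script, so pointing it at a file containing `set -x` enables tracing in every spawned bash, including the subshell created by $(command ...):

```shell
#!/bin/bash
# Sketch: BASH_ENV names a file that bash sources before running any
# non-interactive command or script, so tracing reaches subshells too.
trace_file=$(mktemp)
echo 'set -x' > "$trace_file"
export BASH_ENV="$trace_file"

# Every bash started from here on runs with set -x; the trace lines
# ("+ echo traced") appear on stderr.
bash -c 'echo traced'
```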
I do this in a script:
read direc <<< $(basename `pwd`)
and I get:
Syntax error: redirection unexpected
on an Ubuntu machine:
/bin/bash --version
GNU bash, version 4.0.33(1)-release (x86_64-pc-linux-gnu)
while I do not get this error on another SUSE machine:
/bin/bash --version
GNU bash, version 3.2.39(1)-release (x86_64-suse-linux-gnu)
Copyright (C) 2007 Free Software Foundation, Inc.
Why the error?
Does your script reference /bin/bash or /bin/sh in its hash bang line? The default system shell in Ubuntu is dash, not bash, so if you have #!/bin/sh then your script will be using a different shell than you expect. Dash does not have the <<< redirection operator.
Make sure the shebang line is:
#!/bin/bash
or
#!/usr/bin/env bash
And run the script with:
$ ./script.sh
Do not run it with an explicit sh as that will ignore the shebang:
$ sh ./script.sh # Don't do this!
If you're using the following to run your script:
sudo sh ./script.sh
Then you'll want to use the following instead:
sudo bash ./script.sh
The reason for this is that Bash is not the default shell for Ubuntu. So, if you use "sh", it will just use the default shell, which is actually Dash. This happens regardless of whether you have #!/bin/bash at the top of your script. As a result, you need to explicitly specify bash as shown above, and your script should then run as expected.
Dash doesn't support redirects the same as Bash.
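If the script can't be guaranteed to run under bash, the here-string can be replaced with a POSIX heredoc, which dash does understand (a sketch of the rewrite):

```shell
#!/bin/sh
# Portable rewrite of `read direc <<< $(basename `pwd`)`:
# dash has no <<< operator, but heredocs are POSIX.
read -r direc <<EOF
$(basename "$(pwd)")
EOF
echo "$direc"
```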
Docker:
I was getting this problem from my Dockerfile as I had:
RUN bash < <(curl -s -S -L https://raw.githubusercontent.com/moovweb/gvm/master/binscripts/gvm-installer)
However, according to this issue, it was solved:
The exec form makes it possible to avoid shell string munging, and
to RUN commands using a base image that does not contain /bin/sh.
Note
To use a different shell, other than /bin/sh, use the exec form
passing in the desired shell. For example,
RUN ["/bin/bash", "-c", "echo hello"]
Solution:
RUN ["/bin/bash", "-c", "bash < <(curl -s -S -L https://raw.githubusercontent.com/moovweb/gvm/master/binscripts/gvm-installer)"]
Notice the quotes around each parameter.
You can get the output of that command, put it in a variable, and then use a heredoc. For example:
nc -l -p 80 <<< "tested like a charm";
can be written like:
nc -l -p 80 <<EOF
tested like a charm
EOF
and like this (this is what you want):
text="tested like a charm"
nc -l -p 80 <<EOF
$text
EOF
Practical example in busybox under docker container:
kasra@ubuntu:~$ docker run --rm -it busybox
/ # nc -l -p 80 <<< "tested like a charm";
sh: syntax error: unexpected redirection
/ # nc -l -p 80 <<EOL
> tested like a charm
> EOL
^Cpunt! => socket listening, no errors. (^Cpunt! is the result of the CTRL+C signal.)
/ # text="tested like a charm"
/ # nc -l -p 80 <<EOF
> $text
> EOF
^Cpunt!
Or do it the simpler way:
direc=$(basename `pwd`)
Or use shell parameter expansion:
$ direc=${PWD##*/}
Another reason for the error may be that you are running a cron job that updates a subversion working copy and then attempts to run a versioned script that was left in a conflicted state after the update...
On my machine, if I run a script directly, the default is bash.
If I run it with sudo, the default is sh.
That’s why I was hitting this problem when I used sudo.
In my case the error was because I had put ">>" twice:
mongodump --db=$DB_NAME --collection=$col --out=$BACKUP_LOCATION/$DB_NAME-$BACKUP_DATE >> >> $LOG_PATH
I just corrected it to:
mongodump --db=$DB_NAME --collection=$col --out=$BACKUP_LOCATION/$DB_NAME-$BACKUP_DATE >> $LOG_PATH
Before running the script, check the first line of the shell script for the interpreter.
Eg:
if the script starts with #!/bin/bash, run it with:
bash script_name.sh
if the script starts with #!/bin/sh, run it with:
sh script_name.sh
./sample.sh - this detects the interpreter from the first line of the script and runs it accordingly.
Different Linux distributions have different default shells.
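Reading the shebang programmatically makes the check above explicit (a small sketch; the sed expression simply strips the leading #! from the first line):

```shell
#!/bin/bash
# Sketch: extract the interpreter a script asks for from its shebang line.
script=$(mktemp)
printf '#!/bin/bash\necho hi\n' > "$script"

interp=$(sed -n '1s/^#![[:space:]]*//p' "$script")
echo "interpreter requested: $interp"
```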
I have a script that I am trying to submit to a SGE cluster (on Redhat Linux). The very first part of the script defines the current folder from the full CWD path, as a variable to use downstream:
#!/usr/bin/bash
#
#$ -cwd
#$ -A username
#$ -M user@server
#$ -j y
#$ -m aes
#$ -N test
#$ -o test.log.txt
echo 'This is a test.'
result="${PWD##*/}"
echo $result
In bash, this works as expected:
CWD:
-bash-4.1$ pwd
/home/user/test
Run script:
-bash-4.1$ bash test.sh
This is a test.
test
When I submit the job to the cluster:
-bash-4.1$ qsub -V test.sh
and examine the log file:
This is a test.
Missing }.
Does anyone know why the job submission is saying "Missing } " when it works right from the command-line? I'm not sure what I'm missing here.
Thanks.
The POSIX standard for batch schedulers requires them to ignore the #! line and instead use either a shell configured into the cluster or one selected by the -S option of qsub. The default is usually csh. So adding something like #$ -S /usr/bin/bash to the script will cause it to be interpreted by bash.
Alternatively you could convince the cluster admin to change the queues to unix_behavior from posix_compliant.
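Applied to the script from the question, the fix is one extra directive (a sketch; under bash the parameter expansion then behaves exactly as it did interactively):

```shell
#!/usr/bin/bash
#$ -S /usr/bin/bash    # tell SGE to run this job under bash, not csh
#$ -cwd
#$ -N test
#$ -o test.log.txt
echo 'This is a test.'
result="${PWD##*/}"    # bash-only expansion; csh fails with "Missing }."
echo "$result"
```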
I have a complex qsub command to run remotely.
PROJECT_NAME_TEXT="TEST PROJECT"
PACK_ORGANIZATION="--source-organization \'MY, ORGANIZATION\'"
CONTACT_NAME="--contact-name \'Tom Riddle\'"
PROJECT_NAME_PACK="--project-name \"${PROJECT_NAME_TEXT}\""
INPUTARGS="${PACK_ORGANIZATION} ${CONTACT_NAME} ${PROJECT_NAME_PACK}"
ssh mycluster "qsub -v argv="$INPUTARGS" -l walltime=10:00:00 -l vmem=8GB -l nodes=1:ppn=4 /myscript_path/run.script"
The problem is that the remote cluster doesn't recognise the qsub command: it always reports an incorrect qsub command, or the job simply stays queued on the cluster because the input args are wrong.
It must be an escaping problem. My question is: how do I escape the command above properly?
Try doing this using a here-doc: you have a quote conflict (nested double quotes, which is an error):
#!/bin/bash
PROJECT_NAME_TEXT="TEST PROJECT"
PACK_ORGANIZATION="--source-organization \'MY, ORGANIZATION\'"
CONTACT_NAME="--contact-name \'Tom Riddle\'"
PROJECT_NAME_PACK="--project-name \"${PROJECT_NAME_TEXT}\""
INPUTARGS="${PACK_ORGANIZATION} ${CONTACT_NAME} ${PROJECT_NAME_PACK}"
ssh mycluster <<EOF
qsub -v argv="$INPUTARGS" -l walltime=10:00:00 -l vmem=8GB -l nodes=1:ppn=4 /myscript_path/run.script
EOF
As you can see, here-docs are really helpful for inputs with quotes.
See man bash | less +/'Here Documents'
Edit
from your comments :
I used this method but it gives me "Pseudo-terminal will not be allocated because stdin is not a terminal."
You can ignore this warning with
ssh mycluster <<EOF 2>/dev/null
(try the -t switch for ssh if needed)
If you have
-bash: line 2: EOF: command not found
I think you have a copy-paste problem. Try removing extra spaces at the ends of the lines.
And it seems this method cannot pass local variable $INPUTARGS to the remote cluster
It seems related to your EOF problem.
$argv returns nothing on remote cluster
What does this mean? $argv is not a pre-defined variable in bash. If you need to list command line arguments, use the pre-defined variable $@
Last thing : ensure you are using bash
Your problem is not the length, but the nesting of your quotes - in this line, you are trying to use " inside ", which won't work:
ssh mycluster "qsub -v argv="$INPUTARGS" -l walltime=10:00:00 -l vmem=8GB -l nodes=1:ppn=4 /myscript_path/run.script"
Bash will see this as "qsub -v argv=" followed by $INPUTARGS (not quoted), followed by " -l walltime=10:00:00 -l vmem=8GB -l nodes=1:ppn=4 /myscript_path/run.script".
It's possible that backslash-escaping those inner quotes will have the desired effect, but nesting quotes in bash can get rather confusing. What I often try to do is add an echo at the beginning of the command, to show how the various stages of expansion pan out. e.g.
echo 'As expanded locally:'
echo ssh mycluster "qsub -v argv=\"$INPUTARGS\" -l walltime=10:00:00 -l vmem=8GB -l nodes=1:ppn=4 /myscript_path/run.script"
echo 'As expanded remotely:'
ssh mycluster "echo qsub -v argv=\"$INPUTARGS\" -l walltime=10:00:00 -l vmem=8GB -l nodes=1:ppn=4 /myscript_path/run.script"
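printf '%q' gives a similar view without involving ssh at all (a sketch; it re-quotes each argument the way bash would need it written, so you can see exactly where words split):

```shell
#!/bin/bash
# Sketch: printf %q shows each argument after local expansion, one per
# line, making quoting mistakes visible before anything is sent over ssh.
INPUTARGS="--contact-name 'Tom Riddle'"    # sample value from the question
printf '%q\n' qsub -v argv="$INPUTARGS"
```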
Thanks for all the answers; however, their methods do not work in my case. I am answering this myself since the problem is pretty complex; I got the clue from existing solutions on Stack Overflow.
There are 2 problems must be solved in my case.
Pass the local program's parameters to the remote cluster. The here-doc solution doesn't work in this case.
Run qsub on the remote cluster with a long variable as arguments that contain quote symbols.
Problem 1.
Firstly, I have to introduce my script that runs on local machine takes parameters like this:
scripttoberunoncluster.py --source-organisation "My_organization_my_department" --project-name "MyProjectName" --processes 4 /targetoutputfolder/
The real parameter list is far longer than the above, so all the parameters must be sent to the remote machine. They are sent in a file like this:
PROJECT_NAME="MyProjectName"
PACK_ORGANIZATION="--source-organization '\\\"My_organization_my_department\\\"'" # multiple layers of escaping, remove all the spaces
PROJECT_NAME_PACK="--project-name '\\\"${PROJECT_NAME}\\\"'"
PROCESSES="--processes 4"
TARGET_FOLDER_PACK="/targetoutputfolder/"
INPUTARGS="${PACK_ORGANIZATION} ${PROJECT_NAME_PACK} ${PROCESSES} ${TARGET_FOLDER_PACK}"
echo $INPUTARGS > "TempPath/temp.par"
scp "TempPath/temp.par" "remotecluster:/remotepath/"
My solution is a sort of compromise, but this way the remote cluster can run the script with arguments that contain quote symbols. If you don't put all your variables (as parameters) in a file and transfer it to the remote cluster, then no matter how you pass them into a variable, the quote symbols will be removed.
Problem 2.
Check how the qsub runs on remote cluster.
ssh remotecluster "qsub -v argv=\"`cat /remotepath/temp.par`\" -l walltime=10:00:00 /remotepath/my.script"
And in the my.script:
INPUT_ARGS=`echo $argv`
python "/pythonprogramlocation/scripttoberunoncluster.py" $INPUT_ARGS ; # note: $INPUT_ARGS is unquoted
The described escaping problem consists in the requirement to preserve the final quotes around the arguments after two evaluation processes, i.e. after two evaluations we should see something like:
--source-organization "My_organization_my_department" --project-name "MyProjectName" --processes 4 /targetoutputfolder/
This can be achieved by first putting each argument into a separate variable and then enclosing the argument in single quotes, making sure that any single quote inside the argument string gets "escaped" as '\''. (In fact, the argument gets split up into separate strings, but when it is used, the split-up argument is automatically re-concatenated by the string evaluation mechanism of UNIX (POSIX?) shells.) This procedure has to be repeated three times.
{
escsquote="'\''"
PROJECT_NAME="MyProjectName"
myorg="My_organization_my_department"
myorg="'${myorg//\'/${escsquote}}'" # bash
myorg="'${myorg//\'/${escsquote}}'"
myorg="'${myorg//\'/${escsquote}}'"
PACK_ORGANIZATION="--source-organization ${myorg}"
pnp="${PROJECT_NAME}"
pnp="'${pnp//\'/${escsquote}}'"
pnp="'${pnp//\'/${escsquote}}'"
pnp="'${pnp//\'/${escsquote}}'"
PROJECT_NAME_PACK="--project-name ${pnp}"
PROCESSES="--processes 4"
TARGET_FOLDER_PACK="/targetoutputfolder/"
INPUTARGS="${PACK_ORGANIZATION} ${PROJECT_NAME_PACK} ${PROCESSES} ${TARGET_FOLDER_PACK}"
echo "$INPUTARGS"
eval echo "$INPUTARGS"
eval eval echo "$INPUTARGS"
echo
ssh -T localhost <<EOF
echo qsub -v argv="$INPUTARGS" -l walltime=10:00:00 -l vmem=8GB -l nodes=1:ppn=4 /myscript_path/run.script
EOF
}
For further information please see:
Quotes exercise - how to do ssh inside ssh whilst running sql inside second ssh?
Quoting in ssh $host $FOO and ssh $host "sudo su user -c $FOO" type constructs.
I want to do something like:
schroot -c name -u root "export A=3 && export B=4"
but I get the error:
Failed to execute “export”: No such file or directory
In other words, I want to be able to programmatically execute shell commands inside the schroot environment. What is the right way to get this behavior?
I recommend:
schroot -c name -u root sh -c "export A=3 && export B=4"
or better:
schroot -c name -u root -- sh -c "export A=3 && export B=4"
This runs the shell with the '-c' option telling it (the shell) to read the following argument as the command (script) to be executed. The same technique works with other analogous commands: 'su', 'nohup', ...
The -- option terminates the arguments to schroot and ensures that any options on the rest of the command line are passed to and interpreted by the shell, not by schroot. This was suggested by SR_ in a comment, and the man page for schroot also suggests it should be used too (search for 'Separator'). The GNU getopt() function by default permutes arguments, which is not wanted here. The -- prevents it from permuting the arguments after the --.
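A minimal demonstration of the pattern with a plain sh, no schroot needed (sketch): the whole command string is a single argument to -c, so the && is evaluated by the inner shell rather than by the caller:

```shell
#!/bin/sh
# Sketch of the `<wrapper> -- sh -c "<commands>"` pattern: both exports
# and the echo run inside the one shell started by -c.
sh -c 'export A=3 && export B=4 && echo "A=$A B=$B"'
```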
schroot -c name -u root -- export A=3 && export B=4
Ensuring that /etc/schroot/schroot.conf has
run-exec-scripts=true
run-setup-scripts=true
You could try
schroot -c name -u root "/bin/bash -c 'export A=3; export B=4'"
but this is the first time I've heard of schroot. And the exports look like they're useless... even running the double-quoted stuff directly from the command line, it seems the child shell doesn't affect the parent's environment.