I have a 1,700-line query to be executed in impala-shell. I created a shell script with the following command:
impala-shell -V -i hostname -q "[QUERY]"
However, when I executed it using sh script.sh, I got the error message "Argument list too long". I am able to run simpler/shorter queries using the impala-shell command.
I also tried to raise the limit by running ulimit -s 65536, but I got the same error.
I suspect the query is simply too long.
The -f option is the answer. I prepared a separate SQL file and it worked:
impala-shell -V -i hostname -f file.sql
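For context, "Argument list too long" is not about the number of lines: it is the kernel's E2BIG error, raised when the combined size of the command-line arguments and the environment exceeds a byte limit (which is why ulimit -s only indirectly affects it on Linux). A quick, Impala-independent way to inspect that limit:

```shell
# "Argument list too long" is execve()'s E2BIG: the combined size of
# argv + environment exceeds the kernel's limit. getconf reports it.
getconf ARG_MAX    # total bytes allowed for argv + environment
```

Passing the query via -f avoids the limit entirely, because the file contents never appear on the command line.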
I do this in a script:
read direc <<< $(basename `pwd`)
and I get:
Syntax error: redirection unexpected
on an Ubuntu machine:
/bin/bash --version
GNU bash, version 4.0.33(1)-release (x86_64-pc-linux-gnu)
while I do not get this error on another SUSE machine:
/bin/bash --version
GNU bash, version 3.2.39(1)-release (x86_64-suse-linux-gnu)
Copyright (C) 2007 Free Software Foundation, Inc.
Why the error?
Does your script reference /bin/bash or /bin/sh in its shebang line? The default system shell in Ubuntu is dash, not bash, so if you have #!/bin/sh then your script will be using a different shell than you expect. Dash does not have the <<< redirection operator.
Make sure the shebang line is:
#!/bin/bash
or
#!/usr/bin/env bash
And run the script with:
$ ./script.sh
Do not run it with an explicit sh as that will ignore the shebang:
$ sh ./script.sh # Don't do this!
If you're using the following to run your script:
sudo sh ./script.sh
Then you'll want to use the following instead:
sudo bash ./script.sh
The reason for this is that Bash is not the default shell on Ubuntu. So, if you use "sh", it will just use the default shell, which is actually Dash. This happens regardless of whether you have #!/bin/bash at the top of your script. As a result, you need to explicitly specify bash as shown above, and your script should run as expected.
Dash doesn't support redirections the same way Bash does.
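A minimal way to see the difference, assuming bash is installed (the dash line only runs if dash happens to be present):

```shell
# <<< (a herestring) is a bashism; POSIX sh as implemented by dash
# rejects it with "Syntax error: redirection unexpected".
bash -c 'cat <<< "hello"'                    # works: prints hello
if command -v dash >/dev/null 2>&1; then
    dash -c 'cat <<< "hello"' 2>&1 || true   # fails with the syntax error
fi
```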
Docker:
I was getting this problem from my Dockerfile as I had:
RUN bash < <(curl -s -S -L https://raw.githubusercontent.com/moovweb/gvm/master/binscripts/gvm-installer)
However, according to this issue, it can be solved:
The exec form makes it possible to avoid shell string munging, and
to RUN commands using a base image that does not contain /bin/sh.
Note
To use a different shell, other than /bin/sh, use the exec form
passing in the desired shell. For example,
RUN ["/bin/bash", "-c", "echo hello"]
Solution:
RUN ["/bin/bash", "-c", "bash < <(curl -s -S -L https://raw.githubusercontent.com/moovweb/gvm/master/binscripts/gvm-installer)"]
Notice the quotes around each parameter.
You can get the output of that command, put it in a variable, and then use a heredoc. For example:
nc -l -p 80 <<< "tested like a charm";
can be written like:
nc -l -p 80 <<EOF
tested like a charm
EOF
and like this (this is what you want):
text="tested like a charm"
nc -l -p 80 <<EOF
$text
EOF
Practical example in busybox under docker container:
kasra@ubuntu:~$ docker run --rm -it busybox
/ # nc -l -p 80 <<< "tested like a charm";
sh: syntax error: unexpected redirection
/ # nc -l -p 80 <<EOL
> tested like a charm
> EOL
^Cpunt! => socket listening, no errors. (^Cpunt! is the result of the CTRL+C signal.)
/ # text="tested like a charm"
/ # nc -l -p 80 <<EOF
> $text
> EOF
^Cpunt!
Do it the simpler way:
direc=$(basename `pwd`)
Or use shell parameter expansion:
$ direc=${PWD##*/}
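Both forms give the same result; a runnable sketch:

```shell
# basename + pwd forks a subshell and an external command;
# ${PWD##*/} is pure parameter expansion, works in plain sh too.
cd /tmp
direc=$(basename "$(pwd)")
echo "$direc"          # -> tmp
direc=${PWD##*/}
echo "$direc"          # -> tmp
```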
Another cause of the error may be that you are running a cron job that updates a Subversion working copy and then tries to run a versioned script that was left in a conflicted state after the update...
On my machine, if I run a script directly, the default is bash.
If I run it with sudo, the default is sh.
That’s why I was hitting this problem when I used sudo.
In my case the error was because I had put ">>" twice:
mongodump --db=$DB_NAME --collection=$col --out=$BACKUP_LOCATION/$DB_NAME-$BACKUP_DATE >> >> $LOG_PATH
I just corrected it to:
mongodump --db=$DB_NAME --collection=$col --out=$BACKUP_LOCATION/$DB_NAME-$BACKUP_DATE >> $LOG_PATH
Before running the script, check the first line of the shell script for the interpreter.
Eg:
If the script starts with #!/bin/bash, run it using the command below:
bash script_name.sh
If the script starts with #!/bin/sh, run it using the command below:
sh script_name.sh
./sample.sh - this detects the interpreter from the first line of the script and runs it accordingly.
Different Linux distributions have different default shells.
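A small sketch of the idea (the file name sample.sh is made up):

```shell
# Create a script whose shebang names bash, then run it directly:
# when executed directly, the kernel reads the first line and starts
# the named interpreter, so the script runs under bash regardless of
# which shell launched it.
cat > /tmp/sample.sh <<'EOF'
#!/bin/bash
echo "running under: $0"
EOF
chmod +x /tmp/sample.sh
head -n 1 /tmp/sample.sh    # shows the interpreter line
/tmp/sample.sh              # honors the shebang
```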
I ran into an issue last week that drove me crazy. I wrote a bash script which makes a remote ssh connection to Akamai and then performs a simple 'ls'. I want to redirect the stdout output of the 'ls' to a given file.
While the script itself works like a charm when run manually, it does not when it runs via cron. The cron job runs as root and each command works as expected except the ssh command. My system is Gentoo Linux and the cron is the old but gold vixie-cron.
To spare you the 200 LOC, I put the basics here, which alone (as a single script) are enough to demonstrate the problem.
#!/bin/bash
PATH='/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/bin'
#set -x
shopt -s lastpipe
exec 2>log.out
(ssh -i <path to key> -o HostKeyAlgorithms=+ssh-dss -o StrictHostKeyChecking=no <account#example.com> 'ls -r <path>') > '/root/listing.txt'
Even in the -vvv debug mode of ssh I can see that everything works... except that I get no stdout output.
Then I tried something else that I found in another posting on the internet:
#!/bin/bash
PATH='/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/bin'
#set -x
shopt -s lastpipe
exec 2>log.out
(ssh -T -i <path to key> -o HostKeyAlgorithms=+ssh-dss -o StrictHostKeyChecking=no <account#example.com> 'ls -r <path>' </dev/zero) > '/root/listing.txt'
The drawback here is that I start an ssh session that I can't close, and I guess that's due to /dev/zero.
Another approach was to tee-pipe the sub-shell of the ssh command... this worked for a short time (and why not anymore?!).
Now I'm clueless and need help. Cron has its PATH, uses bash, etc. Curiously, my boss did this successfully with Java (and he hates bash...).
Any explanation and helpful tips are greatly welcome.
I have the same issue: I made a script for cron and it gets output from a remote SSH host.
If I run the script manually, it works as it should. But when cron runs it, I get just a part of the remote output.
I can't figure out why it's happening.
#!/bin/sh
pass=123
filelist=$(sshpass -p "$pass" ssh -q -tt -o StrictHostKeyChecking=no user@"10.10.10.10" "list")
filestring=$(echo "$filelist" | grep -Po "(\S+\s\S+\s+\d+\s\d{2}:\d{2}:\d{2}\s\d{4})\slist0\.lst")
filedate=${filestring% list0.lst}
echo $filedate
filestamp=$(date -d "$filedate" +"%s")
echo $filestamp
When cron writes the echoes to a file, the date is from 0:00:00 and the date field (echo $filedate) is empty. But when I run it manually, I get a normal date with time...
It really bothers me.
Help?
I found the solution: add "-tt" to the ssh command and all the output goes into the variable:
filelist=$(sshpass -p "$pass" ssh -q -tt -o StrictHostKeyChecking=no user@"10.10.10.10" "list")
Background: I have a very large .sql file, which causes a timeout when I import it into my MySQL server. To fix this on Mac OS X, I run: split -p 'DROP TABLE IF EXISTS' my-backup-file.sql which produces a series of smaller files that don't cause the timeout.
My issue is:
split -p 'pattern' my-backup-file.sql works fine on Mac OS but not on GNU/Linux such as Ubuntu, as far as I understand.
I cannot run something like docker run -v $(pwd):/workspace some/freebsdimage /bin/bash -c 'cd /workspace && split -p pattern my-backup-file.sql' because I can't run a FreeBSD Docker image on a non-FreeBSD Docker host.
What alternative is there in Ubuntu to split a file into smaller files every time a pattern occurs?
The answer which works for me, as @Mark Plotnick states above, is:
csplit -k inputfile '/pattern/' '{99999}'
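A runnable sketch of what that does with GNU csplit (the file contents below are made up):

```shell
# csplit cuts the input before every line matching the pattern; -k
# keeps the pieces even when the huge repeat count '{99999}' runs out
# of matches (which also makes csplit exit non-zero, hence || true).
cd "$(mktemp -d)"
printf '%s\n' \
  'DROP TABLE IF EXISTS a;' 'CREATE TABLE a (x INT);' \
  'DROP TABLE IF EXISTS b;' 'CREATE TABLE b (x INT);' > my-backup-file.sql
csplit -k my-backup-file.sql '/DROP TABLE IF EXISTS/' '{99999}' || true
ls xx*   # the pieces; xx00 is empty here since nothing precedes the first match
```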
As per project requirements, I need to check the contents of a zip file that was generated on a remote machine. This entire activity is done using an automation framework suite written in shell scripts. I am performing the above activity by executing the unzip command with the -l and -q switches over ssh. But this command fails and shows the error messages below.
[SOMEUSER@MACHINE IP Function]$ ./TESTS.sh
ssh SOMEUSER@MACHINE IP unzip -l -q SOME_PATH/20130409060734*.zip | grep -i XML |wc -l
unzip: cannot find or open SOME_PATH/20130409060734*.zip, SOME_PATH/20130409060734*.zip.zip or SOME_PATH/20130409060734*.zip.ZIP.
No zipfiles found.
0
The same command works properly when I run it manually. I really have no idea why it fails whenever it is executed via the shell script.
[SOMEUSER@MACHINE IP Function]$ ssh SOMEUSER@MACHINE IP unzip -l -q SOME_PATH/20130409060734*.zip | grep -i XML |wc -l
2
Kindly help me resolve this issue.
Thanks in Advance,
Priyank Shah
When you run the command from your local machine, the asterisk character is expanded on your local machine before it is passed to your remote ssh command. So your command expects to find SOME_PATH/20130409060734*.zip files on your local machine and insert them into the ssh command to be passed to the other machine, whereas you (I'm assuming) mean the SOME_PATH/20130409060734*.zip files on the remote machine.
To prevent that, precede the * character with a backslash ( \ ) and see if it helps. In some shells the escape character might be defined differently; if yours is one of them, you need to find that escape character and use it instead. Also, use quotes around the commands being passed to the other server. Your command line should look something like this, in my opinion:
ssh SOMEUSER@MACHINE_IP "/usr/bin/unzip -l -q SOME_PATH/20130409060734\*.zip | grep -i XML |wc -l"
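The expansion behaviour is easy to reproduce locally (the file names below are made up):

```shell
# An unquoted glob is expanded by the local shell before the command
# (here echo, in the question ssh) ever sees it; quoting or escaping
# passes the literal pattern through for the remote side to expand.
cd "$(mktemp -d)"
touch 20130409060734_a.zip 20130409060734_b.zip
echo 20130409060734*.zip     # expanded locally: both file names
echo '20130409060734*.zip'   # the literal pattern survives
```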
Hope this helps
I have a complex qsub command to run remotely.
PROJECT_NAME_TEXT="TEST PROJECT"
PACK_ORGANIZATION="--source-organization \'MY, ORGANIZATION\'"
CONTACT_NAME="--contact-name \'Tom Riddle\'"
PROJECT_NAME_PACK="--project-name \"${PROJECT_NAME_TEXT}\""
INPUTARGS="${PACK_ORGANIZATION} ${CONTACT_NAME} ${PROJECT_NAME_PACK}"
ssh mycluster "qsub -v argv="$INPUTARGS" -l walltime=10:00:00 -l vmem=8GB -l nodes=1:ppn=4 /myscript_path/run.script"
The problem is that the remote cluster doesn't recognise the qsub command: it always reports an incorrect qsub command, or the job simply stays queued on the cluster because the input arguments are wrong.
It must be an escaping problem; my question is: how do I escape the command above properly?
Try using a here-doc: you have a quote conflict (nested double quotes, which is an error):
#!/bin/bash
PROJECT_NAME_TEXT="TEST PROJECT"
PACK_ORGANIZATION="--source-organization \'MY, ORGANIZATION\'"
CONTACT_NAME="--contact-name \'Tom Riddle\'"
PROJECT_NAME_PACK="--project-name \"${PROJECT_NAME_TEXT}\""
INPUTARGS="${PACK_ORGANIZATION} ${CONTACT_NAME} ${PROJECT_NAME_PACK}"
ssh mycluster <<EOF
qsub -v argv="$INPUTARGS" -l walltime=10:00:00 -l vmem=8GB -l nodes=1:ppn=4 /myscript_path/run.script
EOF
As you can see, here-docs are really helpful for inputs with quotes.
See man bash | less +/'Here Documents'
Edit
from your comments :
I used this method but it gives me "Pseudo-terminal will not be allocated because stdin is not a terminal."
You can ignore this warning with
ssh mycluster <<EOF 2>/dev/null
(try the -t switch for ssh if needed)
If you have
-bash: line 2: EOF: command not found
I think you have a copy-paste problem. Try removing any extra spaces at the ends of the lines.
And it seems this method cannot pass local variable $INPUTARGS to the remote cluster
it seems related to your EOF problem.
$argv returns nothing on remote cluster
What does this mean? $argv is not a pre-defined variable in bash. If you need to list the command-line arguments, use the pre-defined variable $@.
Last thing: ensure you are using bash.
Your problem is not the length, but the nesting of your quotes - in this line, you are trying to use " inside ", which won't work:
ssh mycluster "qsub -v argv="$INPUTARGS" -l walltime=10:00:00 -l vmem=8GB -l nodes=1:ppn=4 /myscript_path/run.script"
Bash will see this as "qsub -v argv=" followed by $INPUTARGS (not quoted), followed by " -l walltime=10:00:00 -l vmem=8GB -l nodes=1:ppn=4 /myscript_path/run.script".
It's possible that backslash-escaping those inner quotes will have the desired effect, but nesting quotes in bash can get rather confusing. What I often do is add an echo at the beginning of the command to show how the various stages of expansion pan out, e.g.:
echo 'As expanded locally:'
echo ssh mycluster "qsub -v argv=\"$INPUTARGS\" -l walltime=10:00:00 -l vmem=8GB -l nodes=1:ppn=4 /myscript_path/run.script"
echo 'As expanded remotely:'
ssh mycluster "echo qsub -v argv=\"$INPUTARGS\" -l walltime=10:00:00 -l vmem=8GB -l nodes=1:ppn=4 /myscript_path/run.script"
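Another way to sidestep hand-counting backslashes, assuming bash on the local side, is to let printf %q produce the escaped form; the mycluster/qsub line is from the question and is only shown commented out:

```shell
#!/bin/bash
# printf %q emits a string quoted so that exactly one round of shell
# evaluation reproduces the original.
INPUTARGS='--source-organization "MY, ORGANIZATION" --contact-name "Tom Riddle"'
QUOTED=$(printf '%q' "$INPUTARGS")
# ssh mycluster "qsub -v argv=$QUOTED -l walltime=10:00:00 /myscript_path/run.script"
eval "RESTORED=$QUOTED"       # round-trip check: one evaluation
[ "$RESTORED" = "$INPUTARGS" ] && echo "round trip ok"
```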
Thanks for all the answers; however, those methods did not work in my case. I have to answer this myself, since the problem is pretty complex; I got the clue from existing solutions on Stack Overflow.
There are two problems that must be solved in my case:
Pass the local program's parameters to the remote cluster. The here-doc solution doesn't work in this case.
Run qsub on the remote cluster with a long variable as arguments that contain quote symbols.
Problem 1.
First, I have to introduce my script, which runs on the local machine and takes parameters like this:
scripttoberunoncluster.py --source-organisation "My_organization_my_department" --project-name "MyProjectName" --processes 4 /targetoutputfolder/
The real parameter list is far longer than the above, so all the parameters must be sent to the remote machine. They are sent in a file like this:
PROJECT_NAME="MyProjectName"
PACK_ORGANIZATION="--source-organization '\\\"My_organization_my_department\\\"'" # multiple layers of escaping, remove all the spaces
PROJECT_NAME_PACK="--project-name '\\\"${PROJECT_NAME}\\\"'"
PROCESSES="--processes 4"
TARGET_FOLDER_PACK="/targetoutputfolder/"
INPUTARGS="${PACK_ORGANIZATION} ${PROJECT_NAME_PACK} ${PROCESSES} ${TARGET_FOLDER_PACK}"
echo $INPUTARGS > "TempPath/temp.par"
scp "TempPath/temp.par" "remotecluster:/remotepath/"
My solution is sort of a compromise, but this way the remote cluster can run the script with arguments containing quote symbols. If you don't put all your variables (as parameters) in a file and transfer it to the remote cluster, then no matter how you pass them, the quote symbols will be removed.
Problem 2.
Check how qsub is run on the remote cluster:
ssh remotecluster "qsub -v argv=\"`cat /remotepath/temp.par`\" -l walltime=10:00:00 /remotepath/my.script"
And in my.script:
INPUT_ARGS=`echo $argv`
python "/pythonprogramlocation/scripttoberunoncluster.py" $INPUT_ARGS ; # note: $INPUT_ARGS is unquoted
The described escaping problem consists in the requirement to preserve the final quotes around the arguments after two evaluation processes, i.e. after two evaluations we should see something like:
--source-organization "My_organization_my_department" --project-name "MyProjectName" --processes 4 /targetoutputfolder/
This can be achieved in code by first putting each argument in a separate variable and then enclosing the argument in single quotes, while making sure that any single quotes inside the argument string get "escaped" with '\'' (in fact, the argument gets split up into separate strings, but when used, the split-up argument is automatically re-concatenated by the string evaluation mechanism of UNIX (POSIX?) shells). This procedure has to be repeated three times.
{
escsquote="'\''"
PROJECT_NAME="MyProjectName"
myorg="My_organization_my_department"
myorg="'${myorg//\'/${escsquote}}'" # bash
myorg="'${myorg//\'/${escsquote}}'"
myorg="'${myorg//\'/${escsquote}}'"
PACK_ORGANIZATION="--source-organization ${myorg}"
pnp="${PROJECT_NAME}"
pnp="'${pnp//\'/${escsquote}}'"
pnp="'${pnp//\'/${escsquote}}'"
pnp="'${pnp//\'/${escsquote}}'"
PROJECT_NAME_PACK="--project-name ${pnp}"
PROCESSES="--processes 4"
TARGET_FOLDER_PACK="/targetoutputfolder/"
INPUTARGS="${PACK_ORGANIZATION} ${PROJECT_NAME_PACK} ${PROCESSES} ${TARGET_FOLDER_PACK}"
echo "$INPUTARGS"
eval echo "$INPUTARGS"
eval eval echo "$INPUTARGS"
echo
ssh -T localhost <<EOF
echo qsub -v argv="$INPUTARGS" -l walltime=10:00:00 -l vmem=8GB -l nodes=1:ppn=4 /myscript_path/run.script
EOF
}
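The core trick, isolated to a single evaluation level, looks like this (bash, since it relies on ${var//...} substitution):

```shell
#!/bin/bash
# One round of the wrapping used above: close the single quote, emit
# an escaped quote, reopen. 'it'\''s here' evaluates back to the
# original string after exactly one round of evaluation.
escsquote="'\''"
s="it's here"
safe="'${s//\'/${escsquote}}'"
echo "$safe"
eval "roundtrip=$safe"
[ "$roundtrip" = "$s" ] && echo "one level survives one eval"
```

Each additional evaluation (local shell, ssh, qsub) consumes one level of quoting, which is why the procedure is applied three times in the script above.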
For further information please see:
Quotes exercise - how to do ssh inside ssh whilst running sql inside second ssh?
Quoting in ssh $host $FOO and ssh $host "sudo su user -c $FOO" type constructs.