Passing an argument with single and double quotes to shell script - linux

We are trying to pass an argument containing single and double quotes to a shell script, which then executes a command with that argument. echo prints the argument correctly, but the command fails with "Unterminated quoted value".
Please see the script and argument passing method:
[root@geopc]/root# ./myscript.sh 'localhost:9199/jmxrmi -O java.lang:type=GarbageCollector,name="PS MarkSweep" -A CollectionCount -K duration'
#!/bin/bash
out=`/usr/lib64/nagios/plugins/check_jmx -U service:jmx:rmi:///jndi/rmi://$1`
echo $1
echo $out
After executing we are getting output as
$1 : localhost:9199/jmxrmi -O java.lang:type=GarbageCollector,name="PS MarkSweep" -A CollectionCount -K duration
$out : JMX CRITICAL Unterminated quoted value
If we hard-code the value of $1 in the shell script and then execute it, we get the correct result.
We tried passing of arguments as follows:
./myscript.sh 'localhost:9199/jmxrmi -O java.lang:type=GarbageCollector,name=\"PS MarkSweep\" -A CollectionCount -K duration'
in this case error is : JMX CRITICAL Invalid character '"' in value part of property
./myscript.sh 'localhost:9199/jmxrmi -O java.lang:type=GarbageCollector,name="\"PS MarkSweep\"" -A CollectionCount -K duration'
in this case error is JMX CRITICAL Unterminated quoted value
Could anyone please help me with this?

The problem is that you're expecting quotes and escapes in variables to be interpreted the same as quotes in your script.
In Java terms, this is the same as expecting:
String input="\"foo\", \"bar\"";
String[] parameters = { input };
to be the same as this:
String[] parameters = { "foo", "bar" };
The real solution in both languages is to let the input be an array of elements:
#!/bin/bash
out=`/usr/lib64/nagios/plugins/check_jmx -U service:jmx:rmi:///jndi/rmi://"$1" "${@:2}"`
echo "$1"
echo "$out"
and then running:
./myscript.sh localhost:9199/jmxrmi -O java.lang:type=GarbageCollector,name="PS MarkSweep" -A CollectionCount -K duration
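To see the difference, here's a minimal sketch (count_args is a hypothetical stand-in for myscript.sh): when the caller's shell does the splitting, name="PS MarkSweep" survives as one argument among several; when everything is wrapped in a single pair of quotes, the script receives just one opaque string.

```shell
# Hypothetical stand-in for myscript.sh: report how many arguments arrived.
count_args() { echo "$#"; }

# Caller's shell splits the words; the double quotes keep "PS MarkSweep"
# glued onto name= as a single argument.
n_split=$(count_args localhost:9199/jmxrmi -O java.lang:type=GarbageCollector,name="PS MarkSweep" -A CollectionCount -K duration)

# One big quoted string: the script sees a single argument and has no
# reliable way to take it apart again.
n_one=$(count_args 'localhost:9199/jmxrmi -O java.lang:type=GarbageCollector,name="PS MarkSweep" -A CollectionCount -K duration')

echo "$n_split vs $n_one"
```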
HOWEVER!!
This requires discipline and understanding.
You'll notice that this is not a drop-in solution: the script is now being called with multiple parameters. This is a very strict requirement. If this script is being called by another program, that program also has to be aware of this. It needs to be handled end-to-end.
If you store this in a config file, for example, it'll be YOUR responsibility to serialize the string in such a way that YOU can deserialize it into a list. You can't just hope that it'll work out, because it won't. Unlike Java, there will be no compiler warning when you confuse strings and lists.
If this is not something you want to get involved with, you can instead use eval. This is a bad practice because of security and robustness issues, but sometimes it's Good Enough:
yourcommand="/usr/lib64/nagios/plugins/check_jmx -U service:jmx:rmi:///jndi/rmi://$1"
echo "$yourcommand" # Make sure this writes the right command
eval "$yourcommand" # This will execute it as if you copy-pasted it
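A minimal sketch of the difference (second_arg is a hypothetical stand-in for check_jmx): without eval, the quotes stored in the variable are ordinary characters and the string merely word-splits; eval re-runs the full parser, so the quotes do their job.

```shell
# Hypothetical stand-in for check_jmx: print its second argument.
second_arg() { printf '%s\n' "$2"; }

cmd='second_arg -O "PS MarkSweep"'

# Plain expansion: only word-splitting happens, so $2 is the literal '"PS'.
broken=$($cmd)

# eval: the string is re-parsed from scratch, so "PS MarkSweep" is one word.
fixed=$(eval "$cmd")
```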

Related

How do I execute a "runuser" command, with arguments, in a bash script?

I have a niche requirement to run commands stored in another config file from within a wrapper script for another utility. My wrapper script (below) works for every command in that config file that does not use "runuser". If a command uses runuser and my "-c" command includes arguments, the script fails.
The wrapper script
#!/bin/bash
nagios_cmd=$(grep $1 /etc/nagios/nrpe.cfg | awk -F "=" {'print $2'})
exec=$($nagios_cmd)
if [ $? -eq 0 ]
then
#exitok
echo $exec
exit 0
else
#exitcritical
echo $exec
exit 1001
fi
The config file
command[check_crsdb_state]=sudo /usr/lib64/nagios/plugins/check_crsdb_state
command[check_crsasm_state]=sudo /usr/lib64/nagios/plugins/check_crsasm_state
command[check_ora1_tablespace_apex]=sudo /usr/sbin/runuser -l oracle -c '/check_oracle_tablespace APEX 32000'
command[check_ora1_tablespace_lob1]=sudo /usr/sbin/runuser -l oracle -c '/check_oracle_tablespace LOB1 32000'
Successful Script Run
[root@quo-mai-ora1 /]# ./rmmwrapper.sh check_crsasm_state
OK - All nodes report 'Started,STABLE'
[root@quo-mai-ora1 /]#
Failure Script Run
[root@quo-mai-ora1 /]# ./rmmwrapper.sh check_ora1_tablespace_apex
APEX: -c: line 0: unexpected EOF while looking for matching `''
APEX: -c: line 1: syntax error: unexpected end of file
[root@quo-mai-ora1 /]#
Failure Script Run (with bash -x)
[root@quo-mai-ora1 /]# bash -x ./rmmwrapper.sh check_ora1_tablespace_apex
++ grep check_ora1_tablespace_apex /etc/nagios/nrpe.cfg
++ awk -F = '{print $2}'
+ nagios_cmd='sudo /usr/sbin/runuser -l oracle -c '\''/check_oracle_tablespace APEX 32000'\'''
++ sudo /usr/sbin/runuser -l oracle -c ''\''/check_oracle_tablespace' APEX '32000'\'''
APEX: -c: line 0: unexpected EOF while looking for matching `''
APEX: -c: line 1: syntax error: unexpected end of file
+ exec=
+ '[' 1 -eq 0 ']'
+ echo
+ exit 1001
[root@quo-mai-ora1 /]#
The Problem
You can see in the bash -x output that, for some reason, when $nagios_cmd gets executed, single quotes are placed before the spaces that separate the multiple args supplied to the resulting script (/check_oracle_tablespace). I've tried different ways of executing $nagios_cmd (using backticks instead, etc.). I've also tried escaping the space characters by modifying the config file to look like this:
command[check_ora1_tablespace_apex]=sudo /usr/sbin/runuser -l oracle -c '/check_oracle_tablespace\ APEX\ 32000'
I've also tried encapsulating the command after -c on runuser in double quotes instead of single quotes, or with no quotes at all.
I'm clearly missing something fundamental about bash. How can I get the script to just execute the contents of $nagios_cmd as they appear in plain text?
This looks like one of those rare cases where eval is actually the right answer. Try this:
exec=$(eval "$nagios_cmd")
Explanation: bash doesn't expand variables until fairly late in the process of parsing commands, so the string in the variable isn't parsed like it would be if it were actually part of the command. In this case, the problem is that it's expanded after quotes and escapes have been parsed, so it's too late for the single-quotes around the multi-word command to have their intended effect. See BashFAQ #50: "I'm trying to put a command in a variable, but the complex cases always fail!"
What eval does is essentially re-run the entire command parsing process from the beginning. So the variable gets expanded out to the command you want to run (including quotes, etc), and that gets parsed like it would be normally.
Note that I did put double-quotes around the variable; that's so it doesn't go through the partial-parsing process that is done to unquoted variable references, and then through the full parsing process. This one-and-a-half-times parsing process can have rare but really weird effects, so it's best avoided.
Also: eval has a well-deserved bad reputation (I've used the phrase "massive bug magnet" to describe it). This is because what it fundamentally does is treat data (e.g. the contents of variables) as executable code, so it's easy to find that you're doing things like executing parts of your filenames as commands. But in this case, your data is supposed to be a command, and is (hopefully) trusted not to contain malicious, invalid, etc content. In this case, you want the data to be treated as executable code.
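A minimal reproduction of the failure mode, with sh -c standing in for the sudo runuser -c call (the command string here is made up for illustration, not taken from the real nrpe.cfg):

```shell
# A command string shaped like the nrpe.cfg entries, with sh -c standing
# in for sudo runuser -c.
nagios_cmd="sh -c 'echo APEX 32000'"

# Plain expansion: the single quotes travel as data, so sh receives the
# unterminated script 'echo and fails -- just like runuser did.
plain=$($nagios_cmd 2>/dev/null)

# eval re-parses the whole line, so 'echo APEX 32000' is one -c argument.
evaled=$(eval "$nagios_cmd")
```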

Bash command line arguments passed to sed via ssh

I am looking to write a simple script to perform an SSH command on many hosts simultaneously, where the list of hosts is generated by another script. The problem is that when I run the script with something like sed, it doesn't work properly.
It should run like sshall.sh {anything here} and it will run the {anything here} part on all the nodes in the list.
sshall.sh
#!/bin/bash
NODES=`listNodes | grep "node-[0-9*]" -o`
echo "Connecting to all nodes and running: ${@:1}"
for i in $NODES
do
:
echo "$i : Begin"
echo "----------------------------------------"
ssh -q -o "StrictHostKeyChecking no" $i "${@:1}"
echo "----------------------------------------"
echo "$i : Complete";
echo ""
done
When it is run with something like whoami it works but when I run:
[root@myhost bin]# sshall.sh sed -i '/^somebeginning/ s/$/,appendme/' /etc/myconfig.conf
Connecting to all nodes and running: sed -i /^somebeginning/ s/$/,appendme/ /etc/myconfig.conf
node-1 : Begin
----------------------------------------
sed: -e expression #1, char 18: missing command
----------------------------------------
node-1 : Complete
node-2 : Begin
----------------------------------------
sed: -e expression #1, char 18: missing command
----------------------------------------
node-2 : Complete
…
Notice that the quotes disappear on the sed command when sent to the remote client.
How do I go about fixing my bash command?
Is there a better way of achieving this?
Substitute an eval-safe quoted version of your command into a heredoc:
#!/bin/bash
# ^^^^- not /bin/sh; printf %q is an extension
# Put your command into a single string, with each argument quoted to be eval-safe
printf -v cmd_q '%q ' "$@"
while IFS= read -r hostname; do
# run bash -s remotely, with that string passed on stdin
ssh -q -o 'StrictHostKeyChecking no' "$hostname" "bash -s" <<EOF
$cmd_q
EOF
done < <(listNodes | grep -o -e "node-[0-9*]")
Why this works reliably (and other approaches don't):
printf %q knows how to quote contents to be eval'd by that same shell (so spaces, wildcards, various local quoting methods, etc. will always be supported).
Arguments given to ssh are not passed to the remote command individually!
Instead, they're concatenated into a string passed to sh -c.
However: The output of printf %q is not portable to all POSIX-derived shells! It's guaranteed to be compatible with the same shell locally in use -- ksh will always parse output from printf '%q' in ksh, bash will parse output from printf '%q' in bash, etc; thus, you can't safely pass this string on the remote argument vector, because it's /bin/sh -- not bash -- running there. (If you know your remote /bin/sh is provided by bash, then you can run ssh "$hostname" "$cmd_q" safely, but only under this condition).
bash -s reads the script to run from stdin, meaning that passing your command there -- not on the argument vector -- ensures that it'll be parsed into arguments by the same shell that escaped it to be shell-safe.
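A small sketch of the round trip (the argument values are made up): printf %q escapes each argument, and bash -s re-parses the escaped string back into the same words.

```shell
# Quote an argument list so the same shell can safely re-parse it.
printf -v cmd_q '%q ' echo 'PS MarkSweep' '/etc/my config.conf'

# Feed it to bash -s on stdin, exactly as the ssh loop above does.
out=$(bash -s <<EOF
$cmd_q
EOF
)
```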
You want to pass the entire command -- with all of its arguments, spaces, and quotation marks -- to ssh so it can pass it unchanged to the remote shell for parsing.
One way to do that is to put it all inside single quotation marks. But then you'll also need to make sure the single quotation marks within your command are preserved in the arguments, so the remote shell builds the correct arguments for sed.
sshall.sh 'sed -i '"'"'/^somebeginning/ s/$/,appendme/'"'"' /etc/myconfig.conf'
It looks redundant, but '"'"' is a common Bourne trick to get a single quotation mark into a single-quoted string. The first quote ends single-quoting temporarily, the double-quote-single-quote-double-quote construct appends a single quotation mark, and then the single quotation mark resumes your single-quoted section. So to speak.
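A quick way to convince yourself: build a string with the trick and look at the result.

```shell
# 'a'  -> a    (single-quoted chunk)
# "'"  -> '    (one single quote, protected by double quotes)
# 'b'  -> b    (single-quoting resumes)
s='a'"'"'b'
echo "$s"
```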
Another trick that can be helpful for troubleshooting is to add the -v flag to your ssh flags, which will spit out lots of text, but most importantly it will show you exactly what string it's passing to the remote shell for parsing and execution.
--
All of this is fairly fragile around spaces in your arguments, which you'll need to avoid, since you're relying on shell parsing on the opposite end.
Thinking outside the box: instead of dealing with all the quoting issues and the word-splitting in the wrong places, you could attempt to a) construct the script locally (maybe use a here-document?), b) scp the script to the remote end, then c) invoke it there. This easily allows more complex command sequences, with all the power of shell control constructs etc. Debugging (checking proper quoting) would be a breeze by simply looking at the generated script.
I recommend reading the command(s) from the standard input rather than from the command line arguments:
cmd.sh
#!/bin/bash -
# Load server_list with user@host "words" here.
cmd=$(</dev/stdin)
for h in ${server_list[*]}; do
ssh "$h" "$cmd"
done
Usage:
./cmd.sh <<'CMD'
sed -i '/^somebeginning/ s/$/,appendme/' /path/to/file1
# other commands
# here...
CMD
Alternatively, run ./cmd.sh, type the command(s), then press Ctrl-D.
I find the latter variant the most convenient, as you don't even need here-documents, and no extra escaping is required. Just invoke your script, type the commands, and press the shortcut. What could be easier?
Explanations
The problem with your approach is that the quotes are stripped from the arguments by the shell. For example, the argument '/^somebeginning/ s/$/,appendme/' arrives as the string /^somebeginning/ s/$/,appendme/ (without the single quotes), which is an invalid argument for sed.
Of course, you can escape the command with the built-in printf, as suggested in another answer here. But the command becomes not very readable after escaping. For example,
printf %q 'sed -i /^somebeginning/ s/$/,appendme/ /home/ruslan/tmp/file1.txt'
produces
sed\ -i\ /\^somebeginning/\ s/\$/\,appendme/\ /home/ruslan/tmp/file1.txt
which is not very readable and will look ugly if you print it to the screen to show progress.
That's why I prefer to read from the standard input and leave the command intact. My script prints the command strings to the screen, and I see them just in the form I have written them.
Note that the for .. in loop iterates over $IFS-separated "words" and is generally not the preferred way to traverse a list. It is usually better to invoke read -r in a while loop with an adjusted $IFS. I have used the for loop for simplicity, as the question is really about invoking the ssh command.
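For completeness, a sketch of that while/read form, using made-up host names: the list is turned into one host per line and read back without globbing or whitespace surprises.

```shell
server_list='alpha beta gamma'   # hypothetical space-separated host list
visited=''
# One host per line; IFS= plus -r keeps each line byte-for-byte intact.
while IFS= read -r h; do
  visited="$visited($h)"
done <<<"${server_list// /$'\n'}"
```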
Logging into multiple systems over SSH and using the same (or variations on the same) command is the basic use case behind ansible. The system is not without significant flaws, but for simple use cases is pretty great. If you want a more solid solution without too much faffing about with escaping and looping over hosts, take a look.
Ansible has a 'raw' module which doesn't even require any dependencies on the target hosts, and you might find that a very simple way to achieve this sort of functionality in a way that frees you from the considerations of looping over hosts, handling errors, marshalling the commands, etc and lets you focus on what you're actually trying to achieve.

Split variable into multiple variables

I understand this has been asked before, but the answers don't quite give me what I need. I pgrep for a given string, which returns a list of PIDs containing that string in a variable ($testpid in this case). I then try to split out each of the PIDs; they are separated with a space like so:
PIDS:
17717 172132 2138213
Code:
IFS=" " read -a pidarray <<< "$testpid"
echo pidarray[0]
*Instead of the echo above, I would be assigning each item in the array to its own variable.
But I get the following error:
syntax error: redirection unexpected
Your syntax was almost correct:
IFS=' ' read -r -a pidarray <<< "$testpid"
echo "${pidarray[0]}"
Note the curly braces needed for the array dereference.
More importantly, check that your shebang is #!/bin/bash, or that you executed your script with bash yourscript, not sh yourscript. The error given is consistent with a shell that doesn't recognize <<< as valid syntax -- which any remotely modern bash always will when invoked as bash; even if /bin/sh points to the same executable, invoking it as sh tells bash to disable many of its extensions to the POSIX spec.
Now, that said -- if your goal is to assign each entry to its own variable, you don't need (and shouldn't use) read -a at all!
Instead:
IFS=' ' read -r first second third rest <<<"$testpid"
printf '%s\n' "First PID is $first" "Second PID is $second"
You can try this one.
testpid="17717 172132 2138213"
set -- $testpid
echo "$1" "$2"
After that use the $1,$2,$3 to get that separately.
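Both approaches can be checked side by side; a minimal sketch:

```shell
testpid="17717 172132 2138213"

# Variant 1: read the words into an array.
IFS=' ' read -r -a pidarray <<<"$testpid"

# Variant 2: reuse the positional parameters.
set -- $testpid
first=$1 second=$2 third=$3
```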

Shell Scripting: Quotes Around Variable with \$

I've just started learning about shell scripting and have been trying to workout what's going on in this script: http://dev.cloudtrax.com/wiki/ng-cs-ip-logging
Specifically, I can't wrap my head around a couple of lines that use "\$foo" for example:
[ -z "\$plug_event" ] && return
Everything I've read and learned about shell scripting has me believing that "\$plug_event" would evaluate as a string whose value is $plug_event. This would mean that the above test would always return a 1 (i.e. the test was false), right? If so, what's the point?
I've found plenty on quotes around variables but so far I haven't been able to find a single example of this kind of usage. Is it just a typo? Unfortunately I'm nowhere near experienced enough to tell the difference yet.
All help is much appreciated, and a link to a relevant document would certainly suffice.
Cheers,
Kyle
The reason for escaping all the $s is that those lines are part of a here-doc.
The command cat > /etc/ip_logging.sh << EOF will write all of the following text into /etc/ip_logging.sh until it hits EOF; to keep the variables from being evaluated in the current script, the $ has to be escaped.
Alternatively, and to make the code easier to read, putting the terminating string in single quotes will disable variable substitution in the heredoc:
cat > /etc/ip_logging.sh << 'EOF'
[ -z "$plug_event" ]
#other stuff
EOF
will have the same result, sans the escaped $ and other shell special characters
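The two delimiter styles side by side, in a self-contained sketch:

```shell
plug_event='button'

# Unquoted delimiter: $plug_event expands while the here-doc is read.
expanded=$(cat <<EOF
value: $plug_event
EOF
)

# Quoted delimiter: everything is literal, so no backslashes are needed.
literal=$(cat <<'EOF'
value: $plug_event
EOF
)
```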
The test:
[ -z "\$plug_event" ]
is pointless. The string is never zero length; the return after it is never executed.
Whoever wrote that code did not understand what they were doing ... unless there are extenuating circumstances, such as it being part of a here-doc which is then treated specially.
But, standing on its own, the statement is pointless.
...look at the code...
# create iptables script on the fly
cat > /etc/ip_logging.sh << EOF
#!/bin/sh
. /etc/functions.sh
install_rule() {
config_get plug_event "\$1" plug_event
[ -z "\$plug_event" ] && return
pub_ip=\$(uci get dhcp.pub.ipaddr)
pub_mask=\$(uci get dhcp.pub.netmask)
priv_ip=\$(uci get dhcp.priv.ipaddr)
priv_mask=\$(uci get dhcp.priv.netmask)
iptables -I POSTROUTING -t nat -o br-\$1 -s \$pub_ip/\$pub_mask -j LOG --log-level debug --log-prefix "iplog: "
iptables -I POSTROUTING -t nat -o br-\$1 -s \$priv_ip/\$priv_mask -j LOG --log-level debug --log-prefix "iplog: "
}
config_load network
config_foreach install_rule interface
EOF
Someone did know more or less what they were up to; they are writing a script in a here-doc and need the parameters expanded when the script being generated is executed, not when it is created. They could have simplified life by using:
# create iptables script on the fly
cat > /etc/ip_logging.sh << 'EOF'
The quotes around the end marker mean that no expansions are done in the here-doc, so all the backslashes could go.
The bash manual is your friend. Shell parameter expansion is one relevant section, but it covers actual expansions, not suppressed expansions.

Reading a parameter through command prompt (starts with "-")

I am facing a problem while reading input from the command prompt in a shell script. My script's name is status.ksh, and I have to take parameters from the command prompt. The script accepts 2 parameters: the 1st is "-e" and the second is "server_name".
When I run the script like this,
status.ksh -e server_name
echo $@
is giving output "server_name" only, whereas the expected output should be "-e server_name",
and echo $1 is giving output as NULL, whereas the expected output should be "-e".
Please guide me on how to get the 1st parameter, which is "-e".
Thanks & Regards
The problem was caused by -e. This is a flag for echo.
-e enable interpretation of backslash escapes
Most unix commands allow -- to be used to separate flags from the rest of the arguments, but echo doesn't support this, so you need another command:
printf "%s\n" "$1"
If you need complex command line argument parsing, definitely go with getopts as Joe suggested.
Have you read this reference? http://www.lehman.cuny.edu/cgi-bin/man-cgi?getopts+1
You shouldn't use $1, $2, $#, etc to parse options. There are builtins that can handle this for you.
Example 2: Processing Arguments for a Command with Options
The following fragment of a shell program processes the arguments for a command that can take the options -a or -b. It also processes the option -o, which requires an option-argument:
while getopts abo: c
do
case $c in
a | b) FLAG=$c;;
o) OARG=$OPTARG;;
\?) echo $USAGE
exit 2;;
esac
done
shift `expr $OPTIND - 1`
More examples:
http://linux-training.be/files/books/html/fun/ch21s05.html
http://publib.boulder.ibm.com/infocenter/pseries/v5r3/index.jsp?topic=/com.ibm.aix.cmds/doc/aixcmds2/getopts.htm
http://www.livefirelabs.com/unix_tip_trick_shell_script/may_2003/05262003.htm
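Applied to the question's status.ksh, a minimal getopts sketch (written here in bash; ksh's getopts behaves the same for this case):

```shell
# Parse "-e server_name" the getopts way instead of peeking at $1/$2.
parse_status_args() {
  local opt server=''
  OPTIND=1                      # reset in case getopts ran earlier
  while getopts 'e:' opt; do
    case $opt in
      e) server=$OPTARG ;;
      *) return 2 ;;
    esac
  done
  printf '%s\n' "$server"
}

server=$(parse_status_args -e server_name)
```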
