Shell Scripting: Quotes Around Variable with \$ - linux

I've just started learning about shell scripting and have been trying to work out what's going on in this script: http://dev.cloudtrax.com/wiki/ng-cs-ip-logging
Specifically, I can't wrap my head around a couple of lines that use "\$foo" for example:
[ -z "\$plug_event" ] && return
Everything I've read and learned about shell scripting has me believing that "\$plug_event" would evaluate to the literal string $plug_event. This would mean that the above test would always return 1 (i.e. the test was false), right? If so, what's the point?
I've found plenty on quotes around variables but so far I haven't been able to find a single example of this kind of usage. Is it just a typo? Unfortunately I'm nowhere near experienced enough to tell the difference yet.
All help is much appreciated, and a link to a relevant document would certainly suffice.
Cheers,
Kyle

The reason for escaping all the $s is that those lines are part of a here-doc.
The command cat > /etc/ip_logging.sh << EOF writes all of the following text into /etc/ip_logging.sh until it hits EOF; to keep the variables from being expanded in the current script, each $ has to be escaped.
Alternatively, and to make the code easier to read, putting the terminating string in single quotes will disable variable substitution in the heredoc:
cat > /etc/ip_logging.sh << 'EOF'
[ -z "$plug_event" ]
#other stuff
EOF
will have the same result, without the escaped $ and other shell special characters.
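A quick way to see the difference (the variable name here is just for illustration):
name=world
cat << EOF
unquoted delimiter: $name
EOF
# prints: unquoted delimiter: world
cat << 'EOF'
quoted delimiter: $name
EOF
# prints: quoted delimiter: $name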

The test:
[ -z "\$plug_event" ]
is pointless. The string is never zero length; the return after it is never executed.
Whoever wrote that code did not understand what they were doing ... unless there are extenuating circumstances, such as the line being part of a here-doc that is treated specially.
But, standing on its own, the statement is pointless.
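A standalone demonstration (outside any here-doc):
[ -z "\$plug_event" ] && echo zero-length || echo not zero-length
# always prints "not zero-length": the quoted string is the literal
# eleven characters $plug_event, whatever the variable holds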
...look at the code...
# create iptables script on the fly
cat > /etc/ip_logging.sh << EOF
#!/bin/sh
. /etc/functions.sh
install_rule() {
config_get plug_event "\$1" plug_event
[ -z "\$plug_event" ] && return
pub_ip=\$(uci get dhcp.pub.ipaddr)
pub_mask=\$(uci get dhcp.pub.netmask)
priv_ip=\$(uci get dhcp.priv.ipaddr)
priv_mask=\$(uci get dhcp.priv.netmask)
iptables -I POSTROUTING -t nat -o br-\$1 -s \$pub_ip/\$pub_mask -j LOG --log-level debug --log-prefix "iplog: "
iptables -I POSTROUTING -t nat -o br-\$1 -s \$priv_ip/\$priv_mask -j LOG --log-level debug --log-prefix "iplog: "
}
config_load network
config_foreach install_rule interface
EOF
Someone did know more or less what they were up to; they are writing a script in a here-doc and need the parameters expanded when the script that's being generated is executed, not when it is created. They could have simplified life by using:
# create iptables script on the fly
cat > /etc/ip_logging.sh << 'EOF'
The quotes around the end marker mean that no expansions are done in the here-doc, so all the backslashes could go.
The bash manual is your friend. Shell Parameter Expansion is one relevant section, but it covers actual expansions, not suppressed expansions; the Here Documents subsection under Redirection is what describes the quoted-delimiter behaviour.

Related

File redirection fails in Bash script, but not Bash terminal

I am having a problem where cmd1 works, but not cmd2 in my Bash script ending in .sh. I have made the Bash script executable.
Additionally, I can execute cmd2 just fine from my Bash terminal. I have tried to make a minimally reproducible example, but my larger goal is to run a complicated executable with command line arguments and pass output to a file that may or may not exist (rather than displaying the output in the terminal).
Replacing > with >> also gives the same error in the script, but not the terminal.
My Bash script:
#!/bin/bash
cmd1="cat test.txt"
cmd2="cat test.txt > a"
echo $cmd1
$cmd1
echo $cmd2
$cmd2
test.txt has the words "dog" and "cat" on two separate lines without quotes.
Short answer: see BashFAQ #50: I'm trying to put a command in a variable, but the complex cases always fail!.
Long answer: the shell expands variable references (like $cmd1) toward the end of the process of parsing a command line, after it's done parsing redirects (like > a is supposed to be) and quotes and escapes and... In fact, the only thing it does with the expanded value is word splitting (e.g. treating cat test.txt > a as "cat" followed by "test.txt", ">", and finally "a", rather than a single string) and wildcard expansion (e.g. if $cmd expanded to cat *.txt, it'd replace the *.txt part with a list of matching files). (And it skips word splitting and wildcard expansion if the variable is in double-quotes.)
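You can see the word splitting in action (assuming the test.txt from the question; the exact error wording depends on your cat):
cmd2="cat test.txt > a"
$cmd2      # cat receives three arguments: test.txt, > and a
# dog
# cat
# cat: '>': No such file or directory
# cat: a: No such file or directory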
Partly as a result of this, the best way to store commands in variables is: don't. That's not what they're for; variables are for data, not commands. What you should do instead, though, depends on why you were storing the command in a variable.
If there's no real reason to store the command in a variable, then just use the command directly. For conditional redirects, just use a standard if statement:
if [ -f a ]; then
cat test.txt > a
else
cat test.txt
fi
If you need to define the command at one point and use it later, or want to use the same command over and over without writing it out in full each time, use a function:
cmd2() {
cat test.txt > a
}
cmd2
It sounds like you may need to define the command differently depending on some condition; you can actually do that with a function as well:
if [ -f a ]; then
cmd() {
cat test.txt > a
}
else
cmd() {
cat test.txt
}
fi
cmd
Alternately, you can wrap the command (without redirect) in a function, then use a conditional to control whether it redirects:
cmd() {
cat test.txt
}
if [ -f a ]; then
cmd > a
else
cmd
fi
It's also possible to wrap a conditional redirect into a function itself, then pipe output to it:
maybe_redirect_to() {
if [ -f "$1" ]; then
cat > "$1"
else
cat
fi
}
cat test.txt | maybe_redirect_to a
(This creates an extra cat process that isn't really doing anything useful, but if it makes the script cleaner, I'd consider that worth it. In this particular case, you could minimize the stray cats by using maybe_redirect_to a < test.txt.)
As a last resort, you can store the command string in a variable, and use eval to parse it. eval basically re-runs the shell parsing process from the beginning, meaning that it'll recognize things like redirects in the string. But eval has a well-deserved reputation as a bug magnet, because it's easy for it to treat parts of the string you thought were just data as command syntax, which can cause some really weird (& dangerous) bugs.
If you must use eval, at least double-quote the variable reference, so it runs through the parsing process just once, rather than sort-of-once-and-a-half as it would unquoted. Here's an example of what I mean:
cmd3="echo '5 * 3 = 15'"
eval "$cmd3"
# prints: 5 * 3 = 15
eval $cmd3
# prints: 5 [list of files in the current directory] 3 = 15
# ...unless there are any files with shell metacharacters in their names, in
# which case something more complicated might happen.
BashFAQ #50 discusses some other possible reasons and solutions. Note that the array approach will not work here, since arrays also get expanded after redirects are parsed.
If you pop an 'eval' in front of $cmd2 it should work as expected:
#!/bin/bash
cmd2="cat test.txt > a"
eval "$cmd2"
If you're not sure about the operation of a script you could always use the debug mode to see if you can determine the error.
bash -x scriptname
This will run the script and display each command after variable expansion. Hopefully this will reveal any issues with syntax.
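For the script in the question, the trace would look roughly like this (exact output may vary between bash versions):
$ bash -x scriptname
+ cmd1='cat test.txt'
+ cmd2='cat test.txt > a'
+ echo cat test.txt
cat test.txt
+ cat test.txt
dog
cat
+ echo cat test.txt '>' a
cat test.txt > a
+ cat test.txt '>' a
dog
cat
cat: '>': No such file or directory
cat: a: No such file or directory
The quoted '>' in the last trace line shows the redirect being passed to cat as an ordinary argument.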

Passing an argument with single and double quotes to shell script

We are trying to pass an argument containing single and double quotes to a shell script and run a command with that argument. The echo prints it correctly, but the command fails with "Unterminated quoted value".
Please see the script and argument passing method:
[root@geopc]/root# ./myscript.sh 'localhost:9199/jmxrmi -O java.lang:type=GarbageCollector,name="PS MarkSweep" -A CollectionCount -K duration'
#!/bin/bash
out=`/usr/lib64/nagios/plugins/check_jmx -U service:jmx:rmi:///jndi/rmi://$1`
echo $1
echo $out
After executing, we get the following output:
$1 : localhost:9199/jmxrmi -O java.lang:type=GarbageCollector,name="PS MarkSweep" -A CollectionCount -K duration
$out : JMX CRITICAL Unterminated quoted value
When we hard-code the value of $1 in the shell script and execute it, we get the correct result.
We tried passing the argument as follows:
./myscript.sh 'localhost:9199/jmxrmi -O java.lang:type=GarbageCollector,name=\"PS MarkSweep\" -A CollectionCount -K duration'
in this case error is : JMX CRITICAL Invalid character '"' in value part of property
./myscript.sh 'localhost:9199/jmxrmi -O java.lang:type=GarbageCollector,name="\"PS MarkSweep\"" -A CollectionCount -K duration'
in this case error is JMX CRITICAL Unterminated quoted value
Can anyone please help with this?
The problem is that you're expecting quotes and escapes in variables to be interpreted the same as quotes in your script.
In Java terms, this is the same as expecting:
String input="\"foo\", \"bar\"";
String[] parameters = { input };
to be the same as this:
String[] parameters = { "foo", "bar" };
The real solution in both languages is to let the input be an array of elements:
#!/bin/bash
out=`/usr/lib64/nagios/plugins/check_jmx -U service:jmx:rmi:///jndi/rmi://"$1" "${@:2}"`
echo "$1"
echo "$out"
and then running:
./myscript.sh localhost:9199/jmxrmi -O java.lang:type=GarbageCollector,name="PS MarkSweep" -A CollectionCount -K duration
HOWEVER!!
This requires discipline and understanding.
You'll notice that this is not a drop-in solution: the script is now being called with multiple parameters. This is a very strict requirement. If this script is being called by another program, that program also has to be aware of this. It needs to be handled end-to-end.
If you store this in a config file for example, it'll be YOUR responsibility to serialize the string in such a way that YOU can deserialize it into a list. You can't just hope that it'll work out, because it won't. Unlike Java, there will be no compiler warning when you confuse strings and lists.
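For instance, a caller can keep the argument list in an array and pass it through as separate words (the values here are just the ones from the question):
args=( localhost:9199/jmxrmi -O 'java.lang:type=GarbageCollector,name="PS MarkSweep"' -A CollectionCount -K duration )
./myscript.sh "${args[@]}"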
If this is not something you want to get involved with, you can instead use eval. This is a bad practice because of security and robustness issues, but sometimes it's Good Enough:
yourcommand="/usr/lib64/nagios/plugins/check_jmx -U service:jmx:rmi:///jndi/rmi://$1"
echo "$yourcommand" # Make sure this writes the right command
eval "$yourcommand" # This will execute it as if you copy-pasted it

Bash command line arguments passed to sed via ssh

I am looking to write a simple script to perform an SSH command on many hosts simultaneously, where the list of hosts is generated by another script. The problem is that when I run the script with something like sed, it doesn't work properly.
It should run like sshall.sh {anything here} and it will run the {anything here} part on all the nodes in the list.
sshall.sh
#!/bin/bash
NODES=`listNodes | grep "node-[0-9*]" -o`
echo "Connecting to all nodes and running: ${#:1}"
for i in $NODES
do
:
echo "$i : Begin"
echo "----------------------------------------"
ssh -q -o "StrictHostKeyChecking no" $i "${@:1}"
echo "----------------------------------------"
echo "$i : Complete";
echo ""
done
When it is run with something like whoami it works but when I run:
[root@myhost bin]# sshall.sh sed -i '/^somebeginning/ s/$/,appendme/' /etc/myconfig.conf
Connecting to all nodes and running: sed -i /^somebeginning/ s/$/,appendme/ /etc/myconfig.conf
node-1 : Begin
----------------------------------------
sed: -e expression #1, char 18: missing command
----------------------------------------
node-1 : Complete
node-2 : Begin
----------------------------------------
sed: -e expression #1, char 18: missing command
----------------------------------------
node-2 : Complete
…
Notice that the quotes disappear on the sed command when sent to the remote client.
How do I go about fixing my bash command?
Is there a better way of achieving this?
Substitute an eval-safe quoted version of your command into a heredoc:
#!/bin/bash
# ^^^^- not /bin/sh; printf %q is an extension
# Put your command into a single string, with each argument quoted to be eval-safe
printf -v cmd_q '%q ' "$@"
while IFS= read -r hostname; do
# run bash -s remotely, with that string passed on stdin
ssh -q -o 'StrictHostKeyChecking no' "$hostname" "bash -s" <<EOF
$cmd_q
EOF
done < <(listNodes | grep -o -e "node-[0-9*]")
Why this works reliably (and other approaches don't):
printf %q knows how to quote contents to be eval'd by that same shell (so spaces, wildcards, various local quoting methods, etc. will always be supported).
Arguments given to ssh are not passed to the remote command individually!
Instead, they're concatenated into a string passed to sh -c.
However: The output of printf %q is not portable to all POSIX-derived shells! It's guaranteed to be compatible with the same shell locally in use -- ksh will always parse output from printf '%q' in ksh, bash will parse output from printf '%q' in bash, etc; thus, you can't safely pass this string on the remote argument vector, because it's /bin/sh -- not bash -- running there. (If you know your remote /bin/sh is provided by bash, then you can run ssh "$hostname" "$cmd_q" safely, but only under this condition).
bash -s reads the script to run from stdin, meaning that passing your command there -- not on the argument vector -- ensures that it'll be parsed into arguments by the same shell that escaped it to be shell-safe.
You want to pass the entire command -- with all of its arguments, spaces, and quotation marks -- to ssh so it can pass it unchanged to the remote shell for parsing.
One way to do that is to put it all inside single quotation marks. But then you'll also need to make sure the single quotation marks within your command are preserved in the arguments, so the remote shell builds the correct arguments for sed.
sshall.sh 'sed -i '"'"'/^somebeginning/ s/$/,appendme/'"'"' /etc/myconfig.conf'
It looks redundant, but '"'"' is a common Bourne trick to get a single quotation mark into a single-quoted string. The first quote ends single-quoting temporarily, the double-quote-single-quote-double-quote construct appends a single quotation mark, and then the single quotation mark resumes your single-quoted section. So to speak.
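A tiny test makes it concrete:
echo 'a'"'"'b'
# prints: a'b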
Another trick that can be helpful for troubleshooting is to add the -v flag to your ssh flags, which will spit out lots of text, but most importantly it will show you exactly what string it's passing to the remote shell for parsing and execution.
--
All of this is fairly fragile around spaces in your arguments, which you'll need to avoid, since you're relying on shell parsing on the opposite end.
Thinking outside the box: instead of dealing with all the quoting issues and the word-splitting in the wrong places, you could attempt to a) construct the script locally (maybe use a here-document?), b) scp the script to the remote end, then c) invoke it there. This easily allows more complex command sequences, with all the power of shell control constructs etc. Debugging (checking proper quoting) would be a breeze by simply looking at the generated script.
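A rough sketch of that approach, reusing the question's listNodes helper (the temporary path and script name are made up):
#!/bin/bash
# generate the script locally; the quoted 'EOF' keeps the sed quoting literal
script=$(mktemp /tmp/remote-cmd.XXXXXX)
cat > "$script" << 'EOF'
#!/bin/sh
sed -i '/^somebeginning/ s/$/,appendme/' /etc/myconfig.conf
EOF
for i in $(listNodes | grep "node-[0-9*]" -o)
do
scp -q "$script" "$i:/tmp/remote-cmd.sh"
ssh -q -o "StrictHostKeyChecking no" "$i" 'sh /tmp/remote-cmd.sh; rm -f /tmp/remote-cmd.sh'
done
rm -f "$script"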
I recommend reading the command(s) from the standard input rather than from the command line arguments:
cmd.sh
#!/bin/bash -
# Load server_list with user@host "words" here.
cmd=$(</dev/stdin)
for h in ${server_list[*]}; do
ssh "$h" "$cmd"
done
Usage:
./cmd.sh <<'CMD'
sed -i '/^somebeginning/ s/$/,appendme/' /path/to/file1
# other commands
# here...
CMD
Alternatively, run ./cmd.sh, type the command(s), then press Ctrl-D.
I find the latter variant the most convenient: you don't even need here documents or extra escaping. Just invoke your script, type the commands, and press the shortcut. What could be easier?
Explanations
The problem with your approach is that the quotes are stripped from the arguments by the shell. For example, the argument '/^somebeginning/ s/$/,appendme/' will be interpreted as the string /^somebeginning/ s/$/,appendme/ (without the single quotes), which is an invalid argument for sed.
Of course, you can escape the command with the built-in printf, as suggested in the other answer here. But the command becomes hard to read after escaping. For example,
printf %q 'sed -i /^somebeginning/ s/$/,appendme/ /home/ruslan/tmp/file1.txt'
produces
sed\ -i\ /\^somebeginning/\ s/\$/\,appendme/\ /home/ruslan/tmp/file1.txt
which is not very readable and will look ugly if you print it to the screen to show progress.
That's why I prefer to read from the standard input and leave the command intact. My script prints the command strings to the screen, and I see them just in the form I have written them.
Note: the for .. in loop iterates over $IFS-separated "words" and is generally not the preferred way to traverse an array. It is usually better to invoke read -r in a while loop with an adjusted $IFS. I have used the for loop for simplicity, as the question is really about invoking the ssh command.
Logging into multiple systems over SSH and using the same (or variations on the same) command is the basic use case behind ansible. The system is not without significant flaws, but for simple use cases is pretty great. If you want a more solid solution without too much faffing about with escaping and looping over hosts, take a look.
Ansible has a 'raw' module which doesn't even require any dependencies on the target hosts, and you might find that a very simple way to achieve this sort of functionality in a way that frees you from the considerations of looping over hosts, handling errors, marshalling the commands, etc and lets you focus on what you're actually trying to achieve.

Ubuntu Bash Script executing a command with spaces

I have a bit of an issue and I've tried several ways to fix it, but I can't seem to.
So I have two shell scripts.
background.sh: This runs a given command in the background and redirects output.
#!/bin/bash
if test -t 1; then
exec 1>/dev/null
fi
if test -t 2; then
exec 2>/dev/null
fi
"$#" &
main.sh: This file simply starts the emulator (genymotion) as a background process.
#!/bin/bash
GENY_DIR="/home/user/Documents/MyScript/watchdog/genymotion"
BK="$GENY_DIR/background.sh"
DEVICE="164e959b-0e15-443f-b1fd-26d101edb4a5"
CMD="$BK player --vm-name $DEVICE"
$CMD
This works fine when I have NO spaces in my directory. However, when I try to do: GENY_DIR="/home/user/Documents/My Script/watchdog/genymotion"
which I have no choice about at the moment, I get an error saying that the file or directory cannot be found. I tried to put "$CMD" in quotes but it didn't work.
You can test this by trying to run anything as a background process, doesn't have to be an emulator.
Any advice or feedback would be appreciated. I also tried:
BK="'$BK'"
or
BK="\"$BK\""
or
BK=$( echo "$BK" | sed 's/ /\\ /g' )
Don't try to store commands in strings. Use arrays instead:
#!/bin/bash
GENY_DIR="$HOME/Documents/My Script/watchdog/genymotion"
BK="$GENY_DIR/background.sh"
DEVICE="164e959b-0e15-443f-b1fd-26d101edb4a5"
CMD=( "$BK" "player" --vm-name "$DEVICE" )
"${CMD[#]}"
Arrays properly preserve your word boundaries, so that one argument with spaces remains one argument with spaces.
Due to the way word splitting works, adding a literal backslash in front of or quotes around the space will not have a useful effect.
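A quick illustration of both points (throwaway values):
arr=( "a b" )
printf '<%s>\n' "${arr[@]}"
# one argument: <a b>
str='a\ b'
printf '<%s>\n' $str
# two arguments: <a\> <b> -- the backslash is just another character here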
John1024 suggests a good source for additional reading: I'm trying to put a command in a variable, but the complex cases always fail!
try this:
GENY_DIR="home/user/Documents/My\ Script/watchdog/genymotion"
You can escape the space with a backslash.

Can I avoid using a FIFO file to join the end of a Bash pipeline to be stored in a variable in the current shell?

I have the following functions:
execIn ()
{
local STORE_INvar="${1}" ; shift
# Append a sentinel "x" so the command substitution cannot strip trailing newlines,
printf -v "${STORE_INvar}" '%s' "$( eval "$@" ; printf %s x ; )"
# then strip the sentinel again, leaving the trailing newlines intact.
printf -v "${STORE_INvar}" '%s' "${!STORE_INvar%x}"
}
and
getFifo ()
{
local FIFOfile
FIFOfile="/tmp/diamondLang-FIFO-$$-${RANDOM}"
while [ -e "${FIFOfile}" ]
do
FIFOfile="/tmp/diamondLang-FIFO-$$-${RANDOM}"
done
mkfifo "${FIFOfile}"
echo "${FIFOfile}"
}
I want to store the output of the end of a pipeline into a variable, as given to a function at the end of the pipeline. However, the only way I have found to do this that works in early versions of Bash is to use mkfifo to make a temp FIFO file. I was hoping to use file descriptors to avoid having to create temporary files. So, this works, but is not ideal:
Set Up: (before I can do this I need to have assigned a FIFO file to a var that can be used by the rest of the process)
$ FIFOfile="$( getFifo )"
The Pipeline I want to persist:
$ printf '\n\n123\n456\n524\n789\n\n\n' | grep 2 # for e.g.
The action: (I can now add) >${FIFOfile} &
$ printf '\n\n123\n456\n524\n789\n\n\n' | grep 2 >${FIFOfile} &
N.B. The need to background it with & - Problem 1: I get [1] <PID_NO> output to the screen.
The actual persist:
$ execIn SOME_VAR cat - <${FIFOfile}
Problem 2: I get more noise to the screen
[1]+ Done printf '\n\n123\n456\n524\n789\n\n\n' | grep 2 > ${FIFOfile}
Problem 3: I lose the blank lines at the start of the stream, rather than at the end as I have experienced before.
So, am I doing this the right way? I am sure that there must be a way to avoid the need of a FIFO file that needs cleanup afterwards using file descriptors, but I cannot seem to do this as I cannot assign either side of the problem to a file descriptor that is not attached to a file or a FIFO file.
I can try to resolve the problems with what I have, although to make this work properly I guess I need to pre-establish a pool of FIFO files that can be pulled in for use, or else I have a prerequisite of establishing the file before the command. So, for many reasons this is far from ideal. If anyone can advise me of a better way you would make my day/week/month/life :)
Thanks in advance...
Process substitution has been available in bash since the ancient days. You absolutely do not have a version so ancient as to be unable to use it. Thus, there's no need to use a FIFO at all:
readToVar() { IFS= read -r -d '' "$1"; }
readToVar targetVar < <(printf '\n\n123\n456\n524\n789\n\n\n')
You'll observe that:
printf '%q\n' "$targetVar"
...correctly preserves the leading newlines as well as the trailing ones.
By contrast, in a use case where you can't afford to lose stdin:
readToVar() { IFS= read -r -d '' "$1" <"$2"; }
readToVar targetVar <(printf '\n\n123\n456\n524\n789\n\n\n')
If you really want to pipe to this command, are willing to require a very modern bash, and don't mind being incompatible with job control:
set +m # disable job control
shopt -s lastpipe # in a pipeline, parent shell becomes right-hand side
readToVar() { IFS= read -r -d '' "$1"; }
printf '\n\n123\n456\n524\n789\n\n\n' | grep 2 | readToVar targetVar
The issues you claim to run into with using a FIFO do not actually exist. Put this in a script, and run it:
#!/bin/bash
trap 'rm -rf "$tempdir"' 0 # cleanup on exit
tempdir=$(mktemp -d -t fifodir.XXXXXX)
mkfifo "$tempdir/fifo"
printf '\n\n123\n456\n524\n789\n\n\n' >"$tempdir/fifo" &
IFS= read -r -d '' content <"$tempdir/fifo"
printf '%q\n' "$content" # print content to console
You'll notice that, when run in a script, there is no "noise" printed to the screen, because all that status is explicitly tied to job control, which is disabled by default in scripts.
You'll also notice that both leading and trailing newlines are correctly represented.
One idea, tell me I am crazy, might be to use the !! notation to grab the line just executed. Suppose there were a command that could terminate a pipeline and stop it actually executing, while the shell still considered it a successful execution; I am thinking of something like the true command. I could then use !! to grab that line and call my existing function to execute it with process substitution or something. I could then wrap this into an alias, something like: alias streamTo=' | true ; LAST_EXEC="!!" ; myNewCommandVariation <<<' which I think could be used something like: $ cmd1 | cmd2 | myNewCommandVariation THE_VAR_NAME_TO_SET, and the <<< from the alias would pass the var name to the command as an arg or stdin; either way, the command would not be at the end of a pipeline. How mad is this idea?
Not a full answer, but rather a first point: is there some good reason not to use mktemp to create a new file with a random name? As far as I can see, that is essentially all your getFifo() function does.
mktemp -u
will give you a fresh name without creating anything; then you can use mkfifo with that name.
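For example, keeping your naming scheme:
FIFOfile=$(mktemp -u "/tmp/diamondLang-FIFO-$$-XXXXXX")
mkfifo "$FIFOfile"
# mkfifo fails if the name appeared in the meantime, so check its exit status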
