How do I execute a "runuser" command, with arguments, in a bash script? - linux

I have a niche requirement to run commands stored in another config file within a wrapper script for another utility. My wrapper script (below) works for every command in that config file except those that combine "runuser" with arguments: if a command uses runuser and its "-c" command string includes arguments, the script fails.
The wrapper script
#!/bin/bash
nagios_cmd=$(grep $1 /etc/nagios/nrpe.cfg | awk -F "=" {'print $2'})
exec=$($nagios_cmd)
if [ $? -eq 0 ]
then
    #exitok
    echo $exec
    exit 0
else
    #exitcritical
    echo $exec
    exit 1001
fi
The config file
command[check_crsdb_state]=sudo /usr/lib64/nagios/plugins/check_crsdb_state
command[check_crsasm_state]=sudo /usr/lib64/nagios/plugins/check_crsasm_state
command[check_ora1_tablespace_apex]=sudo /usr/sbin/runuser -l oracle -c '/check_oracle_tablespace APEX 32000'
command[check_ora1_tablespace_lob1]=sudo /usr/sbin/runuser -l oracle -c '/check_oracle_tablespace LOB1 32000'
Successful Script Run
[root@quo-mai-ora1 /]# ./rmmwrapper.sh check_crsasm_state
OK - All nodes report 'Started,STABLE'
[root@quo-mai-ora1 /]#
Failure Script Run
[root@quo-mai-ora1 /]# ./rmmwrapper.sh check_ora1_tablespace_apex
APEX: -c: line 0: unexpected EOF while looking for matching `''
APEX: -c: line 1: syntax error: unexpected end of file
[root@quo-mai-ora1 /]#
Failure Script Run (with bash -x)
[root@quo-mai-ora1 /]# bash -x ./rmmwrapper.sh check_ora1_tablespace_apex
++ grep check_ora1_tablespace_apex /etc/nagios/nrpe.cfg
++ awk -F = '{print $2}'
+ nagios_cmd='sudo /usr/sbin/runuser -l oracle -c '\''/check_oracle_tablespace APEX 32000'\'''
++ sudo /usr/sbin/runuser -l oracle -c ''\''/check_oracle_tablespace' APEX '32000'\'''
APEX: -c: line 0: unexpected EOF while looking for matching `''
APEX: -c: line 1: syntax error: unexpected end of file
+ exec=
+ '[' 1 -eq 0 ']'
+ echo
+ exit 1001
[root@quo-mai-ora1 /]#
The Problem
You can see in the bash -x output that, for some reason, when $nagios_cmd gets executed, single quotes are placed around the spaces that separate the multiple args supplied to the resulting script (/check_oracle_tablespace). I've tried different ways of executing $nagios_cmd (using backticks instead, etc.). I've tried escaping the space characters by modifying the config file to look like this:
command[check_ora1_tablespace_apex]=sudo /usr/sbin/runuser -l oracle -c '/check_oracle_tablespace\ APEX\ 32000'
I've also tried encapsulating the command after -c on runuser in double quotes instead of single, or just no quotes at all.
I'm clearly misunderstanding something fundamental about bash. How can I get the script to execute the contents of $nagios_cmd exactly as it appears in plain text?

This looks like one of those rare cases where eval is actually the right answer. Try this:
exec=$(eval "$nagios_cmd")
Explanation: bash doesn't expand variables until fairly late in the process of parsing commands, so the string in the variable isn't parsed like it would be if it were actually part of the command. In this case, the problem is that it's expanded after quotes and escapes have been parsed, so it's too late for the single-quotes around the multi-word command to have their intended effect. See BashFAQ #50: "I'm trying to put a command in a variable, but the complex cases always fail!"
What eval does is essentially re-run the entire command parsing process from the beginning. So the variable gets expanded out to the command you want to run (including quotes, etc), and that gets parsed like it would be normally.
Note that I did put double-quotes around the variable; that's so it doesn't go through the partial-parsing process that is done to unquoted variable references, and then through the full parsing process. This one-and-a-half-times parsing process can have rare but really weird effects, so it's best avoided.
Also: eval has a well-deserved bad reputation (I've used the phrase "massive bug magnet" to describe it). This is because what it fundamentally does is treat data (e.g. the contents of variables) as executable code, so it's easy to find that you're doing things like executing parts of your filenames as commands. But in this case, your data is supposed to be a command, and is (hopefully) trusted not to contain malicious, invalid, etc content. In this case, you want the data to be treated as executable code.
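To see the difference in miniature, here's a toy case (a hypothetical command, but the same principle as the runuser line):
cmd="echo 'hello world'"
$cmd          # prints: 'hello world'  (the quote characters come out as literal text)
eval "$cmd"   # prints: hello world    (the quotes are parsed as quoting)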

Related

File redirection fails in Bash script, but not Bash terminal

I am having a problem where cmd1 works, but not cmd2 in my Bash script ending in .sh. I have made the Bash script executable.
Additionally, I can execute cmd2 just fine from my Bash terminal. I have tried to make a minimally reproducible example, but my larger goal is to run a complicated executable with command line arguments and pass output to a file that may or may not exist (rather than displaying the output in the terminal).
Replacing > with >> also gives the same error in the script, but not the terminal.
My Bash script:
#!/bin/bash
cmd1="cat test.txt"
cmd2="cat test.txt > a"
echo $cmd1
$cmd1
echo $cmd2
$cmd2
test.txt has the words "dog" and "cat" on two separate lines without quotes.
Short answer: see BashFAQ #50: I'm trying to put a command in a variable, but the complex cases always fail!.
Long answer: the shell expands variable references (like $cmd1) toward the end of the process of parsing a command line, after it's done parsing redirects (like > a is supposed to be) and quotes and escapes and... In fact, the only thing it does with the expanded value is word splitting (e.g. treating cat test.txt > a as "cat" followed by "test.txt", ">", and finally "a", rather than a single string) and wildcard expansion (e.g. if $cmd expanded to cat *.txt, it'd replace the *.txt part with a list of matching files). (And it skips word splitting and wildcard expansion if the variable is in double-quotes.)
Partly as a result of this, the best way to store commands in variables is: don't. That's not what they're for; variables are for data, not commands. What you should do instead, though, depends on why you were storing the command in a variable.
If there's no real reason to store the command in a variable, then just use the command directly. For conditional redirects, just use a standard if statement:
if [ -f a ]; then
    cat test.txt > a
else
    cat test.txt
fi
If you need to define the command at one point and use it later, or want to use the same command over and over without having to write it out in full each time, use a function:
cmd2() {
    cat test.txt > a
}
cmd2
If you need to be able to define the command differently depending on some condition, you can actually do that with a function as well:
if [ -f a ]; then
    cmd() {
        cat test.txt > a
    }
else
    cmd() {
        cat test.txt
    }
fi
cmd
Alternately, you can wrap the command (without redirect) in a function, then use a conditional to control whether it redirects:
cmd() {
    cat test.txt
}
if [ -f a ]; then
    cmd > a
else
    cmd
fi
It's also possible to wrap a conditional redirect into a function itself, then pipe output to it:
maybe_redirect_to() {
    if [ -f "$1" ]; then
        cat > "$1"
    else
        cat
    fi
}
cat test.txt | maybe_redirect_to a
(This creates an extra cat process that isn't really doing anything useful, but if it makes the script cleaner, I'd consider that worth it. In this particular case, you could minimize the stray cats by using maybe_redirect_to a < test.txt.)
As a last resort, you can store the command string in a variable, and use eval to parse it. eval basically re-runs the shell parsing process from the beginning, meaning that it'll recognize things like redirects in the string. But eval has a well-deserved reputation as a bug magnet, because it's easy for it to treat parts of the string you thought were just data as command syntax, which can cause some really weird (& dangerous) bugs.
If you must use eval, at least double-quote the variable reference, so it runs through the parsing process just once, rather than sort-of-once-and-a-half as it would unquoted. Here's an example of what I mean:
cmd3="echo '5 * 3 = 15'"
eval "$cmd3"
# prints: 5 * 3 = 15
eval $cmd3
# prints: 5 [list of files in the current directory] 3 = 15
# ...unless there are any files with shell metacharacters in their names, in
# which case something more complicated might happen.
BashFAQ #50 discusses some other possible reasons and solutions. Note that the array approach will not work here, since arrays also get expanded after redirects are parsed.
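For instance, here's a quick sketch (same test.txt as above) of why the array approach fails here: the redirect stored in an array element is passed to cat as a literal argument rather than performed as a redirection.
cmd=( cat test.txt '>' a )
"${cmd[@]}"
# cat receives three literal arguments: test.txt, > and a,
# then fails trying to open a file actually named >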
If you pop an 'eval' in front of $cmd2 it should work as expected:
#!/bin/bash
cmd2="cat test.txt > a"
eval $cmd2
If you're not sure about the operation of a script you could always use the debug mode to see if you can determine the error.
bash -x scriptname
This will run the script and display each command as it executes, with variables expanded. Hopefully this will reveal any issues with the syntax.
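For example, running the script from the question this way would produce a trace roughly like the following (trimmed to the relevant lines; the exact quoting in the trace and in cat's error message varies by version):
+ cmd2='cat test.txt > a'
+ cat test.txt '>' a
cat: '>': No such file or directory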

Bash command line arguments passed to sed via ssh

I am looking to write a simple script to perform a SSH command on many hosts simultaneously, where the hosts are generated from another script. The problem is that when I run the script with something like sed, it doesn't work properly.
It should run like sshall.sh {anything here} and it will run the {anything here} part on all the nodes in the list.
sshall.sh
#!/bin/bash
NODES=`listNodes | grep "node-[0-9*]" -o`
echo "Connecting to all nodes and running: ${@:1}"
for i in $NODES
do
    :
    echo "$i : Begin"
    echo "----------------------------------------"
    ssh -q -o "StrictHostKeyChecking no" $i "${@:1}"
    echo "----------------------------------------"
    echo "$i : Complete";
    echo ""
done
When it is run with something like whoami it works but when I run:
[root@myhost bin]# sshall.sh sed -i '/^somebeginning/ s/$/,appendme/' /etc/myconfig.conf
Connecting to all nodes and running: sed -i /^somebeginning/ s/$/,appendme/ /etc/myconfig.conf
node-1 : Begin
----------------------------------------
sed: -e expression #1, char 18: missing command
----------------------------------------
node-1 : Complete
node-2 : Begin
----------------------------------------
sed: -e expression #1, char 18: missing command
----------------------------------------
node-2 : Complete
…
Notice that the quotes disappear on the sed command when sent to the remote client.
How do I go about fixing my bash command?
Is there a better way of achieving this?
Substitute an eval-safe quoted version of your command into a heredoc:
#!/bin/bash
# ^^^^- not /bin/sh; printf %q is an extension
# Put your command into a single string, with each argument quoted to be eval-safe
printf -v cmd_q '%q ' "$@"
while IFS= read -r hostname; do
    # run bash -s remotely, with that string passed on stdin
    ssh -q -o 'StrictHostKeyChecking no' "$hostname" "bash -s" <<EOF
$cmd_q
EOF
done < <(listNodes | grep -o -e "node-[0-9*]")
Why this works reliably (and other approaches don't):
printf %q knows how to quote contents to be eval'd by that same shell (so spaces, wildcards, various local quoting methods, etc. will always be supported); there's a short demo after this list.
Arguments given to ssh are not passed to the remote command individually!
Instead, they're concatenated into a string passed to sh -c.
However: The output of printf %q is not portable to all POSIX-derived shells! It's guaranteed to be compatible with the same shell locally in use -- ksh will always parse output from printf '%q' in ksh, bash will parse output from printf '%q' in bash, etc; thus, you can't safely pass this string on the remote argument vector, because it's /bin/sh -- not bash -- running there. (If you know your remote /bin/sh is provided by bash, then you can run ssh "$hostname" "$cmd_q" safely, but only under this condition).
bash -s reads the script to run from stdin, meaning that passing your command there -- not on the argument vector -- ensures that it'll be parsed into arguments by the same shell that escaped it to be shell-safe.
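For a quick feel for what printf %q output looks like, try it on a few strings containing shell metacharacters:
$ printf '%q\n' 'a b' '$HOME' "it's"
a\ b
\$HOME
it\'s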
You want to pass the entire command -- with all of its arguments, spaces, and quotation marks -- to ssh so it can pass it unchanged to the remote shell for parsing.
One way to do that is to put it all inside single quotation marks. But then you'll also need to make sure the single quotation marks within your command are preserved in the arguments, so the remote shell builds the correct arguments for sed.
sshall.sh 'sed -i '"'"'/^somebeginning/ s/$/,appendme/'"'"' /etc/myconfig.conf'
It looks redundant, but '"'"' is a common Bourne trick to get a single quotation mark into a single-quoted string. The first quote ends single-quoting temporarily, the double-quote-single-quote-double-quote construct appends a single quotation mark, and then the single quotation mark resumes your single-quoted section. So to speak.
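A tiny demonstration of the trick in isolation:
$ echo 'it'"'"'s working'
it's working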
Another trick that can be helpful for troubleshooting is to add the -v flag to your ssh flags, which will spit out lots of text, but most importantly it will show you exactly what string it's passing to the remote shell for parsing and execution.
--
All of this is fairly fragile around spaces in your arguments, which you'll need to avoid, since you're relying on shell parsing on the opposite end.
Thinking outside the box: instead of dealing with all the quoting issues and the word-splitting in the wrong places, you could attempt to a) construct the script locally (maybe use a here-document?), b) scp the script to the remote end, then c) invoke it there. This easily allows more complex command sequences, with all the power of shell control constructs etc. Debugging (checking proper quoting) would be a breeze by simply looking at the generated script.
I recommend reading the command(s) from the standard input rather than from the command line arguments:
cmd.sh
#!/bin/bash -
# Load server_list with user@host "words" here.
cmd=$(</dev/stdin)
for h in ${server_list[*]}; do
    ssh "$h" "$cmd"
done
Usage:
./cmd.sh <<'CMD'
sed -i '/^somebeginning/ s/$/,appendme/' /path/to/file1
# other commands
# here...
CMD
Alternatively, run ./cmd.sh, type the command(s), then press Ctrl-D.
I find the latter variant the most convenient: you don't even need a here document, and there's no extra escaping. Just invoke your script, type the commands, and press the shortcut. What could be easier?
Explanations
The problem with your approach is that the quotes are stripped from the arguments by the shell. For example, the argument '/^somebeginning/ s/$/,appendme/' will be interpreted as the string /^somebeginning/ s/$/,appendme/ (without the single quotes), which is an invalid argument for sed.
Of course, you can escape the command with the built-in printf as suggested in another answer here. But the command becomes much less readable after escaping. For example,
printf %q 'sed -i /^somebeginning/ s/$/,appendme/ /home/ruslan/tmp/file1.txt'
produces
sed\ -i\ /\^somebeginning/\ s/\$/\,appendme/\ /home/ruslan/tmp/file1.txt
which is hard to read and will look ugly if you print it to the screen to show progress.
That's why I prefer to read from the standard input and leave the command intact. My script prints the command strings to the screen, and I see them just in the form I have written them.
Note, the for .. in loop iterates over $IFS-separated "words", and is generally not the preferred way to traverse a list. It is usually better to invoke read -r in a while loop with an adjusted $IFS, as sketched below. I have used the for loop for simplicity, as the question is really about invoking the ssh command.
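For reference, a minimal sketch of that preferred pattern, assuming a hypothetical hosts.txt with one host per line:
while IFS= read -r h; do
    ssh "$h" "$cmd"
done < hosts.txt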
Logging into multiple systems over SSH and using the same (or variations on the same) command is the basic use case behind ansible. The system is not without significant flaws, but for simple use cases is pretty great. If you want a more solid solution without too much faffing about with escaping and looping over hosts, take a look.
Ansible has a 'raw' module which doesn't even require any dependencies on the target hosts, and you might find that a very simple way to achieve this sort of functionality in a way that frees you from the considerations of looping over hosts, handling errors, marshalling the commands, etc and lets you focus on what you're actually trying to achieve.

Ubuntu Bash Script executing a command with spaces

I have a bit of an issue and I've tried several ways to fix it, but I can't seem to.
So I have two shell scripts.
background.sh: This runs a given command in the background and redirects output.
#!/bin/bash
if test -t 1; then
    exec 1>/dev/null
fi
if test -t 2; then
    exec 2>/dev/null
fi
"$@" &
main.sh: This file simply starts the emulator (genymotion) as a background process.
#!/bin/bash
GENY_DIR="/home/user/Documents/MyScript/watchdog/genymotion"
BK="$GENY_DIR/background.sh"
DEVICE="164e959b-0e15-443f-b1fd-26d101edb4a5"
CMD="$BK player --vm-name $DEVICE"
$CMD
This works fine when I have NO spaces in my directory. However, it breaks when I try: GENY_DIR="home/user/Documents/My Script/watchdog/genymotion"
which I have no choice about at the moment. I get an error saying that the file or directory cannot be found. I tried to put "$CMD" in quotes, but it didn't work.
You can test this by trying to run anything as a background process; it doesn't have to be an emulator.
Any advice or feedback would be appreciated. I also tried:
BK="'$BK'"
or
BK="\"$BK\""
or
BK=$( echo "$BK" | sed 's/ /\\ /g' )
Don't try to store commands in strings. Use arrays instead:
#!/bin/bash
GENY_DIR="$HOME/Documents/My Script/watchdog/genymotion"
BK="$GENY_DIR/background.sh"
DEVICE="164e959b-0e15-443f-b1fd-26d101edb4a5"
CMD=( "$BK" "player" --vm-name "$DEVICE" )
"${CMD[#]}"
Arrays properly preserve your word boundaries, so that one argument with spaces remains one argument with spaces.
Due to the way word splitting works, adding a literal backslash in front of or quotes around the space will not have a useful effect.
John1024 suggests a good source for additional reading: I'm trying to put a command in a variable, but the complex cases always fail!
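A quick way to convince yourself that the array keeps a spaced path intact (hypothetical path, same idea as the script above):
CMD=( ls -l "/tmp/My Dir" )
echo "${#CMD[@]}"   # prints 3: the array holds exactly three words
"${CMD[@]}"         # ls receives two arguments: -l and /tmp/My Dir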
try this:
GENY_DIR="home/user/Documents/My\ Script/watchdog/genymotion"
You can escape the space with a backslash.

how to pass asterisk into ls command inside bash script

Hi… Need a little help here…
I tried to emulate the DOS dir command in Linux using a bash script. Basically it's just a wrapped ls command with some parameters plus summary info. Here's the script:
#!/bin/bash
# default to current folder
if [ -z "$1" ]; then var=.;
else var="$1"; fi
# check file existence
if [ -a "$var" ]; then
    # list contents with color, folder first
    CMD="ls -lgG $var --color --group-directories-first"; $CMD;
    # sum all files size
    size=$(ls -lgGp "$var" | grep -v / | awk '{ sum += $3 }; END { print sum }')
    if [ "$size" == "" ]; then size="0"; fi
    # create summary
    if [ -d "$var" ]; then
        folder=$(find $var/* -maxdepth 0 -type d | wc -l)
        file=$(find $var/* -maxdepth 0 -type f | wc -l)
        echo "Found: $folder folders "
        echo "       $file files $size bytes"
    fi
# error message
else
    echo "dir: Error \"$var\": No such file or directory"
fi
The problem is that when the argument contains an asterisk (*), the ls within the script acts differently compared to the same ls command given directly at the prompt. Instead of returning the whole file list, the script only returns the first file. See the video below for a comparison of the two. I don't know why it behaves like that.
Does anyone know how to fix it? Thank you.
Video: problem in action
UPDATE:
The problem has been solved. Thank you all for the answers. Now my script works as expected. See the video here: http://i.giphy.com/3o8dp1YLz4fIyCbOAU.gif
The asterisk * is expanded by the shell when it parses the command line. In other words, your script doesn't get a parameter containing an asterisk, it gets a list of files as arguments. Your script only works with $1, the first argument. It should work with "$@" instead, as sketched below.
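A minimal sketch of that change, with a placeholder body standing in for the question's listing and summary logic (not the full fixed script, just the idea):
#!/bin/bash
# default to the current folder when no arguments were given
if [ $# -eq 0 ]; then set -- .; fi
for var in "$@"; do
    # ... run the existing listing and summary logic on "$var" here ...
    echo "processing: $var"
done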
This is because when you retrieve $1 you assume the shell does NOT expand *.
In fact, when * (or another glob) matches, it is expanded and broken into separate words, which are then passed as $1, $2, etc.
You're lucky that you simply got the first file. If your first file's path had contained spaces, you'd have gotten an error, because you'd only receive the first segment before the space.
Seriously, read this and especially this. Really.
And please don't do things like
CMD=whatever you get from user input; $CMD;
You are begging for trouble. Don't execute arbitrary strings from user input.
Both of the answers above already answered your question, so I'm going to be a bit more verbose.
Your terminal is (probably) running the bash interpreter. This is the program which parses your input line(s) and does "things" based on that input.
When you enter some line, bash starts the following workflow:
parsing and lexical analysis
expansion
brace expansion
tilde expansion
variable expansion
arithmetic and other substitutions
command substitution
word splitting
filename generation (globbing)
removing quotes
Only after all of the above does bash
execute some external command, like ls or dir.sh, etc.,
or carry out some "internal" action for the known keywords and builtins like echo, for, if, etc.
As you can see, the second-to-last step is filename generation (globbing). So, in your case: if test* matches some files, bash expands the wildcard characters (aka does the globbing).
So,
when you enter dir.sh test*,
and the test* matches some files,
bash does the expansion first,
and only afterward executes the command dir.sh with the already-expanded filenames,
e.g. the script gets executed (in your case) as: dir.sh test.pas test.swift
BTW, it works exactly the same way for your ls example:
bash expands ls test* to ls test.pas test.swift,
then executes ls with the above two arguments,
and ls prints the result for those two arguments.
In other words, ls never even sees the test* argument; whenever possible, bash expands the wildcard characters (* and ?).
Now back to your script: add the following line after the shebang:
echo "the $0 got these arguments: $@"
and you will immediately see the real arguments your script was executed with.
Also, in such cases it is good practice to try executing the script in debug mode, e.g.
bash -x dir.sh test*
and you will see exactly what the script does.
Also, you can do the same for your current interpreter; just enter into the terminal
set -x
and try running dir.sh test*, and you will see how bash executes the dir.sh command, as shown below. (To stop debug mode, just enter set +x.)
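With the two example files from above present, the trace would begin something like:
$ set -x
$ ./dir.sh test*
+ ./dir.sh test.pas test.swift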
Everybody is giving you valuable advice which you should definitely follow!
But here is the real answer to your question.
To pass unexpanded arguments to any executable you need to single quote them:
./your_script '*'
The best solution I have is to use the eval command, in this way:
#!/bin/bash
cmd="some command \"with_quetes_and_asterisk_in_it*\""
echo "$cmd"
eval $cmd
The eval command takes its arguments and evaluates them into a command, just as the shell does.
This solves my problem whenever I need to call a command with an asterisk '*' in it from a script.

Bash printf %q invalid directive

I want to change my PS1 in my .bashrc file.
I've found a script using printf with the %q directive to escape characters:
#!/bin/bash
STR=$(printf "%q" "PS1=\u@\h:\w\$ ")
sed -i '/PS1/c\'"$STR" ~/.bashrc
The problem is that I get this error :
script.sh: 2: printf: %q: invalid directive
Any idea? Maybe another way to escape the characters?
The printf command is built into bash. It's also an external command, typically installed in /usr/bin/printf. On most Linux systems, /usr/bin/printf is the GNU coreutils implementation.
Older releases of the GNU coreutils printf command do not support the %q format specifier; it was introduced in version 8.25, released 2016-01-20. bash's built-in printf command does -- and has for as long as bash has had a built-in printf command.
The error message implies that you're running script.sh using something other than bash.
Since the #!/bin/bash line appears to be correct, you're probably doing one of the following:
sh script.sh
. script.sh
source script.sh
Instead, just execute it directly (after making sure it has execute permission, using chmod +x if needed):
./script.sh
Or you could just edit your .bashrc file manually. The script, if executed correctly, will add this line to your .bashrc:
PS1=\\u@\\h:\\w\$\ 
(The space at the end of that line is significant.) Or you can do it more simply like this:
PS1='\u@\h:\w\$ '
One problem with the script is that it will replace every line that mentions PS1. If you just set it once and otherwise don't refer to it, that's fine, but if you have something like:
if [ ... ] ; then
    PS1=this
else
    PS1=that
fi
then the script will thoroughly mess that up. It's just a bit too clever.
Keith Thompson has given good advice in his answer. But FWIW, you can force bash to use a builtin command by preceding the command name with builtin, e.g.
builtin printf "%q" "PS1=\u@\h:\w\$ "
Conversely,
command printf "%s\n" some stuff
forces bash to use the external command (if it can find one).
command can be used to invoke commands on disk when a function with the same name exists. However, command does not invoke a command on disk in lieu of a Bash built-in with the same name; it only works to suppress invocation of a shell function. (Thanks to Rockallite for bringing this error to my attention.)
It's possible to enable or disable specific bash builtins (maybe your .bashrc is doing that to printf). See help enable for details. And I guess I should mention that you can use
type printf
to find out what kind of entity (shell function, builtin, or external command) bash will run when you give it a naked printf. You can get a list of all commands with a given name by passing type the -a option, e.g.
type -a printf
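On a typical Linux system, where printf exists both as a builtin and as /usr/bin/printf, the output looks something like:
$ type -a printf
printf is a shell builtin
printf is /usr/bin/printf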
You can use grep to see the lines in your .bashrc file that contain PS1:
grep 'PS1' ~/.bashrc
or
grep -n0 --color=auto 'PS1=' ~/.bashrc
which gives you line numbers and fancy coloured output. And then you can use the line number to force sed to just modify the line you want changed.
Eg, if grep tells you that the line you want to change is line 7, you can do
sed -i '7c\'"$STR" ~/.bashrc
to edit it. Or even better,
sed -i~ '7c\'"$STR" ~/.bashrc
which backs up the original version of the file in case you make a mistake.
When using sed -i I generally do a test run first without the -i so that the output goes to the shell, to let me see what the modifications do before I write them to the file.
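For example, to preview what the line-7 change from above would do, drop the -i and diff the result against the original:
sed '7c\'"$STR" ~/.bashrc | diff ~/.bashrc -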
