Unexpected behaviour running remote script - linux

I'm attempting to write a script that runs on a remote machine.
As this script is complex, I've broken parts of it down into functions that I then copy into the script using typeset.
When I run the script, I get the following error:
bash: -c: line 4: syntax error: unexpected end of file
However, there is no unexpected end of file! I ensured all ifs had fis, all { had }, etc.
I've ensured all indentation uses tabs (not spaces), and ensured new-line characters are consistent.
I've stripped the code down to a minimal example. It seems as though, for a function to be valid, it needs to end in a construct such as if ... fi or while ... done.
Here's what isn't working.
func() {
    ls ~/
}
ssh -A -T user@machine_ip '
'$(typeset -f func)'
func
'
Line 4 coincides with the end } of the func function (counting lines from after ssh -A -T... as this error is happening on the remote machine).
Yet, if we add a construct to the end of the function, the home directory is printed as expected.
func() {
    ls ~/
    if [[ 1 ]]; then
        echo "Hello"
    fi
}
or
func() {
    ls ~/
    while false; do
        echo "Here"
    done
}
Output of typeset -f func is
func ()
{
    ls --color=auto ~/;
    if [[ -n 1 ]]; then
        echo "Hello";
    fi
}
I'm running Ubuntu 18.04 LTS; the remote machine is running CentOS 6.

Because bash plays with whitespace if you let it. Let me explain:
$(typeset -f func) will evaluate typeset -f func and insert its output into the current command line. If not quoted, the output also undergoes word splitting, which has the side effect of collapsing every run of whitespace to a single space. Thus, if typeset -f func prints (as it does on my system)
func ()
{
    /bin/ls --color=auto ~/
}
what you get with $(typeset -f func) is
func () { /bin/ls --color=auto ~/ }
(Try echo $(typeset -f func) if you don't believe me :D )
Now, bash is really bashful about accepting smushed-up code. For example, you may know that this is not grammatical:
if true then echo "yes" fi
and this is:
if true; then echo "yes"; fi
In the same way, the function body's closing brace is picky. Thus, this works:
func () { /bin/ls --color=auto ~/; }
but this doesn't:
func () { /bin/ls --color=auto ~/ }
Bash is, however, fine with a reserved word such as fi or done sitting just before the brace: the reserved word already terminates the compound command, so the following } is recognized as the closing brace even without a semicolon:
func () { /bin/ls --color=auto ~/; if [[ -n 1 ]]; then echo "Hello"; fi }
func () { /bin/ls --color=auto ~/; while false; do echo "Here"; done }
To combat this... try not sending stuff from command line, which mangles your whitespace, but from redirection:
ssh -A -T user@machine_ip < <(typeset -f func; echo func)
Or, simplest of all, prevent bash mangling of whitespace using double quotes:
ssh -A -T user@machine_ip "$(typeset -f func)
func"

First of all, the way you use quotes makes no sense to me.
ssh -A -T user@machine_ip '
'$(typeset -f func)'
func
'
The first two ' just produce an empty line (the second ' closes the first; where is the logic in that?).
Anyway.
$(typeset -f func)
func
the code above will execute func twice: the output of the command substitution is itself run as a command (its first word is func), and then the second line calls func again. At least, that's how it works for me.


How to avoid "stdin appears to be a pipe" error in Linux bash scripting

I'm trying to do the opposite of "Detect if stdin is a terminal or pipe?".
I'm running an application that's changing its output format because it detects a pipe on STDOUT, and I want it to think that it's an interactive terminal so that I get the same output when redirecting.
I was thinking that wrapping it in an expect script or using a proc_open() in PHP would do it, but it doesn't.
Any ideas out there?
Aha!
The script command does what we want...
script --return --quiet -c "[executable string]" /dev/null
Does the trick!
Usage:
 script [options] [file]

Make a typescript of a terminal session.

Options:
 -a, --append                   append the output
 -c, --command <command>        run command rather than interactive shell
 -e, --return                   return exit code of the child process
 -f, --flush                    run flush after each write
     --force                    use output file even when it is a link
 -q, --quiet                    be quiet
 -t[<file>], --timing[=<file>]  output timing data to stderr or to FILE
 -h, --help                     display this help
 -V, --version                  display version
Based on Chris' solution, I came up with the following little helper function:
faketty() {
    script -qfc "$(printf "%q " "$@")" /dev/null
}
The quirky-looking printf is necessary to correctly expand the script's arguments in $@ while protecting possibly quoted parts of the command (see example below).
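To see what %q adds over plain %s, here is a minimal comparison (hypothetical arguments, just for illustration):
$ printf '%q ' echo 'two words'; echo
echo two\ words
$ printf '%s ' echo 'two words'; echo
echo two words
With %s the argument boundary is lost when script re-parses the string as a command line; %q re-quotes each argument so the command inside the fake TTY sees the same argv that faketty received.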
Usage:
faketty <command> <args>
Example:
$ python -c "import sys; print(sys.stdout.isatty())"
True
$ python -c "import sys; print(sys.stdout.isatty())" | cat
False
$ faketty python -c "import sys; print(sys.stdout.isatty())" | cat
True
The unbuffer script that comes with Expect should handle this OK. If not, the application may be looking at something other than what its output is connected to, e.g. what the TERM environment variable is set to.
Referring to the previous answer: on Mac OS X, "script" can be used as below...
script -q /dev/null commands...
But, because it may replace "\n" with "\r\n" on stdout, you may also need a filter like this:
script -q /dev/null commands... | perl -pe 's/\r\n/\n/g'
If there is a pipe between these commands, you need to flush stdout. For example:
script -q /dev/null commands... | ruby -ne 'print "....\n";STDOUT.flush' | perl -pe 's/\r\n/\n/g'
I don't know if it's doable from PHP, but if you really need the child process to see a TTY, you can create a PTY.
In C:
#include <stdio.h>
#include <stdlib.h>
#include <sysexits.h>
#include <unistd.h>
#include <pty.h>

int main(int argc, char **argv) {
    int master;
    struct winsize win = {
        .ws_col = 80, .ws_row = 24,
        .ws_xpixel = 480, .ws_ypixel = 192,
    };
    pid_t child;

    if (argc < 2) {
        printf("Usage: %s cmd [args...]\n", argv[0]);
        exit(EX_USAGE);
    }

    child = forkpty(&master, NULL, NULL, &win);
    if (child == -1) {
        perror("forkpty failed");
        exit(EX_OSERR);
    }
    if (child == 0) {
        execvp(argv[1], argv + 1);
        perror("exec failed");
        exit(EX_OSERR);
    }

    /* now the child is attached to a real pseudo-TTY instead of a pipe,
     * while the parent can use "master" much like a normal pipe */
}
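On Linux, forkpty() lives in libutil, so a sketch like the one above would be built with something like (the file name is hypothetical):
cc -o ptyrun ptyrun.c -lutil
./ptyrun ls
Note that the parent in this sketch exits right after forking; a real wrapper would read the child's output from master and wait for the child to finish.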
I was actually under the impression that expect itself does create a PTY, though.
Updating @A-Ron's answer to
a) work on both Linux & macOS
b) propagate the status code indirectly (since the macOS script does not support it)
faketty () {
    # Create a temporary file for storing the status code
    tmp=$(mktemp)

    # Ensure it worked or fail with status 99
    [ "$tmp" ] || return 99

    # Produce a script that runs the command provided to faketty as
    # arguments and stores the status code in the temporary file
    cmd="$(printf '%q ' "$@")"'; echo $? > '"$tmp"

    # Run the script through /bin/sh with fake tty
    if [ "$(uname)" = "Darwin" ]; then
        # macOS
        script -Fq /dev/null /bin/sh -c "$cmd"
    else
        script -qfc "/bin/sh -c $(printf "%q " "$cmd")" /dev/null
    fi

    # Ensure that the status code was written to the temporary file or
    # fail with status 99
    [ -s "$tmp" ] || return 99

    # Collect the status code from the temporary file
    err=$(cat "$tmp")

    # Remove the temporary file
    rm -f "$tmp"

    # Return the status code
    return $err
}
Examples:
$ faketty false ; echo $?
1
$ faketty echo '$HOME' ; echo $?
$HOME
0
embedded_example () {
    faketty perl -e 'sleep(5); print "Hello world\n"; exit(3);' > LOGFILE 2>&1 </dev/null &
    pid=$!

    # do something else
    echo 0..
    sleep 2
    echo 2..

    echo wait
    wait $pid
    status=$?
    cat LOGFILE
    echo Exit status: $status
}
$ embedded_example
0..
2..
wait
Hello world
Exit status: 3
Too new to comment on the specific answer, but I thought I'd follow up on the faketty function posted by ingomueller-net above, since it recently helped me out.
I found that this was creating a typescript file that I didn't want/need so I added /dev/null as the script target file:
function faketty { script -qfc "$(printf "%q " "$@")" /dev/null ; }
There's also a pty program included in the sample code of the book "Advanced Programming in the UNIX Environment, Second Edition"!
Here's how to compile pty on Mac OS X:
man 4 pty # pty -- pseudo terminal driver
open http://en.wikipedia.org/wiki/Pseudo_terminal
# Advanced Programming in the UNIX Environment, Second Edition
open http://www.apuebook.com
cd ~/Desktop
curl -L -O http://www.apuebook.com/src.tar.gz
tar -xzf src.tar.gz
cd apue.2e
wkdir="${HOME}/Desktop/apue.2e"
sed -E -i "" "s|^WKDIR=.*|WKDIR=${wkdir}|" ~/Desktop/apue.2e/Make.defines.macos
echo '#undef _POSIX_C_SOURCE' >> ~/Desktop/apue.2e/include/apue.h
str='#include <sys/select.h>'
printf '%s\n' H 1i "$str" . wq | ed -s calld/loop.c
str='
#undef _POSIX_C_SOURCE
#include <sys/types.h>
'
printf '%s\n' H 1i "$str" . wq | ed -s file/devrdev.c
str='
#include <sys/signal.h>
#include <sys/ioctl.h>
'
printf '%s\n' H 1i "$str" . wq | ed -s termios/winch.c
make
~/Desktop/apue.2e/pty/pty ls -ld *
I was trying to get colors when running shellcheck <file> | less on Linux, so I tried the above answers, but they produce this bizarre effect where text is horizontally offset from where it should be:
In ./all/update.sh line 6:
for repo in $(cat repos); do
^-- SC2013: To read lines rather than words, pipe/redirect to a 'while read' loop.
(For those unfamiliar with shellcheck, the line with the warning is supposed to line up with where the problem is.)
In order for the answers above to work with shellcheck, I tried one of the options from the comments:
faketty() {
    0</dev/null script -qfc "$(printf "%q " "$@")" /dev/null
}
This works. I also added --return and used long options, to make this command a little less inscrutable:
faketty() {
    0</dev/null script --quiet --flush --return --command "$(printf "%q " "$@")" /dev/null
}
Works in Bash and Zsh.
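With that in place, the original use case works as hoped (assuming shellcheck is installed; -R tells less to pass the color escapes through):
faketty shellcheck myscript.sh | less -R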

Unix using grep with if

This is my code:
if [[ (grep -x $idle | grep -x $dead | grep -x $busy) || grep -x $idle1 | grep -x $dead | grep -x $busy1 ]] ./Event.log
then
echo "Events are running Successfully" >> ./Event.log
else
echo "One or more Events are down. Check the log and restart the Events." >> ./Event.log
fi
I'm getting the error
0403-057 Syntax error at line 14 : `-x' is not expected.
What's up?
In bash, [[ is syntactically a command which is terminated with the matching ]]. It is not part of the syntax of the if command, whose syntax starts:
if commands ; then
If you want to test whether a command succeeded or not, you just do that:
if grep -q pattern file; then
# grep found pattern in file
else
# grep did not find pattern in file
fi
Within a [[ command, bash expects to find a conditional expression, not another command. That's why grep -x ... is a syntax error. -x is a unary operator in a conditional expression, which is true if its argument is the name of an executable file, but in that expression it is being used as though it were a binary operator.
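To illustrate the unary operator the error message is complaining about, this is how -x is normally used inside [[ ]] (a minimal example):
if [[ -x /bin/ls ]]; then
    echo "/bin/ls exists and is executable"
fi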
If you wish to test for more than one pattern with grep, you can use the -e option to specify each pattern; grep will succeed on (or select) lines matching any of the patterns:
if grep -q -e pattern1 -e pattern2 file; then
# grep found pattern1 or pattern2 in file
else
# grep did not find either pattern in file
fi
This is a long shot, but I am guessing that you want Event.log to contain at least one member of each of the pairs. This could be done with something like
if awk "/^($idle|$idle1)$/ { ++idle; next }
        /^($dead|$dead1)$/ { ++dead; next }
        /^($busy|$busy1)$/ { ++busy; next }
        idle && dead && busy { exit 0 }
        END { exit 1 }" Event.log; then
    echo Yes
else
    echo no
fi
This collects three variables; if all of them are true, the Awk script exits with a success exit code (that's zero); otherwise, it will return failure (any nonzero value).
It would make more sense to print the result from Awk, too, but there is an awful amount of assumptions and guesswork in this answer already.
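For the record, a variant that lets Awk do the reporting itself might look like this (a sketch under the same assumptions about the variables, printing to stdout instead of appending to the log):
awk "/^($idle|$idle1)$/ { ++idle; next }
     /^($dead|$dead1)$/ { ++dead; next }
     /^($busy|$busy1)$/ { ++busy; next }
     END { if (idle && dead && busy)
               print \"Events are running Successfully\"
           else
               print \"One or more Events are down.\" }" Event.log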

How do I indirectly assign a variable in bash to take multi-line data from both Standard In, a File, and the output of execution

I have found many snippets here and in other places that answer parts of this question. I have even managed to do this in many steps in an inefficient manner. If it is possible, I would really like to find single lines of execution that will perform this task, rather than having to assign to a variable and copy it a few times to perform the task.
e.g.
executeToVar ()
{
    # Takes Arg1: NAME OF VARIABLE TO STORE IN
    # All Remaining Arguments Are Executed
    local STORE_INvar="${1}" ; shift
    eval ${STORE_INvar}=\""$( "$@" 2>&1 )"\"
}
Overall this does work, i.e. $ executeToVar SOME_VAR ls -l * will actually fill SOME_VAR with the output of the ls -l * command built from the rest of the arguments. However, if the command outputs empty lines at the end (e.g. echo -e -n '\n\n123\n456\n789\n\n', which should have two newlines at the start and the end), these are stripped by bash's command substitution. I have seen in other posts similar to this that this has been solved by adding a token 'x' to the end of the stream, e.g. turning the sub-execution into something like:
eval ${STORE_INvar}=\""$( "$@" 2>&1 ; echo -n x )"\" # <-- ( Add echo -n x )
# and then if it wasn't an indirect reference to a var:
STORE_INvar=${STORE_INvar%x}
# However no matter how much I play with:
eval "${STORE_INvar}"=\""${STORE_INvar%x}"\"
# I am unable to indirectly remove the x from the end.
Anyway, I also need two other variants of this: one that assigns the STDIN stream to the var and one that assigns the contents of a file to the var, which I assume will be variations involving $( cat ${1} ), or maybe $( cat ${1:--} ) to give me '-' (stdin) if no filename. But none of that will work until I can sort out the removal of the x that is needed to ensure accurate assignment of multi-line variables.
I have also tried (but to no avail):
IFS='' read -d '' "${STORE_INvar}" <<<"$( "$@" ; echo -n x )"
eval \"'${STORE_INvar}=${!STORE_INvar%x}'\"
This is close to optimal -- but drop the eval.
executeToVar() { local varName=$1; shift; printf -v "$varName" %s "$("$@")"; }
The one problem this formulation still has is that $() strips trailing newlines. If you want to prevent that, you need to add your own trailing character inside the subshell, and strip it off yourself.
executeToVar() {
    local varName=$1; shift
    local val="$(printf %s x; "$@"; printf %s x)"; val=${val#x}
    printf -v "$varName" %s "${val%x}"
}
If you want to read all content from stdin into a variable, this is particularly easy:
# This requires bash 4.1 for automatic fd allocation
readToVar() {
    if [[ $2 && $2 != "-" ]]; then
        exec {read_in_fd}<"$2"    # copy from named file
    else
        exec {read_in_fd}<&0      # copy from stdin
    fi
    IFS= read -r -d '' "$1" <&$read_in_fd  # read from the FD
    exec {read_in_fd}<&-                   # close that FD
}
...used as:
readToVar var < <( : "run something here to read its output byte-for-byte" )
...or...
readToVar var filename
Testing these:
bash3-3.2$ executeToVar var printf '\n\n123\n456\n789\n\n'
bash3-3.2$ declare -p var
declare -- var="
123
456
789
"
...and...
bash4-4.3$ readToVar var2 < <(printf '\n\n123\n456\n789\n\n')
bash4-4.3$ declare -p var2
declare -- var2="
123
456
789
"
What's wrong with storing in a file?
$ stuffToFile filename $(stuff)
where "stuffToFile" tests for (a) more than one argument, (b) input on a pipe:
$ ... commands ... | stuffToFile filename
and
$ stuffToFile filename < another_file
where "stuffToFile" is a function:
function stuffToFile
{
    [[ -f $1 ]] || { echo $1 is not a file; return 1; }
    [[ $# -lt 2 ]] && { cat - > $1; return; }
    echo "$*" > $1
}
so, if "stuff" has leading and trailing blank lines, then you must:
$ stuff | stuffToFile filename
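A quick sanity check of that function (a sketch; note that the -f test above means the target file must already exist):
touch out.txt
printf '\n\nhello\n\n' | stuffToFile out.txt   # via the pipe: blank lines kept
stuffToFile out.txt < another_file             # from a file: same code path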

How to put bash / awk commands into a variable ( not output of commands)

I am trying to put bash commands into a variable and later run them with an exit-code-checking function.
command1="$(awk -v previous_date=$reply -f $SCRIPT_HOME/tes.awk <(gzip -dc $logDir/*) > $OUTFILE)" # Step 1
check_exit $command1 # Step2
Here it runs the command at step 1 and always returns exit code 0.
How can I put commands in a variable and later run them with the exit function?
You can declare your function this way:
function command1 {
    awk -v "previous_date=$reply" -f "$SCRIPT_HOME/tes.awk" <(gzip -dc "$logDir"/*) > "$OUTFILE"
}
Or the older form:
command1() {
    awk -v "previous_date=$reply" -f "$SCRIPT_HOME/tes.awk" <(gzip -dc "$logDir"/*) > "$OUTFILE"
}
Pay attention to placing variables inside double quotes to prevent word splitting and pathname expansion.
Some POSIXists would prefer that you use the old version for compatibility but if you're just running your code with bash, everything can just be a preference.
And you can also have another function like checkexit which would check the exit code of the command:
function checkexit {
    "$@"
    echo "Command returned $?"
}
Example run:
checkexit command1
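If the caller also needs the exit code itself (not just the message), a small variation preserves it (a sketch):
function checkexit {
    "$@"
    local rc=$?
    echo "Command returned $rc"
    return $rc
}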
Try using a function:
do_stuff() { echo "$var"; echo that; }
You can later call this function and check the error code:
$ var="this"
$ do_stuff
this
that
$ echo $?
0
The variables defined in the global scope of your script will be visible to the function.
So for your example:
command1() { awk -v previous_date="$reply" -f "$SCRIPT_HOME/tes.awk" <(gzip -dc "$logDir"/*) > "$OUTFILE"; }
check_exit command1
By the way, I put some quotes around the variables in your function, as it's typically a good idea in case their values contain spaces.

about the Linux Shell 'While' command

code:
path=$PATH:
while [ -n $path ]
do
    ls -ld ${path%%:*}
    path=${path#*:}
done
I want to get each part of the path. When I run the script, it cannot get out of the while loop. Please tell me why. Is there some problem in 'while [ -n $path ]'?
The final cut never results in an empty string. If you have a:b:c, you'll strip off the a and then the b, but never the c. I.e., this:
${path#*:}
will always result in a non-empty string for the last piece of the path. Since -n tests for a non-empty string, your loop runs forever.
If $path doesn't have a colon in it, ${path#*:} will return $path. So you have an infinite loop.
p="foo"
$ echo ${p#*:}
foo
$ p="foo:bar"
$ echo ${p#*:}
bar
You have some bugs in your code. This should do the trick:
path=$PATH
while [[ $path != '' ]]; do
    # you can replace echo with whatever you need, like ls -ld
    echo ${path%%:*}
    if echo $path | grep ':' >/dev/null; then
        path=${path#*:}
    else
        path=''
    fi
done
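As an aside, if bash is available, you can sidestep the manual trimming entirely by splitting on the colons in one step (a sketch reusing the ls -ld from the question; note that empty PATH entries become empty array elements):
IFS=: read -r -a parts <<< "$PATH"
for p in "${parts[@]}"; do
    ls -ld "$p"
done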
Your path, once it is initialized, will always test true in [ -n $path ]: because $path is unquoted, the test collapses to [ -n ] when the variable is empty, and a one-argument test is true whenever that argument is a non-empty string (here, the literal -n). This is the main reason why you never get out of the while loop. Quoting the expansion, while [ -n "$path" ], lets the loop stop once path becomes empty (given the trailing : you appended to $PATH).
