Expect not working as expected - linux

EDIT: So ultimately what I ended up doing here was running everything straight in a mysql.expect script. Any variables that need to be updated are replaced via sed in the standard bash script used to launch mysql.expect.
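A minimal sketch of that approach (the launcher name, template file, and @MYVARIABLE@ placeholder are hypothetical; myvars.rc is from the question below):
#!/bin/bash
# launch_mysql.sh -- hypothetical launcher for the templated expect script
source ./myvars.rc
# Substitute the placeholder with the sourced value, then run the result
sed "s|@MYVARIABLE@|$MYVARIABLE|g" mysql.expect.template > mysql.expect
expect ./mysql.expect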
I have a bash script that runs expect, and automates the MySQL installation process, as you can see here. The reason expect needs to be called in this script is to source local bash variables, so I can't just run it via expect; rather, it needs to be called as follows:
if [ catch "spawn /bin/bash ./mysql.sh" error ] {
    puts "Could not spawn mysql.sh $error"
}
I know this works, because there's another script I have called "test.sh" that does the following:
#!/bin/bash
source ./myvars.rc
echo "CONNECTED" >> "./out.html"
echo "$MYVARIABLE" >> "./out.html"
This works fine; the variable is correctly added to out.html. The mysql.sh script works when called directly, but not through expect, and there are no errors. Any ideas? Thanks.

I'm not an expect expert, but you may have a syntax error in the spawn command. This seems to work:
#!/usr/bin/expect
if { [ catch {spawn /bin/bash ./mysql.sh} error ] } {
    puts "Could not spawn mysql.sh $error"
}
# This is the important part
interact
catch returns 0 (OK) if your command succeeds. You were seeing "success" because it really errored out, but you were testing the opposite condition.
So, I did some more testing and updated that a bit. You don't want the ! there, but you do want the interact. Once you spawn something with expect, you want to either use expect to process the output, or if there is none to process, just let the script run with interact.
The interesting thing is that if you use /bin/bash ./mysql.sh without interact, it will appear to do nothing and not actually run the script. If you use just ./mysql.sh, it will hang. I assume something with standard in/out happens differently between the two.
Anyway, I hope this helps and actually solves your problem. Extra added stuff because I'm geeking out here -- you probably want exec instead of spawn if you don't want to interact with your script:
#!/usr/bin/expect
if { [ catch {puts [exec /bin/bash ./mysql.sh]} error ] } {
    puts "Could not spawn mysql.sh $error"
}
The puts is there because otherwise you will lose the output of the script. If you don't care about the output, you can use:
#!/usr/bin/expect
if { [ catch {exec /bin/bash ./mysql.sh} error ] } {
    puts "Could not spawn mysql.sh $error"
}

Related

Linux command detecting commands before writing to a file

I'm looking for a command in a Linux shell script which will detect the execution status of a command before writing it into another file. The code I have consists of a set of commands, after which it logs into another file with the ">" sign. But I want to read the command's execution status before the ">" sign. Does anyone know how to do it?
There are a couple of things you could do. First, you can run bash with the -v option. This will log each command before executing it. Also, you can use the ERR trap in bash to figure out the exit status of a command.
Example:
-> ./err_return_traping.sh
Command returns 100
./err_return_traping.sh:13 exited with 100
-> cat err_return_traping.sh
#!/bin/bash
log_failure() {
    declare rs=$?
    echo "$0:$1 exited with $rs"
}
trap 'log_failure $LINENO' ERR
fail_command() {
    echo "Command returns 100"
    return 100
}
fail_command
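For the simpler case in the question (checking a command's status before its output lands in a file via >), a minimal sketch; some_command and out.log are placeholder names:
#!/bin/bash
# The if tests the command's exit status; the redirection happens either way
if some_command > out.log 2>&1; then
    echo "some_command succeeded; output written to out.log"
else
    echo "some_command failed with status $?" >&2
fi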

Can't redirect interactive shell's output to file with a script

I trying to write simple output logger. And it's just refuse to work. I can swear, it worked once and it was beautiful.
It's practice, so I don't want to use pre-build bash tools. (like script)
Code:
#!/bin/bash
# create_log.sh
exec 6>&1
exec &> log
a=0
while true
do
    sleep 1
    echo love
    ((a++))
    if [ "$a" -eq 1000 ]
    then
        break
    fi
done
exec 1>&6 6>&-
echo "Stopped doing love"
I run this script in the console as . ./create_log.sh &
And as long as the loop runs, stdout and stderr should be redirected to the log file. But they simply aren't.
The log file is full of love, but I simply cannot get date (or any other console output) redirected.
P.S. If I just type exec > log in the console, it works perfectly.
This approach needs to be run natively in the shell whose output you intend to redirect, not in any subprocess of that shell. Putting a & after a command, separating it from the next command, runs it in a subprocess rather than in the shell itself.
Consider this pair of functions (for bash 4.1 or newer):
# for this example, consider this content to belong to file-with-functions.bash
start_redir() {
    exec {orig_stdout}>&1
    exec > >(tee log >&$orig_stdout)
}
end_redir() {
    [[ $orig_stdout ]] || {
        echo "Not redirected with start_redir previously" >&2
        return 1
    }
    exec 1>&$orig_stdout
    exec {orig_stdout}>&-
}
...this can be used as follows:
. ./file-with-functions.bash # source these functions into the current shell; no &
start_redir
ls
end_redir
You can put these functions in a file that you source, but that sourcing needs to be done in the foreground, as putting anything in the background makes it happen in a subprocess, not the shell you're using itself.
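To see why the & matters, compare the two invocations (a sketch, using create_log.sh from the question above):
. ./create_log.sh     # sourced in the foreground: exec redirects this shell's output
. ./create_log.sh &   # the & forks a subshell: only the subshell's output is redirected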

Bash output happening after prompt, not before, meaning I have to manually press enter

I am having a problem getting bash to do exactly what I want, it's not a major issue, but annoying.
1.) I have a third party software I run that produces some output as stderr. Some of it is useful, some of it is routine stuff I don't care about, and I don't want the latter dumped to screen; however, I do want the useful parts of the stderr dumped to screen. I figured the best way to achieve this was to pass stderr to a function, then use conditions in that function to either show the stderr or not.
2.) This works fine. However the solution I have implemented dumps out my errors at the right time, but then returns a bash prompt, and I want to summarise the status of the errors at the end of the function; echo-ing there prints the text after the prompt, meaning that I have to press enter to get back to a clean prompt. It will become clear with the example below.
My error stream generator:
./TestErrorStream.sh
#!/bin/bash
echo "test1" >&2
My function to process this:
./Function.sh
#!/bin/bash
function ProcessErrors()
{
    while read data;
    do
        echo Line was:"$data"
    done
    sleep 5 # This is used simply to simulate the processing work I'm doing on the errors.
    echo "Completed"
}
I source the Function.sh file to make ProcessErrors() available, then I run:
2> >(ProcessErrors) ./TestErrorStream.sh
I expect (and want) to get:
user@user-desktop:~/path$ 2> >(ProcessErrors) ./TestErrorStream.sh
Line was:test1
Completed
user@user-desktop:~/path$
However what I really get is:
user@user-desktop:~/path$ 2> >(ProcessErrors) ./TestErrorStream.sh
Line was:test1
user@user-desktop:~/path$ Completed
And no clean prompt. Of course the prompt is there, but "Completed" is printed after the prompt; I want it printed before, and then a clean prompt to appear.
NOTE: This is a minimum working example, and it's contrived. While other solutions to my error stream problem are welcome I also want to understand how to make bash run this script the way I want it to.
Thanks for your help
Joey
Your problem is that the while loop stays attached to stdin until the program exits.
stdin is released at the end of TestErrorStream.sh, so your prompt comes back almost immediately, before the function has finished processing what remains.
I suggest you wrap the command inside a script so you can control how long to wait before your prompt comes back (I suggest 1 second more than the time the function is expected to need to process the remaining lines).
I successfully managed to do this like this:
./Functions.sh
#!/bin/bash
function ProcessErrors()
{
    while read data;
    do
        echo Line was:"$data"
    done
    sleep 5 # simulate required time to process end of function (after TestErrorStream.sh is over and stdin is released)
    echo "Completed"
}
./TestErrorStream.sh
#!/bin/bash
echo "first"
echo "firsterr" >&2
sleep 20 # any number here
./WrapTestErrorStream.sh
#!/bin/bash
source ./Functions.sh
2> >(ProcessErrors) ./TestErrorStream.sh
sleep 6 # <= this one is important
With the above you'll get a nice "Completed" before your prompt after 26 seconds of processing. (Works fine with or without the additional "time" command)
user@host:~/path$ time ./WrapTestErrorStream.sh
first
Line was:firsterr
Completed
real 0m26.014s
user 0m0.000s
sys 0m0.000s
user@host:~/path$
Note: the process substitution ">(ProcessErrors)" is a subprocess of the script "./TestErrorStream.sh". So when the script ends, the subprocess is no longer tied to it, nor to the wrapper. That's why we need that final "sleep 6".
Another approach avoids the arbitrary sleep altogether: open the process substitution on an explicit file descriptor and wait for it to exit:
#!/bin/bash
function ProcessErrors {
    while read data; do
        echo Line was:"$data"
    done
    sleep 5
    echo "Completed"
}
# Open subprocess
exec 60> >(ProcessErrors)
P=$!
# Do the work
2>&60 ./TestErrorStream.sh
# Close connection or else subprocess would keep on reading
exec 60>&-
# Wait for process to exit (wait "$P" doesn't work). There are many ways
# to do this too like checking `/proc`. I prefer the `kill` method as
# it's more explicit. We'd never know if /proc updates itself quickly
# among all systems. And using an external tool is also a big NO.
while kill -s 0 "$P" &>/dev/null; do
    sleep 1s
done
Off topic side-note: I'd love to see how posturing bash veterans/authors try to own this. Or perhaps they already did way way back from seeing this.

Any way to exit bash script, but not quitting the terminal

When I use the exit command in a shell script, the script terminates the terminal (the prompt). Is there any way to terminate the script but stay in the terminal?
My script run.sh is expected to be executed by being sourced directly, or sourced from another script.
EDIT:
To be more specific, there are two scripts run2.sh as
...
. run.sh
echo "place A"
...
and run.sh as
...
exit
...
when I run it with . run2.sh, and it hits the exit line in run.sh, I want it to stop and return to the terminal, staying there. But using exit, the whole terminal gets closed.
PS: I have tried using return, but the echo line still gets executed....
The "problem" really is that you're sourcing and not executing the script. When you source a file, its contents will be executed in the current shell, instead of spawning a subshell. So everything, including exit, will affect the current shell.
Instead of using exit, you will want to use return.
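That also addresses the PS in the question: run2.sh keeps running after run.sh returns, so the caller must propagate the status itself. A sketch, assuming run2.sh is sourced as in the question:
# run.sh
return 1                # instead of exit; ends run.sh when sourced
# run2.sh
. run.sh || return 1    # stop run2.sh as well if run.sh returned non-zero
echo "place A"          # now skipped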
Yes; you can use return instead of exit. Its main purpose is to return from a shell function, but if you use it within a source-d script, it returns from that script.
As §4.1 "Bourne Shell Builtins" of the Bash Reference Manual puts it:
return [n]
Cause a shell function to exit with the return value n.
If n is not supplied, the return value is the exit status of the
last command executed in the function.
This may also be used to terminate execution of a script being executed
with the . (or source) builtin, returning either n or
the exit status of the last command executed within the script as the exit
status of the script.
Any command associated with the RETURN trap is executed
before execution resumes after the function or script.
The return status is non-zero if return is used outside a function
and not during the execution of a script by . or source.
You can add an extra exit command after the return statement/command so that it works for both, executing the script from the command line and sourcing from the terminal.
Example exit code in the script:
if [ $# -lt 2 ]; then
    echo "Needs at least two arguments"
    return 1 2>/dev/null
    exit 1
fi
The line with the exit command will not be reached when you source the script, because the return command exits it first.
When you execute the script, the return command produces an error; we suppress the error message by redirecting it to /dev/null.
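A quick way to check both behaviors (a sketch, assuming the snippet above sits at the top of run.sh):
bash run.sh      # executed: return fails silently, then exit 1 ends the subshell; the terminal survives
. run.sh         # sourced: return 1 ends the script before exit is reached; the terminal stays open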
Instead of running the script using . run2.sh, you can run it using sh run2.sh or bash run2.sh
A new sub-shell will be started to run the script; it is closed at the end of the script, leaving the other shell open.
Actually, I think you might be confused by how you should run a script.
If you use sh to run a script, say, sh ./run2.sh, even if the embedded script ends with exit, your terminal window will still remain.
However if you use . or source, your terminal window will exit/close as well when the script ends.
for more detail, please refer to What is the difference between using sh and source?
This is just like putting a run function inside your script run2.sh.
You use exit inside run while sourcing your run2.sh file in the bash tty.
If you give the run function the power to exit your script, and give run2.sh
the power to exit the terminal,
then of course the run function has the power to exit your terminal.
#! /bin/sh
# use . run2.sh
run()
{
    echo "this is run"
    #return 0
    exit 0
}
echo "this is begin"
run
echo "this is end"
Anyway, I agree with Kaz that it's a design problem.
I had the same problem and from the answers above and from what I understood what worked for me ultimately was:
Have a shebang line that invokes the intended interpreter, for example,
#!/bin/bash uses bash to execute the script
I have scripts with both kinds of shebangs. Because of this, using sh or . was not reliable, as it led to mis-execution (like the script bailing out having run incompletely)
The answer therefore, was
Make sure the script has a shebang, so that there is no doubt about its intended handler.
chmod the .sh file so that it can be executed. (chmod +x file.sh)
Invoke it directly without any sh or .
(./myscript.sh)
Hope this helps someone with similar question or problem.
To write a script that can safely be run as a shell script or sourced as an rc file, the script can compare $0 and $BASH_SOURCE to determine whether exit can be safely used.
Here is a short code snippet for that
[ "X$(basename $0)" = "X$(basename $BASH_SOURCE)" ] && \
echo "***** executing $name_src as a shell script *****" || \
echo "..... sourcing $name_src ....."
I think that this happens because you are running it in source mode
with the dot
. myscript.sh
You should run that in a subshell:
/full/path/to/script/myscript.sh
'source' http://ss64.com/bash/source.html
It's correct that sourced vs. executed scripts use return vs. exit to keep the same session open, as others have noted.
Here's a related tip, if you ever want a script that should keep the session open, regardless of whether or not it's sourced.
The following example can be run directly like foo.sh or sourced like . foo.sh/source foo.sh. Either way it will keep the session open after "exiting". The arguments are passed as "$@" so that the function has access to the outer script's arguments.
#!/bin/sh
foo(){
    read -p "Would you like to XYZ? (Y/N): " response;
    [ "$response" != 'y' ] && return 1;
    echo "XYZ complete (args $#).";
    return 0;
    echo "This line will never execute.";
}
foo "$@";
Terminal result:
$ foo.sh
$ Would you like to XYZ? (Y/N): n
$ . foo.sh
$ Would you like to XYZ? (Y/N): n
$ |
(terminal window stays open and accepts additional input)
This can be useful for quickly testing script changes in a single terminal while keeping a bunch of scrap code underneath the main exit/return while you work. It could also make code more portable in a sense (if you have tons of scripts that may or may not be called in different ways), though it's much less clunky to just use return and exit where appropriate.
Also make sure to return with the expected return value. Otherwise, if you use exit, encountering it will exit your base shell, since source does not create another process (instance).
An improvement on Tzunghsing's answer, with clearer results and error redirection, for silent usage:
#!/usr/bin/env bash
echo -e "Testing..."
if [ "X$(basename $0 2>/dev/null)" = "X$(basename $BASH_SOURCE)" ]; then
echo "***** You are Executing $0 in a sub-shell."
exit 0
else
echo "..... You are Sourcing $BASH_SOURCE in this terminal shell."
return 0
fi
echo "This should never be seen!"
Or if you want to put this into a silent function:
function sExit() {
    # Safe Exit from script, not closing shell.
    [ "X$(basename $0 2>/dev/null)" = "X$(basename $BASH_SOURCE)" ] && exit 0 || return 0
}
...
# ..it has to be called with an error check, like this:
sExit && return 0
echo "This should never be seen!"
Please note that:
if you have enabled errexit in your script (set -e) and you return N with N != 0, your entire script will exit instantly (see the sketch after these notes). To see all your shell settings, use set -o.
when used in a function, the 1st return 0 is exiting the function, and the 2nd return 0 is exiting the script.
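A minimal sketch of the first note (hypothetical function name f):
#!/bin/bash
set -e              # errexit enabled
f() { return 3; }
f                   # the non-zero return status makes the whole script exit here
echo "never printed"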
if your terminal emulator doesn't have -hold you can sanitize a sourced script and hold the terminal with:
#!/bin/sh
sed "s/exit/return/g" script >/tmp/script
. /tmp/script
read
otherwise you can use $TERM -hold -e script
If a command succeeds, its return value will be 0. We can check the return value afterwards.
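For example (a sketch; my_command is a placeholder):
my_command
status=$?           # capture immediately, before $? is overwritten
if [ "$status" -eq 0 ]; then
    echo "my_command succeeded"
else
    echo "my_command failed with status $status"
fi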
Is there a “goto” statement in bash?
Here is a dirty workaround using trap which jumps only backwards.
#!/bin/bash
set -eu
trap 'echo "E: failed with exitcode $?" 1>&2' ERR
my_function () {
    if git rev-parse --is-inside-work-tree > /dev/null 2>&1; then
        echo "this is run"
        return 0
    else
        echo "fatal: not a git repository (or any of the parent directories): .git"
        # 'goto' is not a real bash command; the resulting "command not found"
        # failure trips the ERR trap and, with set -eu, aborts the script
        goto trap 2> /dev/null
    fi
}
my_function
echo "Command succeeded" # If my_function failed this line is not printed
Related:
https://stackoverflow.com/a/19091823/2402577
How to use $? and test to check function?
I couldn't find a solution, so for those who want to leave a nested script without leaving the terminal window:
# this is just a script which goes to a directory if the path satisfies a regex
wpr(){
    leave=false
    pwd=$(pwd)
    if [[ "$pwd" =~ ddev.*web ]]; then
        # echo "you're in a wordpress installation"
        wpDir=$(echo "$pwd" | grep -o '.*\/web')
        cd "$wpDir"
        return
    fi
    echo 'please be in wordpress directory'
    # to leave from outside the scope
    leave=true
    return
}
wpt(){
    # nested function which sets the $leave variable
    wpr
    # interrupts the script if $leave is true
    if $leave; then
        return;
    fi
    echo 'here is the rest of the script, executes if leave is false'
}
I have no idea whether this is useful for you or not, but in zsh, you can exit a script, but only to the prompt if there is one, by using parameter expansion on a variable that does not exist, as follows.
${missing_variable_ejector:?}
Though this does create an error message in your script, you can prevent it with something like the following.
{ ${missing_variable_ejector:?} } 2>/dev/null
1) exit 0 will come out of the script if it is successful.
2) exit 1 will come out of the script if it is a failure.
You can try the above two based on your requirement.

Bash - error message 'Syntax error: "(" unexpected'

For some reason, this function is not working properly. The terminal is outputting
newbootstrap.sh: 2: Syntax error: "(" unexpected
Here is my code (line 2 is function MoveToTarget() {)
#!/bin/bash
function MoveToTarget() {
    # This takes two arguments: source and target
    cp -r -f "$1" "$2"
    rm -r -f "$1"
}
function WaitForProcessToEnd() {
    # This takes one argument. The PID to wait for
    # Unlike the AutoIt version, this sleeps for one second
    while [ $(kill -0 "$1") ]; do
        sleep 1
    done
}
function RunApplication() {
    # This takes one argument, the path to the thing to execute
    exec "$1"
}
# Our main code block
pid="$1"
SourcePath="$2"
DestPath="$3"
ToExecute="$4"
WaitForProcessToEnd $pid
MoveToTarget $SourcePath, $DestPath
RunApplication $ToExecute
exit
You're using the wrong syntax to declare functions. Use this instead:
MoveToTarget() {
# Function
}
Or this:
function MoveToTarget {
# function
}
But not both.
Also, I see that later on you use commas to separate arguments (MoveToTarget $SourcePath, $DestPath). That is also a problem. Bash uses spaces to separate arguments, not commas. Remove the comma and you should be golden.
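Putting both fixes together, the corrected calls from the question would look like this (a sketch; quoting the variables is also good practice):
WaitForProcessToEnd "$pid"
MoveToTarget "$SourcePath" "$DestPath"
RunApplication "$ToExecute"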
I'm also new to defining functions in Bash scripts. I'm using Bash version 4.3.11(1)-release (x86_64-pc-linux-gnu) on Ubuntu 14.04 (Trusty Tahr).
I don't know why, but the definition that starts with the keyword function never works for me.
A definition like the following
function check_and_start {
    echo Hello
}
produces the error message:
Syntax error: "}" unexpected
If I put the { on a new line like:
function my_function
{
    echo Hello.
}
It prints a Hello. when I run the script, even if I don't call this function at all, which is also not what we want.
I don't know why this doesn't work, because I also looked at many tutorials and they all put the opening curly brace at the end of the first line. Maybe it's the version of Bash we use? Anyway, I just put it here for your information.
I have to use the C-style function definition:
check_and_start() {
    echo $1
}
check_and_start World!
check_and_start Hello,\ World!
and it works as expected.
If you encounter "Syntax error: "(" unexpected", then use "bash" instead of using "sh".
For example:
bash install.sh
I had the same issue. I was running scripts on Ubuntu sometimes with sh and sometimes with bash. It seems running scripts with sh (which is dash on Ubuntu) causes the issue, while running them with bash works fine.
