Dynamically providing default input to called scripts from a shell script - Linux

I have a shell script, main_script.sh, which will in turn call three other scripts (a1.sh, a2.sh, a3.sh).
When a1.sh/a2.sh/a3.sh run, each prompts me to answer Y/N.
I know that each of a1.sh/a2.sh/a3.sh will need two Y/N answers.
How can I implement main_script.sh so that I don't have to answer the Y/N prompts during execution?

It depends on how the scripts are written. You mentioned that each script needs two Y answers. I presume each Y must be followed by the enter key (a newline). In that case, it could be as simple as this for main_script.sh:
#!/bin/bash
echo $'Y\nY\n' | bash a1.sh
echo $'Y\nY\n' | bash a2.sh
echo $'Y\nY\n' | bash a3.sh
Above, the echo command sends two Y and newline characters to each of the scripts. You can adjust this as needed.
Some scripts will insist that interactive input come from the terminal, not stdin. Such scripts are harder, but not impossible, to fool. For them, use expect/pexpect.
More details
Let's look in more detail at one of those commands:
echo $'Y\nY\n' | bash a1.sh
The | is the pipe symbol. It connects the standard output of the echo command to the standard input of the a1.sh command. If a1.sh is amenable, this lets us pre-answer any and all questions that a1.sh asks.
In this case, the output of the echo command is $'Y\nY\n'. This is bash's ANSI-C quoting syntax, meaning Y, followed by a newline character (written \n), followed by another Y and newline. A newline character is the same thing the enter or return key generates.
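If you need exact control over what is sent (echo itself appends one extra trailing newline), printf and yes are handy alternatives. A minimal sketch, assuming the scripts simply read their answers line by line from stdin:
#!/bin/bash
# printf emits exactly the characters given: two "Y" answers.
printf 'Y\nY\n' | bash a1.sh
# yes repeats its argument forever; it stops (via SIGPIPE) once the
# script exits, so you don't need to know how many prompts there are.
yes Y | bash a2.sh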
Using expect
If your script does not accept input on stdin, then expect can be used to automate the interaction with the script. As an example:
#!/bin/sh
# expect drives each script as if a user were typing at a terminal.
# "?" below is a glob pattern that matches any single character, so it
# fires as soon as the script prints any prompt text.
expect <<EOF1
spawn a1.sh
expect "?"
send "Y\r"
expect "?"
send "Y\r"
sleep 1
EOF1
expect <<EOF2
spawn a2.sh
expect "?"
send "Y\r"
expect "?"
send "Y\r"
sleep 1
EOF2

Related

Unix: What does cat by itself do?

I saw the line data=$(cat) in a bash script (I assumed it was just declaring an empty variable) and am mystified as to what it could possibly do.
I read the man pages, but they don't have an example or explanation of this. Does this capture stdin or something? Is there any documentation on it?
EDIT: Specifically, how the heck does data=$(cat) allow this hook script to run?
#!/bin/bash
# Runs all executable pre-commit-* hooks and exits after,
# if any of them was not successful.
#
# Based on
# http://osdir.com/ml/git/2009-01/msg00308.html
data=$(cat)
exitcodes=()
hookname=`basename $0`
# Run each hook, passing through STDIN and storing the exit code.
# We don't want to bail at the first failure, as the user might
# then bypass the hooks without knowing about additional issues.
for hook in $GIT_DIR/hooks/$hookname-*; do
test -x "$hook" || continue
echo "$data" | "$hook"
exitcodes+=($?)
done
https://github.com/henrik/dotfiles/blob/master/git_template/hooks/pre-commit
cat concatenates its input to its output.
In the context of the variable capture you posted, the effect is to assign the script's standard input to the variable.
The command substitution $(command) returns the command's output; the assignment assigns that string to the variable; and in the absence of a file name argument, cat reads and prints standard input.
The Git hook script you found this in captures the commit data from standard input so that it can be repeatedly piped to each hook script separately. You only get one copy of standard input, so if you need it multiple times, you need to capture it somehow. (I would use a temporary file, and quote all file name variables properly; but keeping the data in a variable is certainly okay, especially if you only expect fairly small amounts of input.)
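As a rough sketch of the temporary-file variant mentioned above (assuming mktemp is available; names mirror the hook script):
#!/bin/bash
# Capture stdin once into a temp file instead of a shell variable.
tmpfile=$(mktemp) || exit 1
trap 'rm -f "$tmpfile"' EXIT
cat > "$tmpfile"
hookname=$(basename "$0")
for hook in "$GIT_DIR/hooks/$hookname"-*; do
    test -x "$hook" || continue
    "$hook" < "$tmpfile"   # each hook gets a fresh copy of stdin
done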
Demonstration:
$ temp=$(cat)
hello how
are you?
$ echo $temp
hello how are you?
(A single Ctrl-D on a line by itself after "are you?" terminates the input. Note that the unquoted $temp is word-split by the shell, which is why the two lines come back joined by a space; echo "$temp" would preserve the newline.)
As the manual says:
cat - concatenate files and print on the standard output
and also:
cat: Copy standard input to standard output.
Here, cat gathers everything on stdin into a single string, which is assigned to the variable temp.
Say your bash script script.sh is:
#!/bin/bash
data=$(cat)
Then, the following commands will store the string STR in the variable data:
echo STR | bash script.sh
bash script.sh < <(echo STR)
bash script.sh <<< STR
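As a quick check (a trivial test script, not from the original post), have the script print what it captured:
#!/bin/bash
data=$(cat)               # read everything from stdin
echo "captured: $data"
Running echo STR | bash script.sh then prints captured: STR.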

Making the script issue "enter" after executing command in shell script [duplicate]

I've created a really simple bash script that runs a few commands.
One of these commands needs user input at runtime, i.e. it asks the user "do you want to blah blah blah?". I want to simply send an enter keypress to it so that the script is completely automated.
I won't have to wait for the input or anything at runtime; it's enough to just send the keypress and let the input buffer handle the rest.
echo -ne '\n' | <yourfinecommandhere>
or taking advantage of the implicit newline that echo generates (thanks Marcin)
echo | <yourfinecommandhere>
You can just use yes.
# yes "" | someCommand
You might find the yes command useful.
See man yes
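A few illustrative uses (someCommand is a placeholder):
yes | someCommand        # answer every prompt with "y"
yes "" | someCommand     # answer every prompt with a bare Enter
yes n | someCommand      # answer every prompt with "n"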
Here is a sample using expect:
#!/usr/bin/expect
set timeout 360
# Replace my_command with your command. (In Tcl, a trailing # does not
# start a comment, so this note has to live on its own line.)
spawn my_command
expect "Do you want to continue?" { send "\r" }
Check: man expect for further information.
You could make use of expect (man expect comes with examples).
I know this is old, but hopefully someone will find it helpful.
If you have multiple user inputs to handle, you can use process substitution, with echo acting as a 'file' for cat to supply the first input:
# cat ignores stdin if it has a file to look at
cat <(echo "selection here") | command
and then you can handle the subsequent inputs by appending an endless stream of answers from yes as a second 'file' (piping cat into yes would not work, because yes ignores its stdin):
cat <(echo "selection here") <(yes y) | command
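An equivalent form using a command group, in case process substitution is unavailable (for example in plain sh; the command and answers are placeholders):
{ echo "selection here"; yes y; } | command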

Passing variable to `expect` in bash array

I am trying to use a for loop to iterate over IP addresses in a bash array; for each address it logs in, runs a script, and then exits. The array is called ${INSTANCE_IPS[@]}. The following code doesn't work, though, as expect doesn't seem to be able to read the variable $instance.
for instance in ${INSTANCE_IPS[@]}
do
echo $instance
/usr/bin/expect -c '
spawn ssh root@$instance;
expect "?assword: ";
send "<password>\r";
expect "# ";
send ". /usr/local/bin/bootstrap.sh\r";
expect "# ";
send "exit\r" '
done
However, expect complains with:
can't read "instance": no such variable
while executing
"spawn ssh root@$instance"
There is another question on Stack Overflow, located here, that uses environment variables to achieve this, but it doesn't let me iterate through different IP addresses the way an array does.
Any help is appreciated.
Cheers
The problem is with quoting. Single quotes around the whole block prevent Bash from expanding variables ($instance).
You need to switch to double quotes so that Bash performs the expansion; double quotes inside double quotes then have to be escaped with backslashes. (Note that single quotes are not quoting characters to Tcl/expect, so they cannot be substituted for the inner double quotes.)
Try instead:
for instance in ${INSTANCE_IPS[#]}
do
echo $instance
/usr/bin/expect -c "
spawn ssh root#$instance;
expect '?assword: ';
send '<password>\r';
expect '# ';
send '. /usr/local/bin/bootstrap.sh\r';
expect '# ';
send 'exit\r' "
done
for instance in ${INSTANCE_IPS[@]} ; do
echo $instance
/usr/bin/expect -c '
spawn ssh root@'$instance' "/usr/local/bin/bootstrap.sh"
expect "password:"
send "<password>\r"
expect eof'
done
From the ssh man page:
If command is specified, it is executed on the remote host instead of a login shell.
Specifying a command means expect doesn't have to wait for # to run your program and then wait for another # just to send exit. Instead, when you give ssh a command, it executes that command, the command exits when done, and ssh closes the connection automatically.
Alternately, put the value in the environment, where expect can find it:
for instance in ${INSTANCE_IPS[@]} ; do
echo $instance
the_host=$instance /usr/bin/expect -c '
spawn ssh root@$env(the_host) ...
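Filled out along the lines of the previous example, that might look like this (password placeholder kept from above):
for instance in ${INSTANCE_IPS[@]} ; do
echo $instance
the_host=$instance /usr/bin/expect -c '
spawn ssh root@$env(the_host) "/usr/local/bin/bootstrap.sh"
expect "password:"
send "<password>\r"
expect eof'
done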
Old thread, and one of many, but I've been working with expect for several days. For anyone who comes across this, I believe I've found a workable solution to the problem of passing bash variables into an expect -c script:
#!/usr/bin/env bash
password="TopSecret"
read -d '' exp << EOF
set user "John Doe"
puts "\$user"
puts "$password"
EOF
expect -c "$exp"
Note that escaping quotation marks is the typically cited issue (as @Roberto Reale stated above), which I've solved here with a heredoc: bash evaluates its variables inside the heredoc before the resulting string is handed to expect -c. In exchange, all native expect variables must be escaped as \$ instead, but this greatly simplifies the problem with little effort. Let me know if you find any issues with this proof of concept.
tl;dr: I've been creating an expect daemon script with user authentication, and I figured this out after spending a whole day on separate bash/expect scripts: encrypting my prompted password (via bash) with a different /dev/random salt on each iteration, saving the encrypted password to a temp file, and passing the salt to the expect script (which strongly discourages, though does not prevent, discovering the password via ps, since the expect script could be replaced). Now I should be able to keep it in memory instead.

How can expect interpret an escaped character as a command character

I'd like to be able to pass a long command to expect. It's really several commands in one. First, here's my expect script:
#!/usr/bin/expect -f
set timeout -1
spawn telnet xxx.xxx.xxx.xxx
expect "*?username:*"
send "someusername\r"
expect "*?assword:*"
send "somepassword\r"
# Here's the command I'd like to pass from the command prompt
set command [lindex $argv 0]
send "$command\r"
send "exit\r"
I would then run this script like so:
./expectscript "mkdir /usr/local/dir1\ncd /usr/local/dir1\ntouch testfile"
Notice that I put "\n" between the commands to initiate an enter, as though each command were run before moving on to the next.
I know you could separate the commands with ";", but for this particular exercise I'd like expect to interpret each "\n" as a "\r", so that expect would behave as though it were written like this:
send "mkdir /usr/local/dir1\r"
send "cd /usr/local/dir1\r"
send "touch testfile\r"
The question then becomes: how can expect interpret the "\n" as "\r"? I've tried putting "\r" in the argument instead of "\n", but that doesn't work.
Thanks for the input.
When I do a simple experiment, I find that the \n in the argument is not converted by my shell (bash) into a newline; it remains literal. You can check this for yourself by using puts to print out the command-line argument, like this:
puts [lindex $argv 0]
Working around this takes a little work to split things up. Alas, Tcl's split command does not split on multi-character sequences (it splits on any of a set of single characters instead), so we need a different approach. However, Tcllib has exactly what we need: the splitx command. With that, we do this (based on @tensaix2j's answer):
#!/usr/bin/expect -f
package require Expect;   # Good practice to put this explicitly
package require textutil::split; # Part of Tcllib
# ... add your stuff here ...
foreach line [textutil::split::splitx [lindex $argv 0] {\\n}] {
send "$line\r"
# Wait for response and/or prompt?
}
# ... add your stuff here ...
If you don't have Tcllib installed and configured for use with Expect, you can also snarf the code for splitx directly out of Tcllib's source, as long as you acknowledge the license it's under (standard Tcl licensing rules).
foreach cmd [ split $command \n ] {
send "$cmd\r\n"
}
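Note that this simpler version splits on real newline characters, so it works when the caller passes actual newlines rather than the literal two-character sequence \n, e.g. with bash's $'...' quoting:
./expectscript $'mkdir /usr/local/dir1\ncd /usr/local/dir1\ntouch testfile'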

Is it possible to make stdout and stderr output be of different colors in XTerm or Konsole?

Is it even achievable?
I would like the output from a command’s stderr to be rendered in a different color than stdout (for example, in red).
I need such a modification to work with the Bash shell in the Konsole, XTerm, or GNOME Terminal terminal emulators on Linux.
Here's a solution that combines some of the good ideas already presented.
Create a function in a bash script:
color() ( set -o pipefail; "$@" 2>&1>&3 | sed $'s,.*,\e[31m&\e[m,' >&2 ) 3>&1
Use it like this:
$ color command -program -args
It will show the command's stderr in red.
Keep reading for an explanation of how it works. There are some interesting features demonstrated by this command.
color()... — Creates a bash function called color.
set -o pipefail — A shell option that makes a pipeline return a failing exit status if any command in it fails, so the function reports the wrapped command's status rather than sed's. It is set inside the subshell created by the parentheses, so the pipefail option in the outer shell is unchanged.
"$@" — Executes the arguments to the function as a new command. "$@" is equivalent to "$1" "$2" ...
2>&1 — Redirects the stderr of the command to stdout so that it becomes sed's stdin.
>&3 — Shorthand for 1>&3, this redirects stdout to a new temporary file descriptor 3. 3 gets routed back into stdout later.
sed ... — Because of the redirects above, sed's stdin is the stderr of the executed command. Its function is to surround each line with color codes.
$'...' — A bash quoting construct that interprets backslash-escaped characters.
.* — Matches the entire line.
\e[31m — The ANSI escape sequence that makes the following characters red.
& — The sed replacement token that expands to the entire matched string (the whole line in this case).
\e[m — The ANSI escape sequence that resets the color.
>&2 — Shorthand for 1>&2, this redirects sed's stdout to stderr.
3>&1 — Redirects the temporary file descriptor 3 back into stdout.
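A quick way to try it (hypothetical test command; the second echo writes to stderr and should come out red):
$ color sh -c 'echo plain stdout; echo red stderr >&2'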
Here's an extension of the same concept that also makes STDOUT green:
function stdred() (
set -o pipefail;
(
"$#" 2>&1>&3 |
sed $'s,.*,\e[31m&\e[m,' >&2
) 3>&1 |
sed $'s,.*,\e[32m&\e[m,'
)
You can also check out stderred: https://github.com/sickill/stderred
I can't see that there is any way for the terminal emulator to do this.
The interface between the terminal emulator and the shell/app is a pseudo-tty: the terminal emulator sits on the master side, and the shell/app on the slave side. The shell/app has both stdout and stderr connected to the same pty, so when the terminal emulator reads the shell/app's output from the pty, it can no longer tell which bytes were written to stdout and which to stderr.
You will have to use one of the solutions that intercepts the data between the application and the slave-pty and inserts escape codes to control the terminal output colo(u)r.
Here is a little Awk script that will print everything you pass it in red.
#!/usr/bin/awk -f
{ printf("%c[%dm%s%c[0m\n", 0x1B, 31, $0, 0x1B); fflush() }
It simply prints each line it receives on stdin within the necessary escape codes to display it in red. It is followed by an escape code to reset the terminal.
(If you need a different color, change the second argument in the above printf call from 31 to the number corresponding to the desired color.)
Save it to colr.awk, do a chmod a+x, and use it like so:
$ my_program | ./colr.awk
It has the drawback that lines may not be displayed in order, because stderr goes directly to the console, while stdout is piped through an additional process.
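If you only want stderr colored, you can route just that stream through the script with bash process substitution (output ordering between the two streams is still not guaranteed):
$ my_program 2> >(./colr.awk >&2)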
A simple trick for coloring a command's output in red is to pipe it through grep:
program | grep .
This relies on grep's match highlighting: the pattern . matches every character, so each line comes out in the match color (red by default; use grep --color=auto if your grep doesn't highlight by default). It should not require installing anything, as grep is already installed everywhere. One caveat: empty lines are filtered out, since . requires at least one character.
Taken from Dennis's comment on superuser.com.
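To apply the same trick to stderr only, as the question asks, the stream can be routed through grep with process substitution; a sketch (--color=always forces highlighting even though grep's output passes through a redirect):
$ program 2> >(grep --color=always . >&2)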
I think you should use the standard escape sequences on stderr. Have a look at this.
Hilite will do this. It's a lightweight solution, but you have to invoke it for each command, e.g. hilite gcc myprog.c. A more radical approach is built into my experimental shell, Gush, which shows stderr from all commands in red and stdout in black. Either way is very useful for software builds, where you get lots of output with a few easily missed error messages mixed in.
