"\r" is ignored in TCL Expect in Cygwin - cygwin

I have a Tcl script that accesses a serial port through Expect in Cygwin. I noticed that the \r is simply ignored, so the serial console never replies.
spawn ./plink.exe -serial COM$priuart -sercfg 115200,8,n,1,N
set id $spawn_id
set timeout 30
log_user 1
exp_send -i $id "\r"
expect -i $id -re ".*>" {exp_send -i $id "sys rev\r"}
expect -i $id -re ".*>" {set temp $expect_out(buffer)
Note that a similar issue in Cygwin itself was resolved by adding -o igncr; however, the problem still exists when the Tcl script is called.
Any thoughts?

The exact thing you need to exp_send to simulate the pressing of the Return key can vary; the \r (a carriage return) is the right thing for classic Unix systems, but might not be correct in your case, especially since you're ending up talking to a serial line (which can add its own layer of complexity). It's entirely possible that you'll need to send either \n (a newline character) or \r\n (carriage-return/newline sequence) instead. The simplest thing is for you to experiment and see what works.
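For example, a quick way to run that experiment (a sketch; it reuses $id, the timeout, and the ".*>" prompt pattern from the question):
foreach eol [list "\r" "\n" "\r\n"] {
exp_send -i $id $eol
expect -i $id -re ".*>" {
puts "prompt seen after [string map [list \r CR \n LF] $eol]"
} timeout {
puts "no reply"
}
}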
Don't forget, when you change the terminator, to change it in all the places where it is used.
Also be aware that Tcl can talk to serial lines directly, and spawn can be told to use an already opened channel. This might work better for you…
# The name *is* magical, especially for larger port numbers
set channel [open \\\\.\\com$priuart r+]
fconfigure $channel -mode 115200,n,8,1 -buffering none
spawn -open $channel
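Once spawn -open has adopted the channel, the rest of the script can stay as it was; for example (a sketch reusing the question's prompt pattern):
set id $spawn_id
exp_send -i $id "sys rev\r"
expect -i $id -re ".*>"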

Related

Tcl expects output from tclsh to shell script

I've been searching for the last few days for a solution to my problem, so finally I came here.
I have to write a Tcl script that will be run from a Linux/Debian system. The script should log into a Cisco router and then enter the special tclsh mode, where it gets the SNMP value that represents txload. After a moment I get the value in tclsh mode (which supports SNMP), but I can't get it back into my script. It still exists only in the router's tclsh mode. My script should change some other values depending on the value I already have in tclsh, but tclsh is not needed anymore.
So here's my question: how can I get the $ifnum value out of tclsh so that I can still use it in my script for some loops, where $ifnum will be the condition?
As you can see, the script is not finished yet, but as I said, I'm stuck here and trying to figure it out:
#!/usr/bin/expect -f
set gateway [lindex $argv 0]
set serial [lindex $argv 1]
set timeout 5
spawn telnet $gateway
expect "Password:"
send "cisco\r"
expect "*>"
send "en\r"
expect "Password:"
send "cisco\r"
expect "*#"
send "set ifnumstr \[snmp_getone odczyt 1.3.6.1.4.1.9.2.2.1.1.24.6\]\r"
expect "*}"
send "regexp \{val='(\[0-9\]*)'\} \$ifnumstr \{\} ifnum \r"
expect "*#"
send "puts \$ifnum \r"
expect "*#"
I'd make it print out the thing you're looking for between markers, and then use expect in its RE matching mode to grab the value.
send "puts \"if\\>\$ifnum<if\"\r"
expect -re {if>(.*?)<if}
# Save the value in a variable in the *local* script, not the remote one
set ifnum $expect_out(1,string)
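Once the value lives in the local script, you can use it like any other Tcl variable, e.g. as a loop bound (a sketch; the command sent each iteration is a placeholder, and "*#" is the prompt pattern from the session above):
for {set i 1} {$i <= $ifnum} {incr i} {
send "puts \"iteration $i\"\r"
expect "*#"
}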

Passing variable to `expect` in bash array

I am trying to use a for loop to iterate over IP addresses (in a bash array), log in, run a script, and then exit. The array is called ${INSTANCE_IPS[@]}. The following code doesn't work, though, as expect doesn't seem to be able to see the variable $instance.
for instance in ${INSTANCE_IPS[@]}
do
echo $instance
/usr/bin/expect -c '
spawn ssh root@$instance;
expect "?assword: ";
send "<password>\r";
expect "# ";
send ". /usr/local/bin/bootstrap.sh\r";
expect "# ";
send "exit\r" '
done
However, expect complains with:
can't read "instance": no such variable
while executing
"spawn ssh root#$instance"
There is another question on Stack Overflow that uses environment variables to achieve this, but it doesn't let me iterate through different IP addresses the way I can with an array.
Any help is appreciated.
Cheers
The problem is with quoting. Single quotes around the whole block prevent Bash from expanding variables ($instance).
You need to switch to double quotes so that Bash expands $instance. The double quotes inside the script then have to be escaped, since Tcl does not treat single quotes as quoting characters.
Try instead:
for instance in ${INSTANCE_IPS[@]}
do
echo $instance
/usr/bin/expect -c "
spawn ssh root@$instance;
expect \"?assword: \";
send \"<password>\r\";
expect \"# \";
send \". /usr/local/bin/bootstrap.sh\r\";
expect \"# \";
send \"exit\r\" "
done
for instance in ${INSTANCE_IPS[@]} ; do
echo $instance
/usr/bin/expect -c '
spawn ssh root@'$instance' "/usr/local/bin/bootstrap.sh"
expect "password:"
send "<password>\r"
expect eof'
done
From the ssh man page:
If command is specified, it is executed on the remote host instead of a login shell.
Specifying a command means expect doesn't have to wait for # to execute your program, then wait for another # just to send the command exit. Instead, when you specify a command to ssh, it executes that command; it exits when done; and then ssh automatically closes the connection.
Alternatively, put the value in the environment, where expect can find it:
for instance in ${INSTANCE_IPS[@]} ; do
echo $instance
the_host=$instance /usr/bin/expect -c '
spawn ssh root@$env(the_host) ...
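For completeness, a minimal runnable sketch of this environment-variable variant (the prompt, password, and bootstrap command are carried over from the answers above):
for instance in ${INSTANCE_IPS[@]} ; do
echo $instance
the_host=$instance /usr/bin/expect -c '
spawn ssh root@$env(the_host) "/usr/local/bin/bootstrap.sh"
expect "assword:"
send "<password>\r"
expect eof'
done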
Old thread, and one of many, but I've been working with expect for several days. For anyone who comes across this, I believe I've found a workable solution to the problem of passing bash variables into an expect -c script:
#!/usr/bin/env bash
password="TopSecret"
read -d '' exp << EOF
set user "John Doe"
puts "\$user"
puts "$password"
EOF
expect -c "$exp"
Please note that escaped quotation marks are the typically cited issue (as @Roberto Reale stated above), which I've solved here with a heredoc (EOF) that lets bash expand its variables before the string is passed to expect -c. The flip side is that all native expect variables need to be escaped with \$ (I'm not here to solve all first-world problems; my afternoon schedule is slightly crammed), but this should greatly simplify the problem with little effort. Let me know if you find any issues with this proof of concept.
tl;dr: I've been creating an expect daemon script with user authentication and just figured this out, after spending a whole day on separate bash/expect scripts: encrypting my prompted password (via bash) with a different /dev/random salt on each iteration, saving the encrypted password to a temp file, and passing the salt to the expect script (which strongly discourages anyone from easily discovering the password via ps, but doesn't prevent it, since the expect script could be replaced). Now I should be able to keep it in memory instead.

bash script read pipe or argument

I want my script to read a string either from stdin, if it's piped, or from an argument. So first I want to check whether some text was piped in, and if not, it should use an argument as input. My code looks something like this:
value=$(cat) # read from stdin
if [ "$value" != "" ]; then # check that stdin was not empty
#Do something with pipe string
else
#Do something with argument string
fi
The problem is that when nothing is piped, the script halts and waits for Ctrl-D, and I don't want that. Any suggestions on how to solve this?
Thanks in advance.
/Tomas
What about checking the argument first?
if (($#)) ; then
process "$1"
else
cat | process
fi
Or, just take advantage of the same behaviour of cat:
cat "$#" | process
If you only need to know if it's a pipe or a redirection, it should be sufficient to determine if stdin is a terminal or not:
if [ -t 0 ]; then
# stdin is a tty: process command line
else
# stdin is not a tty: process standard input
fi
[ (aka test) with -t is equivalent to the libc isatty() function.
The above will work with both something | myscript and myscript < infile. This is the simplest solution, assuming your script is for interactive use.
The [ command is a builtin in bash and some other shells, and since [/test with -t is in POSIX, it's portable too (not relying on Linux, bash, or GNU utility features).
There's one edge case: test -t also returns false if the file descriptor is invalid, though it would take some slight adversity to arrange that. test -e will detect this, assuming you have a filename such as /dev/stdin to use.
The POSIX tty command can also be used, and handles the adversity above. It will print the tty device name and return 0 if stdin is a terminal, and will print "not a tty" and return 1 in any other case:
if tty >/dev/null ; then
# stdin is a tty: process command line
else
# stdin is not a tty: process standard input
fi
(with GNU tty, you can use tty -s for silent operation)
A less portable way, though certainly acceptable on a typical Linux, is to use GNU stat with its %F format specifier, which returns the text "character special file", "fifo" and "regular file" in the cases of terminal, pipe and redirection respectively. stat requires a filename, so you must provide a specially-named file of the form /dev/stdin, /dev/fd/0, or /proc/self/fd/0, and use -L to chase symlinks:
stat -L -c "%F" /dev/stdin
This is probably the best way to handle non-interactive use (since you can't make assumptions about terminals then), or to detect an actual pipe (FIFO) distinct from redirection.
There is a slight gotcha with %F in that you cannot use it to tell the difference between a terminal and certain other device files, for example /dev/zero or /dev/null which are also "character special files" and might reasonably appear. An unpretty solution is to use %t to report the underlying device type (major, in hex), assuming you know what the underlying tty device number ranges are... and that depends on whether you're using BSD style ptys or Unix98 ptys, or whether you're on the actual console, among other things. In the simple case %t will be 0 though for a pipe or a redirection of a normal (non-special) file.
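For illustration, the difference is easy to see interactively (a sketch; the device number shown is an example and varies by system):
$ stat -L -c "%F %t" /dev/stdin
character special file 88
$ echo hi | stat -L -c "%F %t" /dev/stdin
fifo 0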
More general solutions to this kind of problem are to use bash's read with a timeout (read -t 0 ...) or non-blocking I/O with GNU dd (dd iflag=nonblock).
The latter will allow you to detect lack of input on stdin, dd will return an exit code of 1 if there is nothing ready to read. However, these are more suitable for non-blocking polling loops, rather than a once-off check: there is a race condition when you start two or more processes in a pipeline as one may be ready to read before another has written.
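For example, bash's read -t 0 reports, without consuming anything, whether input is already available on stdin (a sketch; bash 4 or later):
if read -t 0; then
IFS= read -r line
echo "stdin has data; first line: $line"
else
echo "no input waiting on stdin"
fi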
It's easier to check for command line arguments first and fallback to stdin if no arguments. Shell Parameter Expansion is a nice shorthand instead of the if-else:
value=${*:-$(cat)}
# do something with $value
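Putting the terminal test and the argument fallback together, a minimal version of the whole script might look like this (echo stands in for whatever processing you do):
#!/bin/bash
if [ -t 0 ]; then
value=$1 # stdin is a terminal: take the argument
else
value=$(cat) # stdin is piped or redirected: read it
fi
echo "got: $value"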

How can expect interpret an escaped character as a command character

I'd like to be able to pass a long command to expect. It's effectively multiple commands in one. First, here's my expect script:
#!/usr/bin/expect -f
set timeout -1
spawn telnet xxx.xxx.xxx.xxx
expect "*?username:*"
send "someusername\r"
expect "*?assword:*"
send "somepassword\r"
# Here's the command I'd like to pass from the command prompt
set command [lindex $argv 0]
send "$command\r"
send "exit\r"
I would then run this script as so:
./expectscript "mkdir /usr/local/dir1\ncd /usr/local/dir1\ntouch testfile"
Notice that I put "\n" to simulate pressing Enter, as though each command is processed before moving to the next.
I know you could separate the commands with ";", but for this particular exercise I'd like expect to interpret the "\n" as a "\r", so that expect would behave as though it were like this:
send "mkdir /usr/local/dir1\r"
send "cd /usr/local/dir1\r"
send "touch testfile\r"
The question then becomes: how can expect interpret the "\n" as a "\r"? I've tried putting "\r" in the argument instead of "\n", but that doesn't work.
Thanks for the input.
When I do a simple experiment, I find that the \n in the argument is not converted by my shell (bash) into a newline; it remains a literal. You can check this out for yourself by just using puts to print out the command line argument, like this:
puts [lindex $argv 0]
Working around this requires a little bit of work to split things. Alas, Tcl's split command does not split on multi-character sequences (it splits on many different characters at once instead), so we'll need a different approach. However, Tcllib has exactly what we need: the splitx command. With that, we do this (based on @tensaix2j's answer):
#!/usr/bin/expect -f
package require Expect;   # Good practice to put this explicitly
package require textutil::split; # Part of Tcllib
# ... add your stuff here ...
foreach line [textutil::split::splitx [lindex $argv 0] {\\n}] {
send "$line\r"
# Wait for response and/or prompt?
}
# ... add your stuff here ...
If you don't have Tcllib installed and configured for use with Expect, you can also snarf the code for splitx directly out of Tcllib (it's easy to find online) as long as you internally acknowledge the license it's under (standard Tcl licensing rules).
foreach cmd [split $command \n] {
send "$cmd\r\n"
}
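If you'd rather avoid the Tcllib dependency entirely, Tcl's built-in string map can turn the literal \n sequences into real newlines before splitting (a sketch; the "*#" prompt wait is an assumption, adjust it to your device):
set command [lindex $argv 0]
foreach line [split [string map [list \\n \n] $command] \n] {
send "$line\r"
expect "*#"
}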

Is it possible to make stdout and stderr output be of different colors in XTerm or Konsole?

Is it even achievable?
I would like the output from a command’s stderr to be rendered in a different color than stdout (for example, in red).
I need such a modification to work with the Bash shell in the Konsole, XTerm, or GNOME Terminal terminal emulators on Linux.
Here's a solution that combines some of the good ideas already presented.
Create a function in a bash script:
color() ( set -o pipefail; "$@" 2>&1>&3 | sed $'s,.*,\e[31m&\e[m,' >&2 ) 3>&1
Use it like this:
$ color command -program -args
It will show the command's stderr in red.
Keep reading for an explanation of how it works. There are some interesting features demonstrated by this command.
color()... — Creates a bash function called color.
set -o pipefail — This is a shell option that preserves the error return code of a command whose output is piped into another command. This is done in a subshell, which is created by the parentheses, so as not to change the pipefail option in the outer shell.
"$#" — Executes the arguments to the function as a new command. "$#" is equivalent to "$1" "$2" ...
2>&1 — Redirects the stderr of the command to stdout so that it becomes sed's stdin.
>&3 — Shorthand for 1>&3, this redirects stdout to a new temporary file descriptor 3. 3 gets routed back into stdout later.
sed ... — Because of the redirects above, sed's stdin is the stderr of the executed command. Its function is to surround each line with color codes.
$'...' — A bash construct that causes it to understand backslash-escaped characters.
.* — Matches the entire line.
\e[31m — The ANSI escape sequence that causes the following characters to be red
& — The sed replace character that expands to the entire matched string (the entire line in this case).
\e[m — The ANSI escape sequence that resets the color.
>&2 — Shorthand for 1>&2, this redirects sed's stdout to stderr.
3>&1 — Redirects the temporary file descriptor 3 back into stdout.
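For example, a quick way to see it in action (the command and paths are arbitrary):
$ color ls /nonexistent /tmp
The error line for /nonexistent comes out in red, while the normal listing keeps its usual color.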
Here's an extension of the same concept that also makes STDOUT green:
function stdred() (
set -o pipefail;
(
"$#" 2>&1>&3 |
sed $'s,.*,\e[31m&\e[m,' >&2
) 3>&1 |
sed $'s,.*,\e[32m&\e[m,'
)
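It is used the same way as color above, e.g. stdred make: regular output comes out green and error output red.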
You can also check out stderred: https://github.com/sickill/stderred
I can't see that there is any way for the terminal emulator to do this.
The interface between the terminal emulator and the shell/app is via a pseudo-tty, where the terminal emulator is on the master side and the shell/app on the other. The shell/app has both stdout and stderr connected to the same pty, so when the terminal emulator reads from the pty for the shell/app's output, it can no longer tell which was written to stdout and which to stderr.
You will have to use one of the solutions that intercepts the data between the application and the slave-pty and inserts escape codes to control the terminal output colo(u)r.
Here is a little Awk script that will print everything you pass it in red.
#!/usr/bin/awk -f
{ printf("%c[%dm%s%c[0m\n", 0x1B, 31, $0, 0x1B); fflush() }
It simply prints each line it receives on stdin within the necessary escape codes to display it in red. It is followed by an escape code to reset the terminal.
(If you need a different color, change the second argument in the above printf call from 31 to the number corresponding to the desired color.)
Save it to colr.awk, do a chmod a+x, and use it like so:
$ my_program | ./colr.awk
It has the drawback that lines may not be displayed in order, because stderr goes directly to the console, while stdout is piped through an additional process.
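If you want to color only stderr and leave stdout untouched, the same script can be combined with bash process substitution (a sketch; bash-specific):
$ my_program 2> >(./colr.awk)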
A simple solution to color stdout in red is to pipe it through grep with color output enabled:
program | grep --color .
This should not require installing anything, as grep should be already installed everywhere.
Taken from Dennis’s comment on superuser.com.
I think you should use the standard escape sequences on stderr.
Hilite will do this. It's a lightweight solution, but you have to invoke it for each command, e.g. hilite gcc myprog.c. A more radical approach is built into my experimental shell Gush, which shows stderr from all commands run in red and stdout in black. Either way is very useful for software builds, where you have lots of output and a few error messages that could easily be missed if not highlighted.
