Passing multiple variables from a Bash Script to an Expect Script - linux

I've been trying to get an expect/bash script that can read each line of a CSV file and pull both the hostname and the password, as these are different for each MikroTik I am trying to access.
I've recently sent an auto.rsc file to several thousand MikroTik routers that are being used as a residential solution. The file filled up the HDD (it included an IP scan whose log managed to do the deed), which prevents me from sending additional auto.rsc files to purge the logs, as there is no room left.
The solution I came up with was to use an expect script to log in to these and delete the auto.log file. This was successful with my RSA-key script:
set timeout 3
set f [open "dynuList.txt"]
set dynu [split [read $f] "\n"]
close $f
foreach dynu $dynu {
spawn ssh -o "StrictHostKeyChecking no" -i mtk4.key admin+t@$dynu
expect {
"> " { send "\:do \{ file remove push.auto.log \} on-error\=\{ \[\] \}\r" }
"Connection refused" { catch {exp_close}; exp_wait; continue }
eof { exp_wait; continue }
}
expect ".*"
close
wait
}
The script I am having issues with is as follows:
n=`wc -l hostPasswordDynuList.csv | awk '{print$1}'`
i=1
while [ $i -le $n ]
do
host='awk -F "," 'NR==$i {print $1}' hostPasswordDynuList.csv'
password='awk -F "," 'NR==$i {print $2}' hostPasswordDynuList.csv'
./removeLogExpect.sh $host $password
i=`expr $i + 1`
done
Which should pass variables to this expect script
#!/usr/bin/bash/expect -f
set timeout 3
set host [lindex $argv 0]
set password [lindex $argv 1]
spawn ssh -o "StrictHostKeyChecking no" admin+t@$host
expect {
"password: " { send $password"\r" }
"Connection refused" { catch {exp_close}; exp_wait; continue }
eof { exp_wait; continue }
}
expect {
".*" { send "\:do \{ file remove push.auto.log \} on-error\=\{ \[\] \}\r" }
}
expect ".*"
close
wait
I was hoping that the script would connect and log in to each MikroTik that didn't have RSA keys set up, and then send the command to clear out the auto.log file. As it stands, the script doesn't seem to be passing the variables to the expect half whatsoever. Any help would be appreciated.
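As an aside on the shell wrapper: the single quotes around the awk commands prevent both command substitution and the expansion of $i. A minimal sketch of what the loop presumably intended, using a plain read loop and made-up sample data (the hostnames and passwords below are illustrative only):

```shell
#!/bin/bash
# Write a throwaway CSV in the same host,password layout as the question.
csv=$(mktemp)
printf 'r1.example.net,secret1\nr2.example.net,secret2\n' > "$csv"

# Read each line, splitting on the comma -- no awk or line counter needed.
while IFS=, read -r host password; do
    # ./removeLogExpect.sh "$host" "$password"   # the real call would go here
    echo "host=$host password=$password"
done < "$csv"
```

The same pairs could of course be produced with `$(awk ...)` command substitution, but `read` with `IFS=,` avoids re-scanning the file once per line.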

expect is an extension of the Tcl language, which is a fully featured programming language: it can read files and parse comma-separated fields. There's no need for an inefficient shell script to invoke your expect program multiple times.
#!/usr/bin/expect -f
set timeout 3
set file hostPasswordDynuList.csv
set fh [open $file r]
while {[gets $fh line] != -1} {
lassign [split $line ,] host password
spawn ssh -o "StrictHostKeyChecking no" admin+t@$host
expect {
"password: " { send "$password\r" }
"Connection refused" {
catch {exp_close}
exp_wait
continue
}
eof {
exp_wait
continue
}
}
expect ".*"
send ":do { file remove push.auto.log } on-error={ \[\] }\r"
expect ".*"
exp_close
exp_wait
}
close $fh
See https://tcl.tk/man/tcl8.6/TclCmd/contents.htm for documentation on Tcl's builtin commands.
The line expect ".*" is probably not doing what you think it does: the default pattern matching style is glob, so .* looks for a literal dot followed by any number of characters. You might be thinking of the regular expression "any character zero or more times" for which you would need to add the -re option.
However, the key to robust expect code is to expect more specific patterns.
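As a loose shell analogy (case statements use the same glob rules), a pattern of .* only matches strings that begin with a literal dot:

```shell
#!/bin/bash
# Glob semantics demo: ".*" means a literal dot followed by anything,
# not "any character zero or more times" as in a regular expression.
match_glob() {
    case "$1" in
        .*) echo yes ;;
        *)  echo no ;;
    esac
}
match_glob "hello"    # no  (no leading dot)
match_glob ".hidden"  # yes (starts with a literal dot)
```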

Related

Expect Script to Throw a Message if the SSH Fails

I am trying to run a script to SSH to multiple servers, run a few commands, and show the results. I managed to write a script that skips to the next server if the SSH fails, but I couldn't make it print a message or comment when the SSH fails. Someone, please help. Below is my script.
#!/usr/bin/expect
set timeout -1
set hostlist [open ./ddlicall.txt]
set ipaddrs [read $hostlist]
foreach line [split $ipaddrs \n] {
spawn ssh -o ConnectTimeout=10 -o LogLevel=quiet sysadmin@$line
expect {
"assword:"
{
send "MY-PASSWORD\r"
expect {
"sysadmin@"
{
send "elicense license-server show\r"
expect "sysadmin@"
send "exit\r"
}
"assword:" {
send \x03
puts "\nIncorrect Password\n"
expect eof
}
}
}
"send:"
{
puts "\nSSH Issue\n"
# expect eof
}
}
}
I suspect all you need to do is change "send:" to eof. If ssh fails, then the spawned process exits and you can expect to see the eof pattern.
Another small issue I see: read will read the whole file, including the trailing newline at the end of the file. Then [split $ipaddrs \n] will create a list with an empty string as the last element. That empty $line value will cause an error for the last iteration of the loop.
To fix that, there are 2 choices:
set ipaddrs [read -nonewline $hostlist] or
foreach line [split [string trim $ipaddrs] \n] {
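The same off-by-one is easy to reproduce in the shell. As a rough analogy (mapfile already drops the final newline, so an explicit blank line stands in for Tcl's trailing empty element):

```shell
#!/bin/bash
# mapfile splits input into lines; a trailing blank line becomes an empty
# final element, like Tcl's [split [read $fh] \n] on a newline-terminated file.
mapfile -t with_blank < <(printf 'host1\nhost2\n\n')
mapfile -t trimmed    < <(printf 'host1\nhost2\n')
echo "${#with_blank[@]} vs ${#trimmed[@]} elements"
```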

Cygwin issue with expect - spawn id exp64 not open

I have a problem when I run a Tcl/expect script from Cygwin.
When it gets to the 64th session, I get the following error (debug output):
spawn ssh useri@1.1.1.1
parent: waiting for sync byte
parent: telling child to go ahead
parent: now unsynchronized from child
spawn: returns {8972}
expect: does "" (spawn_id exp64) match glob pattern "unsuccessful login"? no
"Can't ping useri@1.1.1.1! Quitting."? no
"refused"? no
"Bad IP address"? no
"computer name"? no
"Usage of telnet deprecated, instead use"? no
"Invalid input detected at"? no
"timed out"? no
"o route to host"? no
"closed"? no
"ermission denied"? no
"estination unreachable"? no
"imeout expired"? no
"(NO/YES)"? no
"continue connecting"? no
"Username: useri"? no
"ser:"? no
"sername:"? no
"assword:"? no
expect: does "" (spawn_id exp64) match glob pattern "useri@server%"? no
"useri@server:*%"? no
"useri@server>"? no
"useri@server"? no
"server "? no
"server\r"? no
"server>"? no
"server"? no
expect: does "" (spawn_id exp64) match glob pattern "#asd"? no
expect: read eof
expect: set expect_out(spawn_id) "exp64"
expect: set expect_out(buffer) ""
exp_i_parse_states: : spawn id exp64 not open: spawn id exp64 not open
while executing
"expect {
# Special cases, switch to different login
-i $sid "unsuccessful login" { exp_continue }
-i $sid "Can't ping $Username@$Ho..."
(procedure "connecthor" line 5)
invoked from within
"connecthor $Servername($i) $Server($i) $Serveruser($i) $Serverpass($i) $Option $sid $Enablepass"
(procedure "dialin" line 44)
invoked from within
"dialin $Routername $L_user"
("-(checker)" arm line 31)
invoked from within
"switch -regexp -nocase -- [lindex $argv 0] \
-(initpass) {
puts "\r\rThis is the initial password manager module of to script."
puts "WARNIN..."
(file "/usr/bin/to" line 366)
The script works fine, doing its job until it reaches session exp64. I've also tried running the script natively under Ubuntu instead of Cygwin; it worked fine, without any error.
I'm experiencing this issue with other scripts as well.
I've already tried to re-install Cygwin, same issue appeared.
Do you maybe have an idea what could it be?
In the meantime:
I just wrote a minimal script that SSHes to a Cisco router, sends a command, and exits. It runs in an endless loop, so I can log the $spawn_id as it goes:
#!/usr/bin/expect
set i 1
while {1} {
spawn ssh username@1.1.1.1
set sid $spawn_id
expect "assword:"
send "Start123\r"
expect "Router"
send "show ip int brief\r"
while {1} {
expect {
"Router" {send "exit\r"}
eof {break}
}
}
puts "*** Loop is running: $i, spawn_id is: $sid ***"
incr i
}
I get the same error with this as well:
*** Loop is running: 60, spawn_id is: exp63 ***
spawn ssh username@1.1.1.1
send: spawn id exp64 not open
There's a limit on the number of simultaneous expect sessions that can be maintained. That limit varies by OS, and is (on some platforms at least) a global limit, not a process-local limit. Though since you're on Windows, there might also be per-process limits that you're hitting; I'm not entirely sure (though 64 sounds exactly like one of those). In any case, it sounds like you're hitting resource exhaustion.
Are you remembering to close once you're finished interacting with a particular host? If you've got lots of old expect sessions about in the process, you'll be leaking a pretty precious pool of resources.
The problem is that you did not wait for the exited ssh process. Without the wait, an exited ssh process shows up as defunct, still consuming the PID and possibly some FDs. You can try it like this:
while {1} {
expect {
"Router" {send "exit\r"}
eof { wait -nowait; break }  ;# changed line
}
}
puts "*** Loop is running: $i, spawn_id is: $sid ***"
incr i
or
while {1} {
expect {
"Router" {send "exit\r"}
eof { break }
}
}
wait -nowait  ;# added line
puts "*** Loop is running: $i, spawn_id is: $sid ***"
incr i

How can I call an expect script with a sequence of arguments

I have an expect script that looks like:
#!/usr/bin/expect
set path_start [lindex $argv 0]
set host [lindex $argv 1]
spawn ssh root@$host telnet jpaxdp
expect {\-> }
set fh [open ${path_start}${host} r]
while {[gets $fh line] != -1} {
send "$line\r"
expect {\-> }
}
close $fh
send "exit\r"
expect eof
and I call it like ./script.sh cmds_ cc1. Now my hosts are numbered 1-8, and I tried to call the script like ./script.sh cmds_ cc[1-8], but that didn't work: the script interpreted cc[1-8] literally as the argument and showed me:
spawn ssh root@cc[1-8] telnet jpaxdp
ssh: Could not resolve hostname cc[1-8]: Name or service not known
couldn't open "cmds_cc[1-8]": no such file or directory
while executing
"open ${path_start}${host} r"
invoked from within
"set fh [open ${path_start}${host} r]"
(file "./script.sh" line 7)
How can I make this work?
cc[1-8] is a filename wildcard: it looks for files that match that pattern, and if there aren't any, the wildcard itself is kept in the argument list. To get a range of numbers, use brace expansion, cc{1..8}. And to run the command repeatedly, you need a for loop.
for host in cc{1..8}
do
./script.sh cmds_ "$host"
done
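The difference is easy to see with echo in an empty directory (brace expansion is purely textual, while an unmatched glob is passed through untouched, assuming default shell options):

```shell
#!/bin/bash
cd "$(mktemp -d)"   # empty directory, so the glob cannot match any file
echo cc{1..8}       # brace expansion: cc1 cc2 cc3 cc4 cc5 cc6 cc7 cc8
echo cc[1-8]        # unmatched glob: passed through literally
```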

TCL, expect: Multiple files w/ SCP

I was able to transfer files with scp and expect; then I tried to upload several files at once:
#!/usr/bin/expect -f
# Escapes spaces in a text
proc esc text {
return [regsub -all {\ } $text {\\&}]
}
# Uploads several files to a specified server
proc my_scp_multi {ACCOUNT SERVER PW files newfolder} {
set timeout 30
send_user -- "\n"
spawn scp $files $ACCOUNT@$SERVER:[esc $newfolder]
match_max 100000
# Look for password prompt
expect {
-re ".*Connection closed.*" {
sendError "\n\n\nUpload failed!\nPlease check the errors above and start over again.\nThis is most likely induced by too many wrong password-attempts and will last quite a time!"
}
-re ".*Permission denied.*" {
sendError "\n\n\nUpload failed!\nPlease check the errors above and start over again.\nYou entered most likely a wrong password!"
}
-re ".*Are.*.*yes.*no.*" {
send "yes\n"
exp_continue
#look for the password prompt
}
-re ".*sword.*" {
# Send password aka $PW
send -- "$PW\r"
# send blank line (\r) to make sure we get back to gui
send -- "\r\n"
exp_continue
}
send_user -- "Upload successful!\n"
}
set timeout -1
}
When I want to upload several files, the sh command is:
scp $a $b $c user@server:$folder, so I called my_scp_multi "ACCOUNT" "SERVER" "PW" "~/testfileA ~/testfileB ~/testfileC" "~/test/", which produces this output:
spawn scp ~/testfileA ~/testfileB ~/testfileC user@server:~/test/
user@server's password:
~/testfileA ~/testfileB ~/testfileC: No such file or directory
It seems to treat "~/testfileA ~/testfileB ~/testfileC" as one file. But when I copy-paste scp ~/testfileA ~/testfileB ~/testfileC user@server:~/test/ into the console, it works fine!
What am I doing wrong? I've tried "\"~/testfileA\" \"~/testfileB\" \"~/testfileC\"" and such things, but nothing worked at all.
Any ideas or suggestions?
EDITS
P.S.: I'm transferring rather small files. Building up a connection is the biggest part of the transfer. This is the reason I want it to be done in ONE scp.
P.P.S.:
I played around a little and came up with:
my_scp_multi3 "user" "server" "pw" "~/a\ b/testfileA, ~/a\\ b/testfileB, ~/a\\\ b/testfileC" "~/test"
with your first solution but {*}[split $files ","] and
my_scp_multi2 "user" "server" "pw" "~/a b/testfileA" "~/a\ b/testfileB" "~/a\\ b/testfileC" "~/test"
with your second solution. This prints:
~/a b/testfileA: No such file or directory
~/a\ b/testfileB: No such file or directory
~/a\ b/testfileC: No such file or directory
and
~/a b/testfileA: No such file or directory
~/a b/testfileB: No such file or directory
~/a\ b/testfileC: No such file or directory
(BTW: I of course moved the files :) )
Thanks for all the answers; here is my solution:
I used \0 (the null byte) as the separator, because apart from / it is the only character that may not appear in a filename.
#!/usr/bin/expect -f
# Escapes spaces in a text
proc esc text {
return [regsub -all {\ } $text {\\&}]
}
# Returns the absolute Filepath
proc makeAbsolute {pathname} {
file join [pwd] $pathname
}
proc addUploadFile {files f} {
if {$files != ""} {
set files "$files\0"
}
return "$files[makeAbsolute $f]"
}
#Counts all files from an upload-list
proc countUploadFiles {s} {
set rc [llength [split $s "\0"]]
incr rc -1
return $rc
}
# Uploads several files from a list (created by addUploadFile) to a specified server
proc my_scp_multi {ACCOUNT SERVER PW files newfolder} {
foreground blue
set nFiles [countUploadFiles $files]
set timeout [expr $nFiles * 60]
send_user -- "\n"
spawn scp -r {*}[split $files "\0"] $ACCOUNT@$SERVER:[esc $newfolder]
match_max 100000
# Look for password prompt
expect {
-re ".*Connection closed.*" {
sendError "\n\n\nUpload failed!\nPlease check the errors above and start over again.\nThis is most likely induced by too many wrong password-attempts and will last quite a time!"
}
-re ".*Permission denied.*" {
sendError "\n\n\nUpload failed!\nPlease check the errors above and start over again.\nYou entered most likely a wrong password!"
}
-re ".*Are.*.*yes.*no.*" {
send "yes\n"
exp_continue
#look for the password prompt
}
-re ".*sword.*" {
# Send password aka $PW
send -- "$PW\r"
# send blank line (\r) to make sure we get back to gui
send -- "\r\n"
exp_continue
}
send_user -- "Upload successful!\n"
}
set timeout -1
}
set fls [addUploadFile "" "a b/testfileA"]
set fls [addUploadFile $fls "a b/testfileB"]
set fls [addUploadFile $fls "a b/testfileC"]
my_scp_multi "user" "server" "pw" $fls "~/test"
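The NUL-separated idea carries over to plain bash as well, where read -r -d '' consumes up to each null byte; a small sketch with hypothetical names containing spaces:

```shell
#!/bin/bash
# Iterate over a NUL-separated list; spaces in names survive intact.
count=0
while IFS= read -r -d '' f; do
    count=$((count + 1))
    echo "file $count: $f"
done < <(printf '%s\0' "a b/testfileA" "a b/testfileB" "a b/testfileC")
echo "total: $count"
```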
You don't want to send the filenames as a single string. Either do this:
spawn scp {*}[split $files] $ACCOUNT@$SERVER:[esc $newfolder]
And continue to quote the filenames:
my_scp_multi "ACCOUNT" "SERVER" "PW" "~/testfileA ~/testfileB ~/testfileC" "~/test/"
or do this:
proc my_scp_multi {ACCOUNT SERVER PW args} {
set timeout 30
send_user -- "\n"
set files [lrange $args 0 end-1]
set newfolder [lindex $args end]
spawn scp {*}$files $ACCOUNT@$SERVER:[esc $newfolder]
And then do not quote the filenames
my_scp_multi "ACCOUNT" "SERVER" "PW" ~/testfileA ~/testfileB ~/testfileC "~/test/"
The splat ({*}) splits the list up into its individual elements so the spawn command sees several words, not a single word. See http://tcl.tk/man/tcl8.5/TclCmd/Tcl.htm
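The shell has the same distinction, for comparison: an unquoted scalar undergoes word splitting, while a quoted array expansion yields one word per element:

```shell
#!/bin/bash
count_args() { echo $#; }

one_string="fileA fileB fileC"
count_args "$one_string"     # 1: the whole string is a single word
count_args $one_string       # 3: unquoted, split on whitespace (fragile)

files=("file A" "file B" "file C")
count_args "${files[@]}"     # 3: one word per element, spaces preserved
```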
You could spawn a shell and then run the scp command instead:
spawn bash
send "scp $files $ACCOUNT#$SERVER:[esc $newfolder]\r"
This allows for glob expansion but adds extra housekeeping as you will need to trap when the scp process is completed, as you still have a shell running.
You could add the following to your expect block:
-re "100%" {
if { $index < $count } {
set index [expr $index + 1]
exp_continue
}
}
Where index is the number of the file being transferred and count is the number of files.
You should be using SSH public key authentication instead of typing in the password with expect. When it's set up properly, scp will work without any human input of passwords while keeping the system very secure. You will be free from all the troubles with expect.
How do I setup Public-Key Authentication?
http://www.ece.uci.edu/~chou/ssh-key.html
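For reference, generating the key pair itself is non-interactive with -N '' (the paths below are illustrative; installing the public key with ssh-copy-id still asks for the password once):

```shell
#!/bin/bash
# Generate an ed25519 key pair into a scratch directory.
dir=$(mktemp -d)
ssh-keygen -t ed25519 -N '' -f "$dir/id_ed25519" -C "demo key" >/dev/null

# Real-world follow-up (not run here):
#   ssh-copy-id -i "$dir/id_ed25519.pub" user@server
#   ssh -i "$dir/id_ed25519" user@server
ls "$dir"
```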
If there's some reason why you cannot use pubkey, you may find sftp useful because it accepts a batch command file with -b batchfile; see man 1 sftp. (Though it's not a very good solution when expect can actually split the arguments.)

Efficient transfer of console data, tar & gzip/ bzip2 without creating intermediary files

Linux environment. So, we have this program 't_show', which, when executed with an ID, will write price data for that ID to the console. There is no other way to get this data.
I need to copy the price data for IDs 1-10,000 between two servers, using minimum bandwidth, minimum number of connections. On the destination server the data will be a separate file for each id with the format:
<id>.dat
Something like this would be the long-winded solution:
dest:
files=`seq 1 10000`
for id in `echo $files`;
do
./t_show $id > $id
done
tar cf - $files | nice gzip -c > dat.tar.gz
source:
scp user#source:dat.tar.gz ./
gunzip dat.tar.gz
tar xvf dat.tar
That is, write each output to its own file, compress & tar, send over network, extract.
It has the problem that I need to create a new file for each id. This takes up tonnes of space and doesn't scale well.
Is it possible to write the console output directly to a (compressed) tar archive without creating the intermediate files? Any better ideas (maybe writing compressed data directly across network, skipping tar)?
The tar archive would need to extract as I said on the destination server as a separate file for each ID.
Thanks to anyone who takes the time to help.
You could just send the data formatted in some way and parse it on the receiver.
foo.sh on the sender:
#!/bin/bash
for (( id = 0; id <= 10000; id++ ))
do
data="$(./t_show $id)"
size=$(wc -c <<< "$data")
echo $id $size
cat <<< "$data"
done
On the receiver:
ssh -C user@server 'foo.sh' | while read file size; do
dd of="$file" bs=1 count="$size"
done
ssh -C compresses the data during transfer
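The sender/receiver pair can be tried locally by replacing the ssh with a plain pipe; a sketch with a stand-in t_show (the dd bs=1 trick is the same as above, reading exactly size bytes per record):

```shell
#!/bin/bash
out=$(mktemp -d)

# Stand-in for ./t_show: fakes price data for an id (assumption).
t_show() { printf 'price data for id %s\n' "$1"; }

sender() {
    for id in 1 2 3; do
        data="$(t_show "$id")"
        size=$(wc -c <<< "$data")   # counts the newline that <<< re-appends
        echo "$id $size"
        cat <<< "$data"
    done
}

# Receiver: read each "id size" header, then copy exactly $size bytes.
sender | while read -r file size; do
    dd of="$out/$file.dat" bs=1 count="$size" 2>/dev/null
done
cat "$out/2.dat"
```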
You can at least tar stuff over a ssh connection:
tar -czf - inputfiles | ssh remotecomputer "tar -xzf -"
How to populate the archive without intermediary files however, I don't know.
EDIT: Ok, I suppose you could do it by writing the tar file manually. The header is specified here and doesn't seem too complicated, but that isn't exactly my idea of convenient...
I don't think this will work with a plain bash script, but you could have a look at the Archive::Tar module for Perl or other scripting languages.
The Perl module has an add_data function to create a "file" on the fly and add it to the archive for streaming across the network.
You can do better without tar:
#!/bin/bash
for id in `seq 1 1000`
do
./t_show $id
done | gzip
The only difference is that you will not get the boundaries between different IDs.
Now put that in a script, say show_me_the_ids, and do from the client
ssh user@source ./show_me_the_ids | gunzip
And there they are!
Alternatively, you can pass the -C flag to compress the SSH connection and drop the gzip/gunzip steps altogether.
If you are really into it, you may try ssh -C, gzip -9, and other compression programs.
Personally, I'd bet on lzma -9.
I would try this:
(for ID in $(seq 1 10000); do echo $ID: $(./t_show $ID); done) | ssh user@destination "ImportscriptOrProgram"
This will print "1: ValueOfID1" to standard out, which is transferred via ssh to the destination host, where you can start your import script or program, which reads the lines from standard in.
HTH
Thanks all
I've taken the advice 'just send the data formatted in some way and parse it on the the receiver', it seems to be the consensus. Skipping tar and using ssh -C for simplicity.
Perl script. It breaks the ids into groups of 1000. The IDs are source_id in a hash table. All data is sent via a single ssh, delimited by 'HEADER', so it writes to the appropriate file. This is a lot more efficient:
sub copy_tickserver_files {
my $self = shift;
my $cmd = 'cd tickserver/ ; ';
my $i = 1;
while ( my ($source_id, $dest_id) = each ( %{ $self->{id_translations} } ) ) {
$cmd .= qq{ echo HEADER $source_id ; ./t_show $source_id ; };
$i++;
if ( $i % 1000 == 0 ) {
$cmd = qq{ssh -C dba\@$self->{source_env}->{tickserver} " $cmd " | };
$self->copy_tickserver_files_subset( $cmd );
$cmd = 'cd tickserver/ ; ';
}
}
$cmd = qq{ssh -C dba\@$self->{source_env}->{tickserver} " $cmd " | };
$self->copy_tickserver_files_subset( $cmd );
}
sub copy_tickserver_files_subset {
my $self = shift;
my $cmd = shift;
my $output = '';
open TICKS, $cmd;
while(<TICKS>) {
if ( m{HEADER [ ] ([0-9]+) }mxs ) {
my $id = $1;
$output = "$self->{tmp_dir}/$id.ts";
close TICKSOP;
open TICKSOP, '>', $output;
next;
}
next unless $output;
print TICKSOP "$_";
}
close TICKS;
close TICKSOP;
}
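The HEADER demultiplexing on the receiving side can also be sketched with awk instead of Perl (same stream shape, made-up tick data):

```shell
#!/bin/bash
out=$(mktemp -d)

# Fake stream in the same "echo HEADER id ; data" shape as above.
printf 'HEADER 7\ntick 1\ntick 2\nHEADER 8\ntick 3\n' |
awk -v dir="$out" '
    /^HEADER / { file = dir "/" $2 ".ts"; next }   # switch output file
    file != "" { print > file }                    # append data lines
'
cat "$out/8.ts"
```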
