I have an expect script like this
#!/usr/bin/expect -f
set timeout 30
log_user 0
set PASSWORD $::env(PASSWORD)
set USERNAME $::env(USERNAME)
set TOKEN $::env(TOKEN)
puts stderr "Generating OTP"
spawn oathtool --totp $TOKEN
expect -re \\d+
set otp $expect_out(0,string)
puts stderr "Connecting to VPN server"
spawn -ignore HUP env openconnect -b https://vpn
expect "GROUP:"
send "Tech\n"
expect "Username:"
send "$USERNAME\n"
expect "Password:"
send "$PASSWORD\n"
expect "Password:"
send "$otp\n"
expect EOF
This simple script provides the username and password to openconnect to establish a new VPN connection in the background, but it doesn't work because the spawned child processes are killed by expect. As far as I understand, expect sends a SIGHUP signal before it finishes. I tried to work around that, but even with the -ignore HUP flag the underlying process is still killed. I would like my script to end while the backgrounded openconnect survives.
Do you know what is missing here?
Take into account that openconnect -b forks and continues under a new PID of its own.
The following method, using two small scripts, worked for me:
The -b flag of openconnect is not used; the kill command is used instead to push openconnect into the background.
Contents of the file named vpn2:
#!/usr/bin/expect -f
set timeout -1
spawn -ignore HUP -noecho /root/bin/v2vpn2
expect "password"
sleep 3
send -- "my_password\r"
expect "SMS OTP"
interact
expect "Established"
expect eof
Contents of the file named v2vpn2:
#!/bin/bash
rm /var/log/vpn2.log > /dev/null 2>&1
touch /var/log/vpn2.log
# the word password is printed twice and so filtering here
tail -f /var/log/vpn2.log | grep -m2 -wo "password" | sed '2q;d' &
tail -f /var/log/vpn2.log | grep --color=never -wo "SMS OTP" &
while /bin/true; do
grep -q "Established" /var/log/vpn2.log
if (( $? == 0 )); then
kill -STOP `pgrep openconnect`
kill -CONT `pgrep openconnect`
pkill vpn2
exit
fi
done &
openconnect -u "my_user_name" my_vpn_url >> /var/log/vpn2.log 2>&1
After spending too much time on this, I solved it by adding
expect -timeout -1 -ex "Client killed"
and calling the script with &:
./vpn.exp &
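For reference, here is a sketch of where that extra expect line could go in the tail of the original script; keeping it just before the final expect EOF is my assumption, not something the answer spells out:
expect "Password:"
send "$otp\n"
# wait without a timeout for openconnect's "Client killed" message
expect -timeout -1 -ex "Client killed"
expect EOF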
Below is the script I have. Basically I just want to copy files from the other server by calling this script. Some of the files are large, and what happens is that the first rsync command gets killed before it completes and the script proceeds with the next one. I tried to use the screen command, but I'm not sure how to send Ctrl+a d (to detach) from shell/bash.
HFDIR=/var/opt/ubkp/data/local/prework/hotfixes
RODIR=/var/opt/ubkp/data/local/prework/rollouts
THFDIR=$(ls -t /var/opt/ubkp/data/local | grep hotfix | head -1)
TRODIR=$(ls -t /var/opt/ubkp/data/local | grep rollout | grep -v check | head -1)
user=$(/usr/seos/bin/sewhoami)
if [ $user = "root" ]; then
echo "This script should not be run as the TRUE root user"
echo "Log in so that \"sewhoami\" does not display \"root\" and then execute this script."
exit
else
#list of ROs and HFs
list=/tmp/list.txt
echo -n "Enter Password: "
read -s PWD
# first rsync command
/usr/bin/expect<<EOD
spawn rsync -a $user@server:$HFDIR/* /var/opt/ubkp/data/local/$THFDIR
expect "assword"
send "$PWD\r"
wait $!
expect eof
EOD
# second rsync command
/usr/bin/expect<<EOD
spawn rsync -a $user@server:$RODIR/* /var/opt/ubkp/data/local/$TRODIR
expect "assword"
send "$PWD\r"
expect eof
EOD
fi
exit
Your second rsync will be killed after 10 seconds as that is the default timeout for expect eof. You should add a wait after the send, to wait forever until the process ends.
Also, you should remove the $! from the wait. It is a shell variable, not an expect variable. Fortunately, in this case $! is empty because you have not run any shell commands in the background with &.
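For example, the second heredoc could become something like this (a sketch: set timeout -1 is one way to get the indefinite wait described above, and in the first heredoc wait $! simply becomes wait):
# second rsync command (sketch)
/usr/bin/expect <<EOD
spawn rsync -a $user@server:$RODIR/* /var/opt/ubkp/data/local/$TRODIR
expect "assword"
send "$PWD\r"
set timeout -1   ;# wait as long as the transfer takes, not 10 seconds
expect eof
EOD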
I have a bash script that is supposed to run periodically. The script should connect to a remote SFTP server and get a file from there.
Since this is an SFTP server, I had to use expect within the bash script.
The script runs well when I run it manually but fails when run via crontab.
The problematic function is get_JSON_file().
Please advise...
This is the code:
#!/bin/bash
export xxxxx
export xxxxx
export PATH=xxxxx
check_if_file_is_open(){
while :
do
if ! [[ `lsof | grep file.txt` ]]
then
break
fi
sleep 1
done
}
get_JSON_file(){
/usr/bin/expect -f <(cat << EOF
spawn sftp -P port user@ip
expect "Password:"
send "password\r"
expect "$ "
send "get path/to/file/file.json\r"
send "exit\r"
interact
EOF
)
}
get_JSON_file
check_if_file_is_open
cp file.txt /path/to/destination/folder
Expect's interact works only when stdin is on a tty/pty, but a cron job does not run on a tty/pty. So replace interact with expect eof (or expect -timeout 12345 eof if necessary).
Also, that's a very awkward way to pass expect commands to the expect interpreter. Use a (quoted) heredoc instead, and drop the -f option for expect:
get_JSON_file(){
/usr/bin/expect <<'EOF'
spawn sftp -P port user@ip
expect "Password:"
send "password\r"
expect "$ "
send "get path/to/file/file.json\r"
send "exit\r"
expect eof
EOF
}
The most important tip for debugging expect scripts is to enable expect's diagnostic output. While you're working out the kinks, use
expect -d <<'EOF'
and in the crontab, you'd want to redirect stderr to stdout so you get the debugging output
* * * * * /path/to/script.sh 2>&1
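If you would rather collect that output in a file than rely on cron mailing it to you, something like this works (the log path is just an example of mine):
* * * * * /path/to/script.sh >> /tmp/get_json_file.log 2>&1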
To run a function within a shell script, no parentheses should be used.
Your code then becomes:
#!/bin/bash
export xxxxx
export xxxxx
export PATH=xxxxx
function check_if_file_is_open(){
while :
do
if ! [[ `lsof | grep file.txt` ]]
then
break
fi
sleep 1
done
}
function get_JSON_file(){
/usr/bin/expect -f <(cat << EOF
spawn sftp -P port user@ip
expect "Password:"
send "password\r"
expect "$ "
send "get path/to/file/file.json\r"
send "exit\r"
interact
EOF
)
}
get_JSON_file
check_if_file_is_open
cp file.txt /path/to/destination/folder
Create a new script that starts your script inside screen, and add it to crontab:
new_script.sh
#!/bin/bash
cd script_path
screen -dm -S screen_name ./your_script.sh
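If you later want to look at that session, the standard screen options let you find it and reattach (Ctrl+a d detaches again):
screen -ls               # list running screen sessions
screen -r screen_name    # reattach to the named session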
Introduction
My question is very similar to this one, except that I'd like the output from the command to be redirected to a local file instead of a remote one.
The questioner was asking for a way to retrieve the process ID with a command similar to this one, where the mbuffer command wouldn't cause hanging:
read -r pid < <(ssh 10.10.10.46 "mbuffer -4 -v 0 -q -I 8023 > /tmp/mtest & echo $!"); echo $pid
The answerer responded with the following command to resolve the problem:
read -r pid \
< <(ssh 10.10.10.46 'nohup mbuffer >/tmp/mtest </dev/null 2>/tmp/mtest.err & echo $!')
That is really helpful, but it still places the files on the remote machine, not the local one.
My Attempts
The following is my attempt to capture a log of the output of $command:
read -r PID < <(ssh $remote 'nohup $command >&2 & echo $!' 2> $log)
This sets PID to the process ID properly but doesn't produce a log.
Question
How can I capture a log on my local machine of the stdout of my $command while still assigning PID to the process ID of $command?
Another approach:
{ read -r pid;
  # Do whatever you want with $pid of the process on the remote machine
  cat > my_local_system_log_file
} < <(ssh 10.10.10.46 'mkfifo /tmp/mtest; mbuffer -4 -v 0 -q -I 8023 &> /tmp/mtest & echo $!; cat /tmp/mtest')
Basically, the first line read is the PID, and the lines that follow are the log output from the process.
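As a small follow-up sketch of my own, once $pid has been captured you can use it later to stop the remote process and remove the fifo:
# later, from the local machine: stop the remote mbuffer and clean up the fifo
ssh 10.10.10.46 "kill $pid; rm -f /tmp/mtest"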
When I do the following, I have to press Ctrl-C afterwards or the shell acts weird. The left/right arrow keys, for example, don't move the cursor correctly and the text gets messed up.
# read -r pid < <(ssh 10.10.10.46 'sleep 50 & echo $!') ; echo $pid
2135
# Killed by signal 2.
^C
#
I need this for a script, so I'd like to know why Ctrl-C is needed and whether it is possible to work around it.
Update
It looks like it opens an extra Bash shell, and that is the one that needs to be exited.
The command I am actually interested in is
read -r pid < <(ssh 10.10.10.46 "mbuffer -4 -v 0 -q -I 8023 > /tmp/mtest & echo $!"); echo $pid
Try this instead:
read -r pid \
< <(ssh 10.10.10.46 'nohup mbuffer >/tmp/mtest </dev/null 2>/tmp/mtest.err & echo $!')
Three important changes:
Use of nohup (you could also get a similar effect with the bash built-in disown)
Redirection of stdin and stderr to files (preventing them from holding handles that connect, eventually, to your terminal).
Use of single quotes for the remote command (with double-quotes, expansions happen before ssh is started, so the $! you get is the PID of the most recently started local background process).
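For completeness, a sketch of the disown variant mentioned in the first point; it assumes the remote login shell is bash:
read -r pid \
  < <(ssh 10.10.10.46 'mbuffer >/tmp/mtest </dev/null 2>/tmp/mtest.err & pid=$!; disown; echo $pid')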
I have a script which executes commands in an infinite loop in the background:
<SOMETHING ELSE AT START OF SCRIPT>
cmd='while true;
do
ps aux | head;
sleep 1;
done > $FILE'
ssh root@$SERVER $cmd &
...
...
<SOME OTHER TASKS>
...
...
(At the end of this script, how do I kill the above snippet running on the remote server?)
[Kindly note that I don't want to wait, since the while loop is infinite.]
I have read and tried some posts from Stack Overflow but could not find an exact solution to this problem.
Rather than an infinite loop, use a sentinel file:
cmd='while [ -r /tmp/somefile ];
do
# stuff
done > $FILE'
ssh root@$SERVER touch /tmp/somefile
ssh root@$SERVER $cmd &
# do other stuff
ssh root@$SERVER rm -f /tmp/somefile
This follows your current practice of putting the remote command in a variable, but the arguments against that cited elsewhere should be considered.
If you want to kill the ssh process running in the background at the end of your script, just do:
kill $!
I assume this is the only (or the last) process you started in the background.
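If other jobs might be started in the background later in the script, it is safer to capture the PID right after launching the ssh; a sketch using the names from the question (ssh_pid is just an illustrative variable):
ssh root@$SERVER $cmd &
ssh_pid=$!          # PID of this particular ssh process
# ... other tasks ...
kill "$ssh_pid"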
Try the following sequence:
CTRL+Z
fg
CTRL+C
or
jobs
kill %jobspec
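For example, if the ssh shows up as job 1 (the exact jobs output will vary):
$ jobs
[1]+  Running    ssh root@$SERVER $cmd &
$ kill %1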
To kill everything belonging to the logged-in user, you could try:
whois=`w|grep $user|awk '{print $2}'`;user=root; ssh $user@server -C "ps auwx|grep $whois|awk '{print \$2}'"
This will list all the processes owned by the user you just logged in as - just add |xargs kill -9
whois=`w|grep $user|awk '{print $2}'`;user=root; ssh $user@server -C "ps auwx|grep $whois|awk '{print \$2}'|xargs kill -9 "
whois=`w|grep $user|awk '{print $2}'`;user=root; ssh $user@server -C "ps auwx|grep $whois|awk '{print \$2}'|awk '{print \"kill -9 \" \$1}'|/bin/sh"