How to wait for first rsync process to complete before running next command in shell/bash script - linux

Below is the script I have. Basically I just want to copy files from the other server by calling this script. Some files are large, and what happens is that the script kills the first rsync command before it completes and proceeds to the next. I tried to use the screen command, but I'm not sure how to script Ctrl+a d (to detach) in shell/bash.
HFDIR=/var/opt/ubkp/data/local/prework/hotfixes
RODIR=/var/opt/ubkp/data/local/prework/rollouts
THFDIR=$(ls -t /var/opt/ubkp/data/local | grep hotfix | head -1)
TRODIR=$(ls -t /var/opt/ubkp/data/local | grep rollout | grep -v check | head -1)
user=$(/usr/seos/bin/sewhoami)

if [ "$user" = "root" ]; then
  echo "This script should not be run as the TRUE root user"
  echo "Log in so that \"sewhoami\" does not display \"root\" and then execute this script."
  exit
else
  # list of ROs and HFs
  list=/tmp/list.txt
  echo -n "Enter Password: "
  read -s PWD
  # first rsync command
  /usr/bin/expect <<EOD
spawn rsync -a $user@server:$HFDIR/* /var/opt/ubkp/data/local/$THFDIR
expect "assword"
send "$PWD\r"
wait $!
expect eof
EOD
  # second rsync command
  /usr/bin/expect <<EOD
spawn rsync -a $user@server:$RODIR/* /var/opt/ubkp/data/local/$TRODIR
expect "assword"
send "$PWD\r"
expect eof
EOD
fi
exit

Your second rsync will be killed after 10 seconds, as that is the default timeout for expect eof. You should add a wait after the send, to wait indefinitely until the process ends.
Also, you should remove the $! in the wait. It is a shell variable, not an expect variable. Fortunately, in this case $! is empty because you have not run any commands in the shell in the background with &.
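For example, the first heredoc could become (a minimal sketch: set timeout -1 removes the 10-second limit, and the trailing wait is expect's own command, with no shell $!):
/usr/bin/expect <<EOD
set timeout -1
spawn rsync -a $user@server:$HFDIR/* /var/opt/ubkp/data/local/$THFDIR
expect "assword"
send "$PWD\r"
expect eof
wait
EOD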

Related

Shell script is not generating the log files

I am trying to capture the netstat command logs every minute. I have written a script that runs in a loop, but it only executes up to the "capturing logs" statement in the test.sh code.
test.sh
#!/bin/sh
export TODAY=`date`
export i=0
while true
do
    echo "capturing logs" $i
    sh test1.sh > test$i.log
    echo "sleeping for 1m"
    sleep 60
    i=$((i+1))
done
test1.sh
#!/bin/sh
netstat -l 5575 | while IFS= read -r line; do
    printf '[%s] %s\n' "$(date '+%Y-%m-%d %H:%M:%S')" "$line"
done
The output from the above script is:
capturing logs
(If I press Ctrl-C, it moves on and displays the "sleeping for 1m" statement, and I have to press Ctrl-C again when it reaches the "capturing logs" statement.)
sh test1.sh > test$i.log
is waiting for test1.sh to finish, which probably takes far too long to complete.
Try executing test1.sh on another tty, like this:
setsid sh -c 'exec [launch the script] <> /dev/tty[number_of_tty] >&0 2>&1'
and let me know.
Be careful not to run a lot of processes on the same tty; you can vary [number_of_tty] to avoid this.
It may or may not solve the problem, but it's worth trying.
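Concretely, with the question's script and tty 3 as an illustrative choice:
setsid sh -c 'exec sh test1.sh <> /dev/tty3 >&0 2>&1'
This runs test1.sh in its own session with its I/O attached to /dev/tty3 rather than the current terminal.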

TCL Expect kills child of child process

I have an expect script like this
#!/usr/bin/expect -f
set timeout 30
log_user 0
set PASSWORD $::env(PASSWORD)
set USERNAME $::env(USERNAME)
set TOKEN $::env(TOKEN)
puts stderr "Generating OTP"
spawn oathtool --totp $TOKEN
expect -re \\d+
set otp $expect_out(0,string)
puts stderr "Connecting to VPN server"
spawn -ignore HUP env openconnect -b https://vpn
expect "GROUP:"
send "Tech\n"
expect "Username:"
send "$USERNAME\n"
expect "Password:"
send "$PASSWORD\n"
expect "Password:"
send "$otp\n"
expect EOF
This simple script provides the user and password to openconnect to make a new VPN connection in the background, but it won't work because the spawned child processes are killed by expect. As you may know, expect sends a SIGHUP signal before it finishes. I tried to work around that, but even when I put in the -ignore HUP flag, it kills the underlying process. I would like my script to end while the underlying openconnect survives in the background.
Do you know what is missing here?
Take into account that openconnect -b will spawn another PID of its own.
The following method, using two script files, worked for me.
The -b flag of openconnect is not used; the kill command is used instead to send openconnect to the background.
contents of file named vpn2:
#!/usr/bin/expect -f
set timeout -1
spawn -ignore HUP -noecho /root/bin/v2vpn2
expect "password"
sleep 3
send -- "my_password\r"
expect "SMS OTP"
interact
expect "Established"
expect eof
contents of file named v2vpn2:
#!/bin/bash
rm /var/log/vpn2.log > /dev/null 2>&1
touch /var/log/vpn2.log
# the word password is printed twice, so filter for the second occurrence here
tail -f /var/log/vpn2.log | grep -m2 -wo "password" | sed '2q;d' &
tail -f /var/log/vpn2.log | grep --color=never -wo "SMS OTP" &
while /bin/true; do
    grep -q "Established" /var/log/vpn2.log
    if (( $? == 0 )); then
        kill -STOP `pgrep openconnect`
        kill -CONT `pgrep openconnect`
        pkill vpn2
        exit
    fi
done &
openconnect -u "my_user_name" my_vpn_url >> /var/log/vpn2.log 2>&1
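A hedged usage note (the vpn2 path is an assumption; only v2vpn2's /root/bin location appears above):
chmod +x /root/bin/vpn2 /root/bin/v2vpn2
/root/bin/vpn2
The interact step hands the terminal to you for the SMS OTP; once "Established" shows up in /var/log/vpn2.log, the watcher loop runs pkill vpn2 to drop the expect wrapper while openconnect keeps writing to the log.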
After spending too much time on this, I solved it by adding
expect -timeout -1 -ex "Client killed"
and calling the script with &:
./vpn.exp &
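For reference, a rough sketch of how those pieces fit together (a guess at the final script; "Client killed" is taken from the answer above and is assumed to be the message openconnect prints when the backgrounded child takes over):
#!/usr/bin/expect -f
set timeout 30
spawn -ignore HUP env openconnect -b https://vpn
# ... the GROUP/Username/Password/OTP exchanges from the question ...
# instead of expect EOF with a finite timeout, wait indefinitely for the
# parent openconnect to report that the background child has taken over
expect -timeout -1 -ex "Client killed"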

How can I quit bash when I accidentally run a while loop with a pipe

I ran this command:
$(while true;do echo Something && sleep 0.01; done;) | cat
Now I cannot exit with Ctrl+C or background it with Ctrl+Z, and ps aux can't tell me which bash it is. How can I quit that bash?
EDIT
I narrowed the PID down by searching for the cwd with pgrep bash | (while read -r line; do lsof -p $line|grep cwd|grep EXPECTED_CWD && echo "GOT $line"; done;), and finally killed that process. There may be an easier way to find it, but there is no /proc on macOS.
Closing the terminal can help.
If you are using a CLI-only OS, you can switch to a different terminal using the Ctrl+Alt+F1 keys.
There you can use the who or ps commands to get the process ID and kill it.
ps -ef will list all processes.
You can kill the second-to-last bash process, as that will be the last command that you executed.
Note: in Linux, newly assigned process IDs are normally higher than older ones (until the PID counter wraps around).
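For example, from the other tty (the PID here is illustrative):
ps -ef | grep [b]ash   # the [b] keeps grep itself out of the results
kill 12345             # try TERM first; kill -9 12345 only if it is ignored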
You can execute your command with the & symbol at the end to run it in the background.
In your case the command looks like:
$(while true;do echo Something && sleep 0.01; done;) | cat &
The shell will show the job's process ID, and you'll be able to kill it whenever you want.
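Since it is a job of the current shell, you can also address it by jobspec instead of PID:
$(while true;do echo Something && sleep 0.01; done;) | cat &
jobs      # shows something like: [1]+ Running ... | cat &
kill %1   # terminates the whole background pipeline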

Bash: Using SSH to start a long-running remote command and collect its PID

When I do the following, I have to press Ctrl-C afterwards or the shell acts weird. The left/right arrow keys, for example, don't move the cursor correctly and the text gets messed up.
# read -r pid < <(ssh 10.10.10.46 'sleep 50 & echo $!') ; echo $pid
2135
# Killed by signal 2.
^C
#
I need this for a script, so I'd like to know why Ctrl-C is needed and whether it is possible to work around it.
Update
It looks like it opens an extra Bash shell, and that is the one that needs to be exited.
The command I am actually interested in is:
read -r pid < <(ssh 10.10.10.46 "mbuffer -4 -v 0 -q -I 8023 > /tmp/mtest & echo $!"); echo $pid
Try this instead:
read -r pid \
< <(ssh 10.10.10.46 'nohup mbuffer >/tmp/mtest </dev/null 2>/tmp/mtest.err & echo $!')
Three important changes:
Use of nohup (you could also get a similar effect with the bash built-in disown)
Redirection of stdin and stderr to files (preventing them from holding handles that connect, eventually, to your terminal).
Use of single quotes for the remote command (with double-quotes, expansions happen before ssh is started, so the $! you get is the PID of the most recently started local background process).
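As a quick check of the pattern (sleep stands in for mbuffer here; the IP is the one from the question):
read -r pid < <(ssh 10.10.10.46 'nohup sleep 300 >/dev/null 2>&1 </dev/null & echo $!')
echo "$pid"                  # remote PID, printed immediately; the local terminal stays usable
ssh 10.10.10.46 "kill $pid"  # later, stop the remote process via its collected PID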

Terminate infinite loop initiated on remote server when exiting bash script

A script which executes commands in an infinite loop in the background:
<SOMETHING ELSE AT START OF SCRIPT>
cmd='while true;
do
ps aux | head;
sleep 1;
done > $FILE'
ssh root@$SERVER $cmd &
...
...
<SOME OTHER TASKS>
...
...
(At the end of this script, how do I kill the above snippet executing on the remote server?)
[Kindly note that I don't want to wait, as the while loop is infinite.]
I have read and tried some posts from Stack Overflow, but could not find an exact solution for this problem.
Rather than an infinite loop, use a sentinel file:
cmd='while [ -r /tmp/somefile ];
do
# stuff
done > $FILE'
ssh root@$SERVER touch /tmp/somefile
ssh root@$SERVER $cmd &
# do other stuff
ssh root@$SERVER rm -f /tmp/somefile
This follows your current practice of putting the remote command in a variable, but the arguments against that cited elsewhere should be considered.
If you want to kill the ssh process running in the background at the end of your script, just do:
kill $!
I assume this is the only (or the last) process you started in the background.
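If other background jobs might start later, capture the PID immediately (a sketch; note this only kills the local ssh client, and the remote loop may survive the disconnect, which is an argument for the sentinel-file approach above):
ssh root@$SERVER $cmd &
ssh_pid=$!           # PID of this particular background ssh
# ... other tasks ...
kill "$ssh_pid"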
Try the following sequence:
Ctrl+Z
fg
Ctrl+C
or:
jobs
kill %jobspec
To kill everything belonging to the logged-in user, you could try:
whois=`w|grep $user|awk '{print $2}'`; user=root; ssh $user@server -C "ps auwx|grep $whois|awk '{print \$2}'"
This will list all the processes owned by the user you just logged in as; to kill them, just add |xargs kill -9:
whois=`w|grep $user|awk '{print $2}'`; user=root; ssh $user@server -C "ps auwx|grep $whois|awk '{print \$2}'|xargs kill -9"
or:
whois=`w|grep $user|awk '{print $2}'`; user=root; ssh $user@server -C "ps auwx|grep $whois|awk '{print \$2}'|awk '{print \"kill -9 \" \$1}'|/bin/sh"
