Execute the last lines of a shell script after SSH disconnection - Linux

I have a bash shell script that is used to set up the network settings on a Linux machine; it will mainly be run over SSH. Here are the last few lines of the script.
service network stop
rm -rf $NETWORKFILE
touch $NETWORKFILE
echo NETWORKING=yes > $NETWORKFILE
echo HOSTNAME=$HOSTNAME >> $NETWORKFILE
mv $ETHFILE /etc/sysconfig/network-scripts/ifcfg-eth0
service network start
As you can see, to apply the network settings the script has to stop the network, apply the settings while it is down, and then start it again. Stopping the network disconnects the SSH session on the first line shown, which kills the script before the settings are applied. How can I have the shell script keep running these last few lines after the SSH session that started it is disconnected? Also, it needs to be done within the script itself, not by launching the script through screen or nohup.

Try completely disconnecting the script from the terminal by redirecting all standard streams and putting it in the background:
nohup script < /dev/null > script.log 2>&1 &
You can also put a "sleep 2" at the start of the script, so that after it is put in the background you have a moment to disconnect cleanly before the server drops the connection. This is just for convenience.
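If the detaching must happen inside the script itself, one option is to have the script re-execute itself in the background when it detects it has not yet been detached. A minimal sketch, assuming a guard variable DETACHED and a log path (both illustrative, not from the original question):
#!/bin/bash
# Re-exec this script detached from the terminal; DETACHED is an
# illustrative guard variable so the detached copy skips this block.
if [ -z "$DETACHED" ]; then
    DETACHED=1 nohup "$0" "$@" < /dev/null > /tmp/netconfig.log 2>&1 &
    exit 0
fi
sleep 2   # give the still-attached parent copy time to exit cleanly
# ... the service network stop / start lines from the question go here ...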

Maybe if you run your script in the background, get its PID, and disown the process, your script will continue running after the ssh session is disconnected.
./script &
pid=$!   # $! is the PID of the backgrounded script ($$ would be the current shell)
disown -h $pid
service network stop
rm -rf $NETWORKFILE
touch $NETWORKFILE
echo NETWORKING=yes > $NETWORKFILE
echo HOSTNAME=$HOSTNAME >> $NETWORKFILE
mv $ETHFILE /etc/sysconfig/network-scripts/ifcfg-eth0
service network start

Related

Why doesn't tcpdump run in background?

I logged in to a virtual machine via ssh and tried to run a script in the background; the script is shown below:
#!/bin/bash
APP_NAME=`basename $0`
CFG_FILE=$1
. $CFG_FILE #just some variables
CMD=$2
PID_FILE="$PIDS_DIR/$APP_NAME.pid"
CUR_LOG_DIR=$LOGS_RUNNING
echo $$ > $PID_FILE
#Main script code
#This script shall be called using the following syntax
# $ nohup script_name output_dir &
TIMESTAMP=`date +"%Y%m%d%H%M%S"`
CAP_INTERFACE="eth0"
/usr/sbin/tcpdump -nei $CAP_INTERFACE -s 65535 -w file_result
rm $PID_FILE
The result should be tcpdump running in the background, writing its capture to file_result.
The script is called with:
nohup $SCRIPT_NAME $CFG_FILE start &
And it is stopped by calling the STOP_SCRIPT:
##STOP_SCRIPT
PID_FILE="$PIDS_DIR/$APP_NAME.pid"
if [ -f $PID_FILE ]
then
PID=`cat $PID_FILE`
# send SIGTERM to kill all children of $PID
pkill -TERM -P $PID
fi
When I check file_result after running the stop script, it is empty.
What is happening? How can I solve it?
I found this link: https://it.toolbox.com/question/launching-tcpdump-processes-in-background-using-ssh-060614
The author seems to have faced a similar issue. They discuss race conditions, but I didn't completely understand it.
I'm not sure what you're trying to accomplish by having the startup script itself continue to run, but here's an approach that I think accomplishes what you're trying to do, namely start tcpdump and have it continue to run immune to hangups via nohup. I've simplified things a bit for illustrative purposes - feel free to add any variables back as you see fit, such as the nohup.out output directory, TIMESTAMP, etc.
Script #1: tcpdump_start.sh
#!/bin/sh
rm -f nohup.out
nohup /usr/sbin/tcpdump -ni eth0 -s 65535 -w file_result.pcap &
# Write tcpdump's PID to a file
echo $! > /var/run/tcpdump.pid
Script #2: tcpdump_stop.sh
#!/bin/sh
if [ -f /var/run/tcpdump.pid ]
then
kill `cat /var/run/tcpdump.pid`
echo tcpdump `cat /var/run/tcpdump.pid` killed.
rm -f /var/run/tcpdump.pid
else
echo tcpdump not running.
fi
To start tcpdump, just run tcpdump_start.sh.
To stop the tcpdump instance started with tcpdump_start.sh, just run tcpdump_stop.sh.
The captured packets will be written to the file_result.pcap file, and yes, it's a pcap file, not a text file, so it helps to name it with the proper file extension. The tcpdump statistics will be written to the nohup.out file when tcpdump is terminated.
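As a quick sanity check (not part of the original answer), you can read the capture back with tcpdump itself once some packets have been written:
tcpdump -nr file_result.pcap | head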
I too had faced problems when running tcpdump over an SSH session.
In my case, I was running
sudo nohup tcpdump -w {pcap_dump_file} {filter} > /dev/null 2>&1 &
Running this command as a background process over a Paramiko SSH session was the problem.
To get around this, I used the Linux screen utility.
screen is an easy-to-use tool for keeping long-running processes alive as a service.
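A minimal sketch of that approach (the session name and capture path are illustrative):
# Start the capture in a detached screen session named "capture"
screen -dmS capture sudo tcpdump -ni eth0 -s 65535 -w /tmp/capture.pcap
# Reattach later with: screen -r capture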
This might be an old post, but it is also relevant: I couldn't understand why no file was being created, only to realise that the file might not be created until a certain amount of data has been captured.
https://github.com/the-tcpdump-group/tcpdump/issues/485

How to execute a local bash script on remote server via ssh with nohup

I can run a local script on a remote server using the -s option, like so:
# run a local script without nohup...
ssh $SSH_USER#$SSH_HOST "bash -s" < myLocalScript.sh;
And I can run a remote script using nohup, like so:
# run a script on server with nohup...
ssh $SSH_USER#$SSH_HOST "nohup bash myRemoteScript.sh > results.out 2>&1 &"
But can I run my local script with nohup on the remote server? I expect the script to take many hours to complete, so I need something like nohup. I know I can copy the script to the server and execute it, but then I have to make sure I delete it once the script completes; I'd rather not have to do that if possible.
I've tried the following but it's not working:
# run a local script with nohup...
ssh $SSH_USER@$SSH_HOST "nohup bash -s > results.out 2>&1 &" < myLocalScript.sh;
You shouldn't have to do anything special: once you kick off a script on another machine, it should finish running even if you terminate the connection. (Your nohup attempt likely fails because backgrounding bash -s makes ssh return immediately, which closes the stdin pipe before the remote shell has read your local script.)
For example:
ssh $SSH_USER#$SSH_HOST "bash -s > results.out 2>&1" < myLocalScript.sh &
# Please wait a few seconds for the connection to be established
kill $! # Optional: Kill the last process
If you want to test it, try a simple script like this
# myLocalScript.sh
echo 'File created - sleeping'
sleep 30
echo 'Finally done!'
The results.out file should immediately be created on the other machine with "File created - sleeping" in it. You can actually kill the local ssh process with kill <your_pid>, and it will still keep running on the other machine, and after 30 seconds, print "Finally done!" into the file, and exit.
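If you also want the local ssh client itself to survive your local terminal closing, a sketch (not from the original answer) is to wrap the client side in nohup as well:
nohup ssh $SSH_USER@$SSH_HOST "bash -s > results.out 2>&1" < myLocalScript.sh > /dev/null 2>&1 &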

Wifi disconnected before init.d script is run

I've set up a simple init.d script, "S3logrotate", to run on shutdown. The script works fine when run manually from the command line, but it does not function correctly on shutdown.
The script uploads logs from my PC to an Amazon S3 bucket and requires wifi to run correctly.
Debugging proved that the script is actually run but the upload process fails.
The problem seems to be that the script runs after wifi is terminated.
These are the blocks I used to test my internet connection in the script.
if ping -q -c 1 -W 1 8.8.8.8 >/dev/null; then
echo "IPv4 is up" >> x.txt
else
echo "IPv4 is down" >> x.txt
fi
if ping -q -c 1 -W 1 google.com >/dev/null; then
echo "The network is up" >> x.txt
else
echo "The network is down" >> x.txt
fi
The output for this block is:
IPv4 is down
The network is down
Is there any way to set the priority of an init.d script? As in, can I make my script run before the network connection is terminated? If not, is there any alternative to init.d?
I use Ubuntu 16.04 and have dual booted with Windows 10 if that's significant.
You should place your script in:
/etc/NetworkManager/dispatcher.d/pre-down.d
and change its owner and group to root:
chown root:root S3logrotate
and it should work. If you need to do this for a specific interface, instead create a script inside
/etc/NetworkManager/dispatcher.d/
and name it (for example):
wlan0-down
and that should work too.
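A hedged sketch of such a dispatcher script (NetworkManager passes the interface and the event as arguments; the upload command is a placeholder for the S3 logic):
#!/bin/bash
# /etc/NetworkManager/dispatcher.d/pre-down.d/S3logrotate
IFACE="$1"   # interface the event concerns, e.g. wlan0
EVENT="$2"   # event name, e.g. pre-down
if [ "$EVENT" = "pre-down" ]; then
    /usr/local/bin/upload-logs-to-s3.sh   # placeholder for the S3 upload
fi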

ssh does not return even after execution

The following ssh command does not return to the terminal. It hangs even though the execution has completed. The execution hangs after the echo hi command.
ssh user@testserver "echo hello;source .profile;source .bash_profile;/apps/myapp/deploytools/ciInstallAndRun.sh; echo hi"
Output
hello
<output from remote script>
hi
ciInstallAndRun.sh
echo 'starting'
cd /apps/myapp/current
./tctl kill
cd /apps/myapp
mv myapp_v1.0 "myapp_v1.0_`date '+%Y%m%d%H%M'`"
unzip -o /apps/myapp/myappdist-bin.zip
java -classpath .:/apps/myapp/deploytools/cleanup.jar se.telenor.project.cleanup.Cleanup /apps/myapp myapp_v1.0_ 3
cd /apps/myapp/myapp_v1.0
echo 'Done with deploy'
chmod -R 775 *
echo 'Done'
./tctl start test
Source OS: Redhat
Dest Os: Solaris 10 8/07
Any idea how to fix this?
Your installation script has spawned a child process.
Add a ps -f or ptree $$ command before echo hi. You'll see a child process or multiple child processes spawned by your install script.
To stop the SSH command from hanging, you need to detach such child process(es) from your terminal's input/output. You can redirect your script's output to a file (both stdout and stderr, with > /some/output/file 2>&1) and also redirect its input with < /dev/null.
Or you can use the nohup command.
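For example (a sketch; the log path is illustrative, and the explicit redirections matter because nohup by itself does not detach stdin or redirect output that is already a pipe):
nohup ./tctl start test < /dev/null > /tmp/tctl.log 2>&1 &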
You haven't provided an MCVE, as others have noted, but this is likely the problem command in your install script, since your question implies that you see the expected output from your install script:
./tctl start test
You probably would do better to replace it with something like:
./tctl start test </dev/null >/some/log/file/path.log 2>&1

Kill ssh or\and remote process from bash script

I am trying to run the following command as part of a bash script which is supposed to open an ssh channel, run a program on the remote machine, save its output to a file for 10 seconds, kill the process that was writing to the file, and then give control back to the bash script.
#!/bin/bash
ssh hostname '/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null; sshpid=!$; sleep 10; kill -9 $sshpid 2>/dev/null &'
Unfortunately, what it seems to be doing is starting the program nodes-listener remotely, but it never gets any further and never gives control back to the bash script. So the only way to stop the execution is Ctrl+C.
Killing ssh doesn't help (or rather can't be executed), since control is not with the bash script: it waits for the command within the ssh session to complete, which of course never happens, as the process has to be killed to stop.
Here's the command line that you're running on the remote system:
/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null
sshpid=!$
sleep 10
kill -9 $sshpid 2>/dev/null &
You should change it to this:
/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null & <-- Ampersand goes here
sshpid=$!
sleep 10
kill -9 $sshpid 2>/dev/null
You want to start nodes-listener and then kill it after ten seconds. To do this, you need to start nodes-listener as a background process, so that the shell executing this command line can move on to the next command after starting nodes-listener. The & in your command line is in the wrong place: it would apply only to the kill command. You need to apply it to the nodes-listener command.
I'll also note that your sshpid=!$ line was incorrect. You want sshpid=$!. $! is the process ID of the last command started in the background.
You need to place the ampersand after the first command, then put the remaining commands onto the next line:
ssh hostname -- '/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null &
sshpid=$!; sleep 10; kill $sshpid 2>/dev/null'
Btw, ssh returns after all commands have been executed. This means it will close the allocated pty as well. If there are still background jobs running in that shell session, they will be killed by SIGHUP. This means you can probably omit the explicit kill command (depending on whether nodes-listener handles SIGHUP and SIGTERM differently). With that, you could simplify the code to the following:
ssh hostname -- sh -c '/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null &
sleep 10'
I have resolved this by pushing the shell script to the remote machine and executing it there. It is actually less tidy and relies on space being available on the remote computer.
Since my remote machine is a small physical device, the issue of the space usage is important (even for the tiny amount of space required in this case).
/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null &
sshpid=$!   # $!, not !$ (unused below, since killall is used instead)
sleep 20
sync
# killing nodes-listener process and giving control back to the base bash
killall -9 nodes-listener 2>/dev/null && echo "nodes-listener is killed"
