Bash startup script which opens a PPP and then executes a delayed command--how? - multithreading

I have a bash script I want to run at startup (Kubuntu 11.04) which opens my company's VPN. The VPN client in KDE doesn't like this VPN, so I need to do this manually. It opens a ppp0 connection, and then adds an ip route. Here's the issue: I need to run the pppd call command, and after about 5 seconds, add the ip route. I have the commands just fine, and running these two commands separately in two different shell sessions works dandy; my problem is that the pppd call command starts the ppp0 process, and unless I want to terminate that process, I can't run the ip route command IN THE SAME SCRIPT on startup.
Can I:
(1) thread these two commands somehow?
(2) write three scripts; a controller script, the pppd script, and the ip route script?
(3) Do this without being all kludgy?

Can't you just put the pppd process into the background? Something like:
function is_pppd_working {
    # do some test to check pppd is doing what it's meant to,
    # e.g. check that the ppp0 interface has come up
    ip link show ppp0 > /dev/null 2>&1
}
function setup_route {
    # do your ip route stuff
    :
}
pppd > somelogfile.log 2>&1 &
sleep 5
if is_pppd_working
then
    setup_route
else
    echo "pppd is not up" >&2  # uh oh!!
fi
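Another option, not part of the answer above: on Debian-based systems such as Kubuntu, pppd itself runs the scripts in /etc/ppp/ip-up.d/ once the link is up, so the route can be added there instead of sleeping in the startup script. A minimal sketch (the file name and the route are placeholders):

#!/bin/sh
# /etc/ppp/ip-up.d/90-company-route  (name chosen here as an example)
# pppd passes the interface name as $1 and the local IP address as $4
if [ "$1" = "ppp0" ]; then
    ip route add 10.0.0.0/8 dev "$1"    # placeholder; put your company's route here
fi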

Related

Linux script for probing ssh connection in a loop and start log command after connect

I have a host machine that gets rebooted or reconnected quite a few times.
I want to have a script running on my dev machine that continuously tries to log into that machine and if successful runs a specific command (tailing the log data).
Edit: To clarify, the connection needs to stay open. The log command keeps tailing until I stop it manually.
What I have so far
#!/bin/bash
IP=192.168.178.1
if (( "$#" >= 1 ))
then
    IP=$1
fi
LOOP=1
trap 'echo "stopping"; LOOP=0' INT
while (( LOOP == 1 ))
do
    if ping -c1 "$IP"
    then
        echo "Host $IP reached"
        sshpass -p 'password' ssh -o ConnectTimeout=10 -q user@"$IP" '<command would go here>'
    else
        echo "Host $IP unreachable"
    fi
    sleep 1
done
The LOOP flag is not really used. The script is ended via CTRL-C.
Now this works if I do NOT add a command to be executed after the ssh and instead start the log output manually. On a disconnect the script keeps probing the connection and logs back in once the host is available again.
Also when I disconnect from the host (CTRL-D) the script will log right back into the host if CTRL-C is not pressed fast enough.
When I add a command to be executed after ssh, the loop is broken: pressing CTRL-C does not only stop the log, but also disconnects and ends the script on the dev machine.
I guess I have to spawn another shell somewhere or something like that?
1) I want the script to keep probing, log in and run a command completely automatically and fall back to probing when the connection breaks.
2) I want to be able to stop the log on the host (CTRL-C) and thereby fall back to a logged in ssh connection to use it manually.
How do I fix this?
Maybe the best approach to "fixing" this would be to fix the requirements.
The problematic part is number "2)".
The problem comes from how SIGINT works.
When triggered, it is sent to the foreground process group of your terminal. Mostly this is the shell and any process started from there. With more modern shells (you seem to use bash), the shell manages process groups such that programs started in the background are detached (by being assigned a different process group).
In your case the ssh is started in the foreground (from a script executed in the foreground), so it receives the interrupt, forwards it to the remote side and terminates as soon as the remote end has terminated. Since by that time the script's shell has run its signal handler (specified by trap), it exits the loop and terminates itself.
So, as you can see, you have overloaded CTRL-C to mean two things:
terminate the monitoring script
terminate the remote command and continue with whatever is specified for the remote side.
You might get closer to what you want if you drop the first effect (or at least make it more explicit). The next step would then be to call a script on the remote side that does not terminate itself, but only the tail command. In that case you will likely need the -t switch on ssh to get a terminal allocated, so that normal shell operation is possible afterwards.
This will not allow terminating the remote side with just CTRL-C; you will always need to exit the remote shell that is started afterwards.
The essence of such a remote script might look like:
tail command
shell
of course you would need to add whatever parts will be necessary for your shell or coding style.
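For concreteness, a minimal sketch of such a remote script, assuming the log path from the other answer and the sshpass usage from the question (both are assumptions on my part):

#!/bin/bash
# remote side, e.g. ~/tailandshell.sh (name and log path are placeholders)
trap ':' INT                     # catch CTRL-C so the script itself survives
tail -f /home/user/log.txt       # CTRL-C stops only this tail...
exec "${SHELL:-/bin/bash}" -i    # ...and you land in an interactive shell

On the dev machine, inside the probing loop, the call would then allocate a terminal with -t:

sshpass -p 'password' ssh -t -o ConnectTimeout=10 user@"$IP" './tailandshell.sh'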
An alternative approach would be to keep the current behaviour of the remote command being terminated, and to add another ssh call, for the case of being interrupted, that spawns the shell for interactive use. But in that case CTRL-C will likewise not be available for terminating the monitoring altogether.
To get that back you might try changing the active interrupt handler in your monitoring script so that it triggers termination as soon as the remote side returns. However, this creates a race condition between the user recognizing that the remote command has terminated (and control has returned to the local script) and the proper interrupt handler being in place. You might be able to lower that risk sufficiently by first activating the new trap handler, then echoing the fact, and maybe adding a sleep to allow the user to react.
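A sketch of that alternative, to be placed inside the existing while loop (the log path, the sleep length and the continued use of sshpass are assumptions):

# while the remote tail runs, CTRL-C should stop only the tail, not this loop
trap ':' INT
sshpass -p 'password' ssh -o ConnectTimeout=10 -q user@"$IP" 'tail -f /home/user/log.txt'
# the remote side returned: now let CTRL-C end the monitoring, and give the
# user a moment to press it before the interactive session opens
trap 'LOOP=0' INT
echo "tail ended - CTRL-C now stops monitoring, otherwise a shell opens"
sleep 3
if (( LOOP == 1 )); then
    sshpass -p 'password' ssh -t -o ConnectTimeout=10 user@"$IP"
fi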
Not really sure what you are saying.
Also, you should disable PasswordAuthentication in /etc/ssh/sshd_config and log in by adding the public key of your home computer to ~/.ssh/authorized_keys:
#!/bin/sh
while true
do
    RESPONSE=$(ssh -i /home/user/.ssh/id_host user@"$IP" 'tail /home/user/log.txt')
    echo "$RESPONSE"
    sleep 10
done

Run a script if no SSH connection

I'm trying to get a script to run at startup, but does nothing if I've connected to my Raspberry Pi via SSH.
So far I've got the crontab to automatically run the script checkssh.sh via @reboot sleep 30 && sudo bash ./checkssh.sh and './checkssh.sh' contains this:
#!/bin/bash
if [ -n "$SSH_CLIENT" ] || [ -n "$SSH_TTY" ]; then
    echo "SSH CONNECTED"
else
    ./autobackup.sh
fi
Running checkssh.sh from an SSH terminal returns 'SSH CONNECTED' which is expected, and letting it run automatically from the crontab at reboot when SSH isn't connected works correctly. However, when it runs at boot and I connect via SSH as soon as it's available, it still runs the script. I'm not sure where this is going wrong.
I need it to run automatically and if there's no SSH connection run autobackup.sh , but if there is an SSH connection, not to run anything. The device I use for the SSH connection may vary & the network used may also vary, so a script that relies on specific IP's isn't ideal.
Thanks for any help :)
Those environment variables (SSH_CLIENT and SSH_TTY) are only set in the environment of an SSH session. You cannot check them from another process and expect them to fulfill your goals here.
Instead, run the program finger. This is the standard way to see who is logged in.
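As a concrete illustration (using who rather than finger, which is my own substitution since both list active sessions), checkssh.sh could look roughly like this; on a headless Pi any pts entry is effectively an SSH session:

#!/bin/bash
# SSH logins show up in 'who' as pts entries with the client host in parentheses
if who | grep -q ' pts/'; then
    echo "SSH CONNECTED"
else
    ./autobackup.sh
fi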
Probably you need to add some delay before running your script, to allow the SSH service to come up. If the cron service comes up before sshd does, you will have a failure. Try:
@reboot sleep 60 && bash ./checkssh.sh
Also I would substitute the '.' with the full script path. In one scenario I had to add as many as 120 seconds to get the @reboot crontab to work right, but ssh should not need that much. You can trim the 60 seconds according to your needs once you get it working.
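If a fixed sleep feels fragile, a variant (my own suggestion, not part of the answer above) is to poll until sshd is actually running before doing the check:

#!/bin/bash
# wait up to about two minutes for sshd to appear, then carry on
for i in $(seq 1 24); do
    pgrep -x sshd > /dev/null && break
    sleep 5
done
# ... the SSH-connection check and the autobackup.sh call go here ...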

Passing Arguments to Running Bash Script

I have a bash script that takes a list of IP addresses and pings them every 15 seconds to test connectivity. Some of these IP addresses belong to servers and computers that I control. I would like to be able to do something along the lines of the following:
Run The Bash File
It pings non-controlled IP Addresses
It will list the controlled Computers
When a computer turns off, it sends my script a response saying it turned off
The script outputs accordingly
I have the code all set up that pings these computers every 15 seconds and displays the result. What I want is to NOT ping my controlled computers; instead, they will send a command to the bash script. I know this can be done by writing to a file and reading that file, but I would like a way that changes the display AS IT HAPPENS. Would mkfifo be a viable option?
Yes, mkfifo is ok for this task. For instance, this:
mkfifo ./commandlist
while read f < ./commandlist; do
    # Actions here
    echo "$f"
done
will wait until a new line can be read from FIFO commandlist, read it into $f and execute the body.
From the outside, write to the FIFO with:
echo 42 > ./commandlist
But why not let the remote server call this script, perhaps via SSH or even CGI? You could set up a /notify-disconnect CGI script with no parameters and get the IP address of the peer from the REMOTE_ADDR environment variable.
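One detail worth sketching (my addition, not part of the answer above): to keep the 15-second ping round going while still reacting to FIFO messages as they arrive, you can hold the FIFO open on a spare file descriptor and use read with a timeout. The IP address below is a placeholder:

mkfifo ./commandlist 2>/dev/null      # ignore the error if it already exists
exec 3<> ./commandlist                # keep it open so the read never blocks on open
while true; do
    if read -t 15 -u 3 msg; then
        echo "controlled host says: $msg"   # update the display immediately
    else
        # no message within 15 s: do the regular ping round
        ping -c1 192.0.2.10 > /dev/null && echo "192.0.2.10 up" || echo "192.0.2.10 down"
    fi
done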

SSH Persistent Connection Timeout

I setup an ssh tunnel using a bash script, and the ssh tunnel is configured as a shared persistent connection tunnel.
At the end of my script, though, I have it set up to invoke a close command against the tunnel and to delete the .ssh/config file, so that it doesn't remain open and neither do subsequent ssh tunnels that are manually started by a user.
Question is this... what is the best way to handle this issue of making sure the tunnel is closed in case someone ctrl+c the script or it crashes for some reason in the middle of the script before it invokes the close command and deletes the config file? I was going to add a timeout to the control master, but I cannot determine what I need to use based on my readings in the ssh_config man page.
Try to use trap:
#!/bin/bash
on_sigint(){
    echo "this function is called on ctrl+c"
}
trap "on_sigint" SIGINT SIGTERM
echo start
# Do what you want
# ...
echo stop
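As for the ControlMaster timeout itself: the ssh_config option is ControlPersist, and the trap can be combined with ssh -O exit to tear the master down. A minimal sketch, with a socket path and host alias of my own choosing:

#!/bin/bash
SOCKET="$HOME/.ssh/tunnel.sock"   # assumed control socket path
HOST=remote-host                  # assumed host alias

cleanup() {
    # close the shared master connection if it is still up
    if ssh -O check -S "$SOCKET" "$HOST" 2>/dev/null; then
        ssh -O exit -S "$SOCKET" "$HOST"
    fi
    rm -f ~/.ssh/config           # as in the original script
}
trap cleanup EXIT                 # runs on normal exit and after the INT/TERM traps below
trap 'exit 130' INT TERM

# ControlPersist=10m additionally makes the master exit on its own ten minutes
# after the last connection using it has closed, even if cleanup never runs
ssh -f -N -M -S "$SOCKET" -o ControlPersist=10m "$HOST"

# ... rest of the script using the shared connection ...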

Changing network ip in a bash script started from a NFS-mounted folder

I wrote a simple Bash script to change the network address of a Linux Host:
#!/bin/sh
REMOTE_HOST=192.168.2.127   # Default Host address
NEW_IP=192.168.30.33        # New IP I want to set
NEW_GW=192.168.30.1         # New Gateway I want to set
sudo ifconfig eth0 192.168.2.1    # Moving to the right network...
#ping $REMOTE_HOST -c 3           # I can correctly ping the host here...
ssh-copy-id root@${REMOTE_HOST} > /dev/null   # ...for my comfort...
# Setting the network with new values for the IP addr and the GW...
COMMAND="sed -i 's#address *\\([0-9.]\\+\\)#address ${NEW_IP}#' /etc/network/interfaces\
 && sed -i 's#gateway *\\([0-9.]\\+\\)#gateway ${NEW_GW}#' /etc/network/interfaces"
ssh root@${REMOTE_HOST} $COMMAND
# done!
# Now restart the network services:
ssh root@${REMOTE_HOST} "/etc/init.d/networking restart &" &   # (Note the 2nd '&' !!!)
# Come back to my old IP
sudo ifconfig eth0 192.168.30.10
sudo route add default gw 192.168.30.1
This script works almost perfectly but:
1) If I run it from my home folder, no problems; if I run it from an NFS-shared folder, the script hangs for a minute or two before ending correctly
2) If I omit the second '&' when restarting the network on the host the command never returns...
The questions are:
1) What causes the long wait (NFS, different IP address, different gateway)? And is it possible to work around it?
2) Why it happens? How could I avoid it?
Thanks for any kind of help and sorry for my bad English!
You're restarting networking services, which drops all active connections.
Bash reads the script file as it executes it. Since the script lives on an NFS (Network File System) share, restarting the network cuts the connection to that file, so the shell cannot read and execute the lines after the networking restart until the connection is re-established.
Instead, you should first make a local copy of the entire script and then run it locally.
You could also code a script for that ;-)
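A minimal sketch of such a self-copying wrapper (the temp path and the guard variable are my own choices):

#!/bin/sh
# if we are still running from the NFS share, copy ourselves to local disk and re-exec
if [ "${RUNNING_LOCALLY:-0}" != "1" ]; then
    TMP=$(mktemp /tmp/netswitch.XXXXXX) || exit 1
    cp "$0" "$TMP" && chmod +x "$TMP"
    export RUNNING_LOCALLY=1
    exec "$TMP" "$@"
fi
# ... the original ifconfig/ssh commands follow here ...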
