Passing Arguments to Running Bash Script - linux

I have a bash script that takes a list of IP addresses and pings them every 15 seconds to test connectivity. Some of these IP addresses belong to servers and computers that I control. I would like to be able to do something like the following:
Run The Bash File
It pings non-controlled IP Addresses
It will list the controlled Computers
When a computer turns off, it sends my script a response saying it turned off
The script outputs accordingly
I have the code all set up that pings these computers every 15 seconds and displays the results. What I wish to achieve is to NOT ping my controlled computers; instead, they will send a command to the bash script. I know this can be done by writing a file and reading that file, but I would like a way that changes the display AS IT HAPPENS. Would mkfifo be a viable option?

Yes, mkfifo is ok for this task. For instance, this:
mkfifo ./commandlist
while read f < ./commandlist; do
    # Actions here
    echo "$f"
done
will wait until a line can be read from the FIFO commandlist, read it into $f, and execute the loop body.
From the outside, write to the FIFO with:
echo 42 > ./commandlist
But why not let the remote server call this script, perhaps via SSH or even CGI? You can set up a /notify-disconnect CGI script with no parameters and get the IP address of the peer from the REMOTE_ADDR environment variable.
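A minimal sketch of such a handler, assuming the FIFO lives at /path/to/commandlist and that a plain "disconnect <ip>" line is the command format your monitoring loop expects:
#!/bin/sh
# Hypothetical /notify-disconnect handler; the web server provides REMOTE_ADDR.
# Reply to the client first, then push the peer's IP into the FIFO that the
# monitoring loop reads from (the write blocks briefly until the loop reopens it).
echo "Content-Type: text/plain"
echo ""
echo "ok"
echo "disconnect $REMOTE_ADDR" > /path/to/commandlist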

Related

Bash or systemd method to run shell script continuously after current task is finished

I'm using a program to scan a huge IP segment (a /8 address block) and save the output in a directory on Linux.
To run the program, I wrote a script named scan.sh that looks like this:
#!/bin/sh
while IFS= read -r IP || [ -n "$IP" ]; do
    /path/to/program "$IP" > "/output/path/$IP.xml"
done < ip.txt
Since the address range is so huge, it is not feasible to fit all the IPs in one file, so I split them into chunks and
run the script concurrently like this. Let's name this allscan.sh:
#!/bin/bash
/path/script/0-4/scan.sh & /path/script/5-10/scan.sh & /path/script/11-15/scan.sh &
Obviously, the real script is much longer than the above, but you get the idea. The 0-4, 5-10, ... folders hold the IP addresses split into small chunks.
During the first run, it took 28 days to finish.
My question is: how do I keep it running continuously, restarting once the current run is finished?
I don't think a monthly cronjob is suitable for this, because if a run takes more than 30 days the cronjob would start the script again on top of it and create unnecessary load on the server.
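A rough sketch of the restart-when-finished idea, reusing the allscan.sh paths (the structure is an assumption, not a tested solution): wrap the concurrent chunks in an endless loop and let wait block until every chunk of the current pass has finished.
#!/bin/bash
# Start all chunks, wait for every one of them, then start over.
while true; do
    /path/script/0-4/scan.sh &
    /path/script/5-10/scan.sh &
    /path/script/11-15/scan.sh &
    wait    # blocks until all background chunks of this pass have finished
done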

Linux script for probing ssh connection in a loop and start log command after connect

I have a host machine that gets rebooted or reconnected quite a few times.
I want to have a script running on my dev machine that continuously tries to log into that machine and if successful runs a specific command (tailing the log data).
Edit: To clarify, the connection needs to stay open. The log command keeps tailing until I stop it manually.
What I have so far
#!/bin/bash
IP=192.168.178.1
if (("$#" >= 1))
then
IP=$1
fi
LOOP=1
trap 'echo "stopping"; LOOP=0' INT
while (( $LOOP==1 ))
do
if ping -c1 $IP
then
echo "Host $IP reached"
sshpass -p 'password' ssh -o ConnectTimeout=10 -q user#$IP '<command would go here>'
else
echo "Host $IP unreachable"
fi
sleep 1
done
The LOOP flag is not really used. The script is ended via CTRL-C.
Now this works if I do NOT add a command to be executed after the ssh and instead start the log output manually. On a disconnect the script keeps probing the connection and logs back in once the host is available again.
Also when I disconnect from the host (CTRL-D) the script will log right back into the host if CTRL-C is not pressed fast enough.
When I add a command to be executed after ssh, the loop is broken: pressing CTRL-C not only stops the log but also disconnects and ends the script on the dev machine.
I guess I have to spawn another shell somewhere or something like that?
1) I want the script to keep probing, log in and run a command completely automatically and fall back to probing when the connection breaks.
2) I want to be able to stop the log on the host (CTRL-C) and thereby fall back to a logged in ssh connection to use it manually.
How do I fix this?
Maybe the best approach to "fixing" this would be fixing the requirements.
The problematic part is number "2)".
The problem stems from how SIGINT works.
When triggered, it is sent to the foreground process group of your terminal. Mostly this is the shell and any process started from there. With more modern shells (you seem to use bash), the shell manages process groups such that programs started in the background are disconnected (by having been assigned a different process group).
In your case the ssh is started in the foreground (from a script executed in the foreground), so it will receive the interrupt, forward it to the remote side, and terminate as soon as the remote end has terminated. Since by that time the script's shell has also processed its own signal handler (specified by trap), it will exit the loop and terminate itself.
So, as you can see, you have overloaded CTRL-C to mean two things:
terminate the monitoring script
terminate the remote command and continue with whatever is specified for the remote side.
You might get closer to what you want if you drop the first effect (or at least make it more explicit). Then calling a script on the remote side that terminates just the tail command, not itself, will be a step forward. In that case you will likely need to use the -t switch on ssh to get a terminal allocated, so that normal shell operation is possible afterwards.
This will not allow terminating the remote side with just CTRL-C, though. You will always need to exit the remote shell that is started afterwards.
The essence of such a remote script might look like:
tail command
shell
of course you would need to add whatever parts will be necessary for your shell or coding style.
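A rough sketch of such a remote script, started via ssh -t as described above; the log path and the no-op trap are assumptions for illustration:
#!/bin/bash
# The no-op trap keeps this script alive when CTRL-C kills the tail below,
# so we fall through to the interactive shell instead of disconnecting.
trap ':' INT
tail -f /var/log/app.log
# tail was interrupted; hand the session over to an interactive shell
exec bash -i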
An alternate approach would be to keep the current behaviour of the remote command being terminated and add another ssh call, for the case of being interrupted, that spawns the shell for interactive use. But in that case CTRL-C will likewise not be available for terminating the monitoring altogether.
To achieve this you might try changing the active interrupt handler within your monitoring script so that it triggers termination as soon as the remote side returns. However, this will cause a race condition between the user recognizing that the remote command has terminated (and control has returned to the local script) and the proper interrupt handler being in place. You might be able to lower that risk sufficiently by first activating the new trap handler, then echoing the fact, and maybe adding a sleep to give the user time to react.
Not really sure what you are saying.
Also, you should disable PasswordAuthentication in /etc/ssh/sshd_config and log in by adding the public key of your home computer to ~/.ssh/authorized_keys.
#!/bin/sh
while true; do
    RESPONSE=$(ssh -i /home/user/.ssh/id_host user@"$IP" 'tail /home/user/log.txt')
    echo "$RESPONSE"
    sleep 10
done
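For the key setup mentioned above, ssh-copy-id is the usual shortcut; the key path, user and IP below are placeholders:
# Generate a key pair (if you do not have one yet) and install the public key
# on the remote host, so the script above can log in without a password.
ssh-keygen -t ed25519 -f /home/user/.ssh/id_host
ssh-copy-id -i /home/user/.ssh/id_host.pub user@192.168.178.1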

DIG returns IP value consistently when run from shell, not from bash script

I run this script every 15 minutes to record the IPs of four dynamic hosts I have set up at duckdns.org.
DUCKDNS=( "sub1.duckdns.org"
          "sub2.duckdns.org"
          "sub3.duckdns.org"
          "sub4.duckdns.org" )
function resolveIP () {
    CURRENT_IP=$(dig +short "$1")
    echo "$CURRENT_IP"
}
for HOST in "${DUCKDNS[@]}"
do
    IP=$(resolveIP "$HOST")
    echo -e "$HOST ... $IP"
done
When I run the script from the shell, I receive an IP value every time. When I run the script via cron, the script returns null values sporadically.
I believe the issue is related to a 2(SERVFAIL) error I noticed when running host (not dig). This is the response from host:
host sub1.duckdns.org
sub1.duckdns.org has address 66.66.66.66 (NOTE: This value is correct)
Host sub1.duckdns.org not found: 2(SERVFAIL)
sub1.duckdns.org mail is handled by 50 sub1.duckdns.org.
Though an error is returned, the IP address is correct and the script returns it. What is troubling is that it's not consistent: the script extracts the value every time when run from the shell, but only most of the time when run via cron.
After setting up my subdomains with DuckDNS.org, I've read that it may not have been a wise choice. Perhaps it's just an issue with the DuckDNS service? If so, any recommendations for other dynamic DNS providers?
I feel like I'm living a bash version of the double slit experiment. Any thoughts on what could be causing this null IP behavior?
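As a rough sketch only (the resolver 1.1.1.1 and the retry count are arbitrary choices, not something from this thread): if the goal is simply to survive the sporadic failures under cron, resolveIP could retry a few times when the answer comes back empty.
function resolveIP () {
    local ip attempt
    for attempt in 1 2 3; do
        # query a specific resolver and stop as soon as we get a non-empty answer
        ip=$(dig +short @1.1.1.1 "$1")
        [ -n "$ip" ] && break
        sleep 2
    done
    echo "$ip"
}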

how to send different commands to multiple hosts to run programs in Linux

I am an R user. I always run programs on multiple computers on campus. For example, if I need to run 10 different programs, I need to open PuTTY 10 times to log into 10 different computers and submit one program to each of them (their OS is Linux). Is there a way to log in to 10 different computers and send them commands at the same time? I use the following commands to submit programs:
nohup Rscript L_1_cc.R > L_1_sh.txt
nohup Rscript L_2_cc.R > L_2_sh.txt
nohup Rscript L_3_cc.R > L_3_sh.txt
First set up ssh so that you can log in without entering a password (google for that if you don't know how). Then write a script that uses ssh to run the command on each remote host. Below is an example.
#!/bin/bash
host_list="host1 host2 host3 host4 host5 host6 host7 host8 host9 host10"
for h in $host_list
do
    case $h in
        host1)
            ssh $h nohup Rscript L_1_cc.R > L_1_sh.txt
            ;;
        host2)
            ssh $h nohup Rscript L_2_cc.R > L_2_sh.txt
            ;;
    esac
done
This is a very simplistic example. You can do much better than this (for example, you can put the ".R" and the ".txt" file names into a variable and use that rather than explicitly listing every option in the case).
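A rough sketch of that variable-based variant, assuming host1 runs job 1, host2 runs job 2, and so on (that mapping is my assumption, not the answer's):
#!/bin/bash
n=1
for h in host1 host2 host3 host4 host5 host6 host7 host8 host9 host10
do
    # the job number n follows the host's position in the list
    ssh "$h" "nohup Rscript L_${n}_cc.R" > "L_${n}_sh.txt"
    n=$((n + 1))
done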
Based on your topic tags I am assuming you are using ssh to log into the remote machines. Hopefully the machine you are using is *nix based so you can use the following script. If you are on Windows consider cygwin.
First, read this article to set up public key authentication on each remote target: http://www.cyberciti.biz/tips/ssh-public-key-based-authentication-how-to.html
This will prevent ssh from prompting you to input a password each time you log into every target. You can then script the command execution on each target with something like the following:
#!/bin/bash
#kill script if we throw an error code during execution
set -e
#define hosts
hosts=( 127.0.0.1 127.0.0.1 127.0.0.1)
#define associated user names for each host
users=( joe bob steve )
#counter to track iteration for correct user name
j=0
#iterate through each host and ssh into each with a user@host combo
for i in ${hosts[*]}
do
    #modify the ssh command string as necessary to get your script to execute properly
    #you could even add commands to transfer the file into which you seem to be dumping your results
    ssh ${users[$j]}@$i 'nohup Rscript L_1_cc.R > L_1_sh.txt'
    let "j=j+1"
done
#exit no error
exit 0
If you set up public key authentication, you should just have to execute your script to make every remote host do its thing. You could even look into loading the user/host data from a file to avoid having to hard-code that information in the arrays.
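A rough sketch of that file-driven variant, assuming a hosts.txt with one "user host" pair per line:
#!/bin/bash
# hosts.txt contains one "user host" pair per line, e.g.:
#   joe   127.0.0.1
#   bob   127.0.0.2
while read -r user host
do
    # -n stops ssh from swallowing the rest of hosts.txt on stdin
    ssh -n "${user}@${host}" 'nohup Rscript L_1_cc.R > L_1_sh.txt'
done < hosts.txt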

RSH: Running out of ports

I have an issue where I am running out of ports when using RSH to start a script remotely.
I have a script that I need to run that has been pushed out to every server.
I have a list of servers (hostfilelist)
Basically, I have a simple loop that will run them in parallel.
for host in `cat hostfilelist`; do
    rsh $host ksh script.ksh &
done
The problem is there are about 2k servers and I am hitting a limit of 512 (assuming the port range is 512-1023 for RSH, based on documents I have read).
How can i get around this?
With your code you would not only run into the "secure" port limitation of rsh, but you may also hit a file descriptor limit (check with ulimit -n); each network connection consumes a file descriptor as well.
What you are doing with your code is to go through the hostfilelist and for each host run an rsh command which is put in the background (on the source server) with the ampersand. Each of these connections is kept open in the background until the script on the remote host finishes.
You are much better off in this situation to put the execution of the script in the background on each remote host, so your rsh command comes back immediately after starting the remote job and thus frees up the network connection (and port) again. To do so, rewrite the second line in your code to
rsh $host "ksh script.ksh &"
However, you may still run into issues with port reuse (see TIME_WAIT status on netstat output) if things happen too fast.
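If that becomes an issue, a rough way to pace the loop (the batch size of 100 and the 5 second pause are arbitrary, not part of the answer above):
# pace the loop: a short pause every 100 hosts keeps the connection churn down
n=0
for host in `cat hostfilelist`; do
    rsh $host "ksh script.ksh &"
    n=$((n + 1))
    [ $((n % 100)) -eq 0 ] && sleep 5
done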
And I'd strongly recommend letting go of rsh and using ssh instead.
