Best way to communicate between RPi and Ubuntu Server [closed] - linux

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 4 years ago.
So I'm currently working on an automatic CO/smoke detector that uses a RasPi as its controller. I want it to send out SMS alerts when a detection event happens. I would like it to work like this:
Sensor(s) > RPi > Main Server > SMS via sendmail
I am wondering how to keep a stable link between the RPi and my server. I was thinking about using SSH and ping along with Supervisord to keep everything working; however, I also want to be able to test whether the script is working remotely with a command from the main server.
Do SSH and ping seem like a good way of going about this? If so, how can I ensure that the script on the RPi hasn't crashed, even if the device is responding to pings?
Thanks
Edit: It doesn't need to be real-time communication; I was thinking of having it checked every 10 seconds.

I would suggest using something like this as the script on your Raspberry Pi:
#!/usr/bin/env bash
SVIP=192.168.0.9 # Server IP address
PORT=30000

################################################################################
# This function is called whenever the script exits or dies
################################################################################
function cleanup(){
    echo "DEBUG: EXIT code running..."
    # Kill the alive function we started in the background
    kill -9 "$alivePID"
}
trap cleanup EXIT

################################################################################
# This function runs continuously in the background answering "pings"
################################################################################
function alive(){
    echo "DEBUG: Listening for pings on port $PORT"
    while :; do
        echo "$RANDOM" | nc -l "$PORT"   # some netcat variants need: nc -l -p "$PORT"
        echo "DEBUG: Ouch."
    done
}

################################################################################
# main function of script
################################################################################
# Start the ping responder in the background and note its PID so we can kill it
alive &
alivePID=$!

# Do whatever you want to do here - e.g. read temperatures
sleep 20
Start reading about four lines from the bottom. The script starts a function called alive() in the background and records its PID so it can be stopped on exit. The alive() function listens for "pings" using netcat and replies to each one with a random number.
If the script is stopped, the EXIT trap kills the alive() function, so it will no longer reply to pings.
On your server, you can then "ping" the Raspberry Pi by typing this command (replace RASPI_IP with the IP address of your Raspberry Pi):
echo "PING" | nc <RASPI_IP> 30000 ; echo $?
You will get a random number and an exit code of 0 if the service is running on the Raspberry Pi, whereas you will get no reply and an exit code of 1 if it is not.
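On the server side, that check can be automated. Below is a minimal sketch of a watchdog that could run from cron every few seconds; it assumes the listener script above is running on the Pi, and uses bash's built-in /dev/tcp pseudo-device instead of nc so it has no extra dependency. RASPI_IP and PORT are placeholder values, and the echo stands in for your sendmail alert.

```shell
#!/usr/bin/env bash
# Hypothetical server-side watchdog for the Pi's listener script.
# RASPI_IP and PORT are placeholders - adjust them to your setup.
RASPI_IP=192.168.0.42
PORT=30000

pi_alive() {
    # Succeeds only if something on host $1 accepts a TCP connection on port $2.
    # Uses bash's /dev/tcp pseudo-device; timeout caps the wait at 3 seconds.
    timeout 3 bash -c "echo PING > /dev/tcp/$1/$2" 2>/dev/null
}

if ! pi_alive "$RASPI_IP" "$PORT"; then
    # In the real setup this is where the sendmail alert would go.
    echo "ALERT: detector script on $RASPI_IP is not answering"
fi
```

Because the listener dies together with the detector script (via the EXIT trap), a failed connection here means the script itself is down, not just the network.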

Related

A bash shell program to wait for all servers in a list to start, then run startup scripts

I'm trying to automate the startup (after reboot) of an application that runs on a grid of 12 Linux 7 servers.
The 12 servers get rebooted in random order.
All the servers need to be running before I can start up the application that resides on the 12 servers.
What I'd like to do is to test that all 12 Linux 7 servers are up and then when all 12 are up, I'd proceed with the startup sequence of commands.
All 12 server are set up with ssh keys.
cat serverlist.txt
10.0.0.6
10.0.0.7
10.0.0.8
10.0.0.9
...
10.0.0.18
I want to ping a server and then wait until the ping is successful, then move to the next IP address.
My apologies for my question.
How to code this?
Read in the first line from IP list file.
Ping first IP until success, then ping the other IP addresses (one at a time) until the success of all 12.
Then, run commands to start the application on the grid of 12 servers.
Question: How to code this in the bash shell.
The inner loop can be as simple as
while ! ssh "${connection_string}" -o ConnectTimeout=5 true
do
sleep 0.5
done
This runs a trivial command, and waits for 0.5 seconds between retries.
while read -r i
do
    until ssh "${i}" -o ConnectTimeout=5 true
    do
        sleep 0.5
    done
done < /home/Startup/serverlist.txt
If ssh succeeds, the server is alive, and the connect timeout keeps the script from hanging on unreachable hosts. When the script finishes, all the servers are up and responding.
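Putting the pieces together, the whole thing might be sketched as follows. The probe command is passed in as a parameter so the ssh check can be swapped out (for testing, say); serverlist.txt and start_grid.sh are placeholder names based on the question.

```shell
#!/usr/bin/env bash
# Sketch: block until every host in a list answers, then start the application.
# The probe is parameterised; ssh_probe is the per-host check from above.

wait_for_all() {
    local probe=$1 list=$2
    while read -r host; do
        [ -z "$host" ] && continue          # skip blank lines
        until "$probe" "$host"; do
            sleep 0.5
        done
        echo "$host is up"
    done < "$list"
}

ssh_probe() {
    ssh "$1" -o ConnectTimeout=5 true 2>/dev/null
}

# Usage (placeholder paths):
# wait_for_all ssh_probe /home/Startup/serverlist.txt && /path/to/start_grid.sh
```

Since wait_for_all only returns once every host has answered, chaining the startup script with && gives the "wait, then start" behaviour asked for.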

Linux script for probing ssh connection in a loop and start log command after connect

I have a host machine that gets rebooted or reconnected quite a few times.
I want to have a script running on my dev machine that continuously tries to log into that machine and, if successful, runs a specific command (tailing the log data).
Edit: To clarify, the connection needs to stay open. The log command keeps tailing until I stop it manually.
What I have so far
#!/bin/bash
IP=192.168.178.1
if (( "$#" >= 1 ))
then
    IP=$1
fi

LOOP=1
trap 'echo "stopping"; LOOP=0' INT

while (( LOOP == 1 ))
do
    if ping -c1 "$IP"
    then
        echo "Host $IP reached"
        sshpass -p 'password' ssh -o ConnectTimeout=10 -q user@"$IP" '<command would go here>'
    else
        echo "Host $IP unreachable"
    fi
    sleep 1
done
The LOOP flag is not really used. The script is ended via CTRL-C.
Now this works if I do NOT add a command to be executed after the ssh and instead start the log output manually. On a disconnect the script keeps probing the connection and logs back in once the host is available again.
Also when I disconnect from the host (CTRL-D) the script will log right back into the host if CTRL-C is not pressed fast enough.
When I add a command to be executed after ssh, the loop is broken: pressing CTRL-C not only stops the log but also disconnects and ends the script on the dev machine.
I guess I have to spawn another shell somewhere or something like that?
1) I want the script to keep probing, log in and run a command completely automatically and fall back to probing when the connection breaks.
2) I want to be able to stop the log on the host (CTRL-C) and thereby fall back to a logged in ssh connection to use it manually.
How do I fix this?
Maybe the best approach to "fixing" this would be to fix the requirements.
The problematic part is number "2)".
The problem stems from how SIGINT works.
When triggered, it is sent to the foreground process group of your terminal. Mostly this is the shell and any process started from there. With more modern shells (you seem to use bash), the shell manages process groups such that programs started in the background are disconnected (by having been assigned a different process group).
In your case the ssh is started in the foreground (from a script executed in the foreground), so it will receive the interrupt, forward it to the remote side, and terminate as soon as the remote end terminates. As by that time the script's shell has processed its own signal handler (specified by trap), it will exit the loop and terminate itself.
So, as you can see, you have overloaded CTRL-C to mean two things:
terminate the monitoring script
terminate the remote command and continue with whatever is specified for the remote side.
You might get closer to what you want if you drop the first effect (or at least make it more explicit). A step in that direction would be to call a script on the remote side that does not terminate itself, only the tail command. In that case you will likely need the -t switch on ssh to get a terminal allocated, so that normal shell operation is possible afterwards.
This will not allow you to terminate the remote side with just CTRL-C; you will always need to exit the remote shell that is run afterwards.
The essence of such a remote script might look like:
tail command
shell
of course you would need to add whatever parts are necessary for your shell or coding style.
An alternative approach would be to keep the current remote command (which gets terminated) and add another ssh call, for the interrupted case, that spawns the shell for interactive use. But in that case CTRL-C will not be available for terminating the monitoring altogether, either.
To achieve this you might try changing the active interrupt handler within your monitoring script to trigger termination as soon as the remote side returns. However, this causes a race condition between the user recognizing that the remote command has terminated (and control has returned to the local script) and the proper interrupt handler being in place. You might be able to lower that risk sufficiently by first activating the new trap handler and then echoing the fact, and maybe adding a sleep to give the user time to react.
Not really sure what you are saying.
Also, you should disable PasswordAuthentication in /etc/ssh/sshd_config and log in by adding the public key of your home computer to ~/.ssh/authorized_keys
#!/bin/sh
while true
do
    RESPONSE=$(ssh -i /home/user/.ssh/id_host user@"$IP" 'tail /home/user/log.txt')
    echo "$RESPONSE"
    sleep 10
done

Bash script - How to run ssh after another one is connected

I don't have powerful hardware, so I can't run multiple ssh tunnels at the same time without the CPU load going way too high. My goal is to run an ssh tunnel only after the previous one is connected, and to reconnect if one of my ssh sessions gets disconnected. So basically it's like this:
while true; do
if (1st ssh isn't connected); then
connect the first ssh
elif (1st ssh is finally connected); then
run the second ssh
elif (2nd ssh is finally connected); then
run the 3rd ssh
fi
sleep 1
done
The problem is that the number of ssh tunnels keeps changing: sometimes a user wants to run 3 ssh tunnels and sometimes 5. The script is run like this:
mytunnel.sh -a [number of tunnels they wanna run]
I'm thinking of a for loop, but I just can't figure out how to write it. Please help me.
Here is a for loop you can use:
#!/usr/local/bin/bash
LOOP=$1
for (( c=1; c<=LOOP; c++ ))
do
    echo "$c "
done
Replace echo with your commands and LOOP with whatever command-line arg you'll be using. This example reads command-line arg 1 (i.e. $1).
Tricky. Unfortunately I don't think ssh returns anything when it connects a tunnel, nor does it exit immediately when the connection is broken.
Instead what you probably want to do is make a port monitor that periodically checks that the port is accepting connections and spawns a new ssh tunnel (possibly killing the old ssh process) if it isn't.
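A hedged sketch of that port-monitor idea, assuming a local forward (-L): it uses iproute2's ss to check whether the tunnel's local port is still in LISTEN state, and respawns the tunnel if not. The port number and the ssh command line are placeholders, and monitor_tunnel is a made-up helper name.

```shell
#!/usr/bin/env bash
# Sketch of a port monitor for an ssh -L tunnel (Linux, needs iproute2's ss).
# port_listening succeeds if something is listening on local TCP port $1.

port_listening() {
    ss -ltn "( sport = :$1 )" | grep -q LISTEN
}

monitor_tunnel() {
    local port=$1; shift
    while true; do
        if ! port_listening "$port"; then
            pkill -f "ssh.*-L ${port}:" 2>/dev/null   # kill a stale tunnel, if any
            "$@" &                                    # respawn: tunnel command passed as arguments
        fi
        sleep 5
    done
}

# Usage (placeholder host/ports):
# monitor_tunnel 8080 ssh -N -L 8080:localhost:80 user@gateway
```

For the "start the next tunnel only after the previous one is up" requirement, the same port_listening check can gate each iteration of the question's for loop before launching the next tunnel.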

Write a bash script to restart a daemon [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I thought I could just use this related question: How do I write a bash script to restart a process if it dies? @lhunath had a great answer and told me that everything I might do about it was wrong, but I'm restarting a daemon process, and I'm hoping there's something I can do in a single script that works.
My process starts with a kick-off script that shows the startup log, but then quits and leaves the process running off the shell:
>sudo ./start
R CMD Rserve --RS-conf /var/FastRWeb/code/rserve.conf --vanilla --no-save
...
Loading required package: FastRWeb
FastRWeb: TRUE
Loading data...
Rserv started in daemon mode.
>
The process is up and running,
ps -ale | grep Rserve
1 S 33 16534 1 0 80 0 - 60022 poll_s ? 00:00:00 Rserve
Is there a simple way to wrap or call the 'start' script from bash and restart when the process dies or is this a case where PID files are actually called for?
Dang - question got closed even after pointing to a very similar question that was not closed on stackoverflow. you guys suck
A very simple way to monitor the program is to use cron: check every minute (or so) if the program is still alive, and ./start it otherwise.
As root, invoke crontab -e.
Append a line like this:
* * * * * if ! pidof Rserve >/dev/null 2>&1; then /full/path/to/start; fi
This method will stay persistent, i.e., it will be executed after a reboot etc. If this is not what you want, move it to a shell script:
#!/bin/bash
# monitor.sh
while true; do
    if ! pidof Rserve >/dev/null 2>&1; then /full/path/to/start; fi
    sleep 10
done
This script has to be started manually from the command line, and can be easily stopped with Ctrl-C.
The easiest solution, if you can run the process in NON-daemon mode, is to wrap it in a script.
#!/bin/bash
while true
do
    xmessage "This is your process. Click OK to kill and respawn"
done
Edit
Many daemons leave a lock file, usually in /var/lock, that contains their PID. This keeps multiple copies of the daemon from running.
Under Linux, it is fairly simple to look through /proc and see if that process is still around.
Under other platforms you may need to play games with ps to check for the process's existence.
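A small sketch of that Linux /proc check: a process with PID N is still around exactly when /proc/N exists (is_running is a made-up helper name).

```shell
#!/usr/bin/env bash
# Check process liveness via /proc (Linux-only).

is_running() {
    [ -d "/proc/$1" ]
}

sleep 30 &
pid=$!
is_running "$pid" && echo "still running"
kill "$pid"
wait "$pid" 2>/dev/null || true   # reap it so /proc/$pid disappears
is_running "$pid" || echo "gone"
```

Note that a dead-but-unreaped (zombie) child still has a /proc entry until its parent calls wait, which is why the sketch reaps it before re-checking.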

How to know from a bash script if the user abruptly closes ssh session

I have a bash script that acts as the default shell for a user logging in through ssh.
It provides a menu with several options, one of which is sending a file using netcat.
The netcat of the embedded linux I'm using lacks the -w option, so if the user closes the ssh connection without ever sending the file, the netcat command waits forever.
I need to know if the user abruptly closes the connection so the script can kill the netcat command and exit gracefully.
Things I've tried so far:
Trapping SIGHUP: it is not issued. The only signal I could find being issued is SIGCONT, but I don't think that's reliable or portable.
Playing with the -t option of the read command to detect a closed stdin: this would work if not for a silly bug in the embedded read command (it only times out on the first invocation).
Edit:
I'll try to answer the questions in the comments and explain the situation further.
The code I have is:
nc -l -p 7576 > /dev/null 2>> $LOGFILE < $TMP_DIR/$BACKUP_FILE &
wait
I'm ignoring SIGINT and SIGTSTP, but I've tried to trap all the signals and the only one received is SIGCONT.
Reading the bash man page I've found out that the SIGHUP should be sent to both script and netcat and that the SIGCONT is sent to stopped jobs to ensure they receive the SIGHUP.
I guess the wait makes the script count as stopped and so it receives the SIGCONT but at the same time the wait somehow eats up the SIGHUP.
So I've tried changing the wait for a sleep and then both SIGHUP and SIGCONT are received.
The question is: why is the wait blocking the SIGHUP?
Edit 2: Solved
I solved it polling for a closed stdin with the read builtin using the -t option. To work around the bug in the read builtin I spawn it in a new bash (bash -c "read -t 3 dummy").
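That workaround can be wrapped in a small helper. This is a sketch rather than the asker's exact code: stdin_closed is a hypothetical name, and the exit-status ranges are standard bash behaviour (read returns 0 when a line arrives, greater than 128 on timeout, and a small non-zero status on EOF, which is what a hung-up ssh session looks like).

```shell
#!/usr/bin/env bash
# Probe stdin with read -t in a fresh bash, sidestepping the embedded
# read builtin that only times out on its first invocation.

stdin_closed() {
    bash -c 'read -t 3 dummy'
    local rc=$?
    # rc == 0  : a line arrived, stdin is open
    # rc > 128 : timeout, stdin is open but idle
    # otherwise: EOF - the other end hung up
    [ "$rc" -ne 0 ] && [ "$rc" -le 128 ]
}

# Hypothetical use in the menu script, with netcat in the background:
# nc -l -p 7576 > /dev/null &
# ncPID=$!
# until stdin_closed; do :; done
# kill "$ncPID"   # user disconnected: stop the stalled netcat
```

Spawning a fresh bash per probe is slightly wasteful, but it guarantees the buggy builtin's state is reset on every call.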
Does the parent PID change? If so, you could look up the parent in the process list and make sure the process name is correct.
I have written similar applications. It would be helpful to have more of the code in your shell. I think there may be a way of writing your overall program differently which would address this issue.
