I'm trying to get a script to run at startup, but have it do nothing if I've connected to my Raspberry Pi via SSH.
So far I've got a crontab entry that automatically runs the script checkssh.sh via @reboot sleep 30 && sudo bash ./checkssh.sh, and checkssh.sh contains this:
#!/bin/bash
if [ -n "$SSH_CLIENT" ] || [ -n "$SSH_TTY" ]; then
    echo "SSH CONNECTED"
else
    ./autobackup.sh
fi
Running checkssh.sh from an SSH terminal prints 'SSH CONNECTED', which is expected, and letting it run automatically from the crontab at reboot when SSH isn't connected works correctly. However, when it runs at boot and I connect via SSH as soon as it's available, it still runs autobackup.sh. I'm not sure where this is going wrong.
I need it to run automatically and, if there's no SSH connection, run autobackup.sh, but if there is an SSH connection, not run anything. The device I use for the SSH connection may vary and the network used may also vary, so a script that relies on specific IPs isn't ideal.
Thanks for any help :)
Those environment variables (SSH_CLIENT and SSH_TTY) are only set in the environment of an SSH session. You cannot check them from another process and expect them to fulfill your goals here.
Instead, run the program finger. This is the standard way to see who is logged in.
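A minimal sketch of how such a check could look (this is my own variant, not the answer as written: it tests for logged-in users on pts/ pseudo-terminals, which is where SSH sessions normally show up in who or finger output, and it assumes an absolute path for autobackup.sh):
#!/bin/bash
# Run the backup only if nobody is logged in over SSH.
# SSH sessions appear on pseudo-terminals (pts/*) in 'who' / 'finger' output.
if who | grep -q 'pts/'; then
    echo "SSH CONNECTED"
else
    /home/pi/autobackup.sh    # assumed location; adjust to your setup
fi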
Probably you need to add some delay before running your script, to allow the SSH service to come up. If the cron service comes up before sshd does, you will have a failure. Try:
@reboot sleep 60 && bash ./checkssh.sh
Also, I would substitute the '.' with the full script path. In one scenario I had to add as many as 120 seconds to get the @reboot crontab to work right, but ssh should not need that much. You can trim the 60 seconds according to your needs once you get it working.
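Putting both suggestions together, the crontab entry might look like this (a sketch: /home/pi/checkssh.sh and the log path are assumed locations, so adjust them to your setup):
# Wait for sshd to come up, run the script by its full path,
# and keep a log so failures can be inspected afterwards.
@reboot sleep 60 && bash /home/pi/checkssh.sh >> /home/pi/checkssh.log 2>&1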
I have a script that monitors a specific server, giving me the disk usage, CPU usage, etc. I am using 2 Ubuntu VMs: I run the script on the server using SSH (ssh user@ip < script.sh from the first VM), and I want to make it show values in real time, so I tried 2 approaches I found on here:
1. while loop with clear
The first approach is using a while loop with "clear" to make the script run multiple times, giving new values every time and clearing the previous output like so:
while true
do
    clear;
    # bunch of code
done
The problem here is that it doesn't clear the terminal, it just keeps printing the new results one after another.
2. watch
The second approach uses watch:
watch -n 1 script.sh
This works fine on the local machine (to monitor the current machine where the script is), but I can't find a way to make it run via SSH. Something like
ssh user@ip 'watch -n 1 script.sh'
works in principle, but requires that the script be present on the server, which I want to avoid. Is there any way to run watch for the remote execution (via SSH) of a script that is present on the local machine?
For your second approach (using watch), what you can do instead is to run watch locally (from within the first VM) with an SSH command and piped-in script like this:
watch -n 1 'ssh user@ip < script.sh'
The drawback of this is that it will reconnect in each watch iteration (i.e., once a second), which some server configurations might not allow. See here for how to let SSH re-use the same connection for serial ssh runs.
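As an illustration of that connection re-use (a sketch using standard OpenSSH ControlMaster options; the socket path under ~/.ssh is just an example), the watch command above could become:
# Re-use one SSH connection across watch iterations instead of
# reconnecting once a second (the control socket lives under ~/.ssh here).
watch -n 1 "ssh -o ControlMaster=auto -o ControlPath=$HOME/.ssh/cm-%r@%h:%p -o ControlPersist=10m user@ip < script.sh"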
But if what you want to do is to monitor servers, what I really recommend is to use a monitoring system like 'telegraf'.
I have this simple script that checks the connection of a specific interface, with one condition:
if the interface exists, do nothing; if not, dial the VPN.
#!/bin/bash
FOUND=`grep "tap0" /proc/net/dev`
if [ -n "$FOUND" ] ; then
    echo yes
else
    openvpn /home/pi/vpn/data/bezpasaka.opvn > /home/pi/vpn/errors.txt
fi
When I execute it manually, it works.
I also needed to add it to the crontab, so:
*/1 * * * * /home/pi/vpn/checkvpn.sh
It's supposed to work; I tried a different bash script with only mkdir in it and that worked fine.
Cron executes checkvpn.sh; I can see that in the process list using:
ps ax | grep 'checkvpn.sh'
The problem, I think, is that after cron executes checkvpn.sh it dials the VPN and runs all the way until the whole connection is cancelled, probably because it can't hold the session. At least that's what I thought after checking the errors.txt file.
Do you have any idea for a solution?
openvpn /home/pi/vpn/data/bezpasaka.opvn > /home/pi/vpn/errors.txt
Where is openvpn located? When cron executes a job it does not normally load your PATH variables and hence does not know how to find openvpn. You need to give the full path to openvpn, e.g. /usr/bin/openvpn, for this to work.
Also, please check whether openvpn requires a tty or not.
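A sketch of checkvpn.sh with that change applied (the /usr/bin/openvpn path follows the advice above but may differ on your system, so verify it with 'which openvpn'; the --daemon flag is an extra suggestion of mine that detaches openvpn from the short-lived cron session instead of blocking in the foreground):
#!/bin/bash
FOUND=$(grep "tap0" /proc/net/dev)
if [ -n "$FOUND" ]; then
    echo yes
else
    # Full path so cron can find openvpn without a login shell's PATH.
    # --daemon detaches openvpn from the cron job; capture stderr too.
    /usr/bin/openvpn --daemon --config /home/pi/vpn/data/bezpasaka.opvn > /home/pi/vpn/errors.txt 2>&1
fi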
My intention is to cycle through my list of IPs and power each system off if the ping succeeds first. However, the systems seem to hang. After running this script I can't ping the systems anymore, but they aren't powered off. If I run ssh 192.168.1.ip "sudo poweroff" through the terminal I don't encounter this issue. Any advice?
for ((ip=40, cnt=0; ip<=max; ip++, cnt++))
do
    if ping -c 1 192.168.1.$ip &> /dev/null
    then
        printf "\n${array[$cnt]}: Ping Successful"
        ssh 192.168.1.$ip "sudo poweroff"
        printf "\n${array[$cnt]}: Power Down Executed\n"
        sleep 1
    else
        printf "\n${array[$cnt]}: Ping Failed\n"
    fi
done
After running a single ssh 192.168.1.40 "sudo poweroff", the system is properly powered off. When running my script, I am unable to ping the systems, but I can visually see that the fans and LEDs are still on. I think I should use a KVM to take a closer look, since ssh doesn't allow a connection after this script is run. Still, at first glance I don't understand how running ssh 192.168.1.40 "sudo poweroff" directly and running it through my script really makes a difference. Anyway, I'll try to add more information tomorrow.
Try ssh 192.168.1.$ip "(sleep 5; sudo poweroff)&" to put the process in the background on the remote host and sleep for 5 seconds before powering off, giving the script time to complete and exit the remote host before it goes down... – David C. Rankin
This resolved my issue.
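For reference, here is how the loop looks with that comment's fix applied (a sketch; max and array are assumed to be defined earlier, as in the original script):
for ((ip=40, cnt=0; ip<=max; ip++, cnt++))
do
    if ping -c 1 192.168.1.$ip &> /dev/null
    then
        printf "\n${array[$cnt]}: Ping Successful"
        # Background the poweroff on the remote host with a short delay so
        # the ssh command can return before the machine goes down.
        ssh 192.168.1.$ip "(sleep 5; sudo poweroff) &"
        printf "\n${array[$cnt]}: Power Down Executed\n"
        sleep 1
    else
        printf "\n${array[$cnt]}: Ping Failed\n"
    fi
done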
I would like to make a shutdown script for my Raspberry Pi that shuts down another Raspberry Pi over ssh.
The script works when run by itself, but during the shutdown routine the ssh command is not executed.
This is what I have done so far:
Made the script in /etc/init.d:
#!/bin/sh
# the first thing is to test if the shutdown script is working
echo "bla bla bla " | sudo tee -a /test.txt
ssh pi@10.0.0.98 sudo shutdown -h now
Made it executable
sudo chmod +x /etc/init.d/raspi.sh
Made a symlink to the rc0.d
sudo ln -s /etc/init.d/raspi.sh /etc/rc0.d/S01raspi.sh
Now I know that the shutdown script works outside of the shutdown routine when called by itself, and the symlink I made is also at least partially working, because I see the changes in test.txt every time I shut down.
Can anyone help me how to solve my problem?
Have you tried with single quotes?
The first link in Google has it
http://malcontentcomics.com/systemsboy/2006/07/send-remote-commands-via-ssh.html
What about sudo? How do you solve entering the password?
https://superuser.com/questions/117870/ssh-execute-sudo-command
Please check this or other links on the web that have useful information.
I would have sent all this in a comment, but I can't yet because of reputation.
I have now got the script running by myself. I do not really know why it is working now, but I'll write it down below and maybe someone else can clarify it.
I don't think the first two changes to my system make a difference, but I'll write them down anyway. In the meantime, because I had not managed to get the script working, I made a button to shut down the system manually. I also made a script which backs up the MySQL database (which is on the Raspberry Pi I would like to switch off with the script) and copies the backup to the Raspberry Pi that should switch off the other one automatically via the shutdown script. This happens with scp, and an SSH key was generated so no password is needed.
I have also changed my script to get a log message out of it:
#!/bin/sh
ssh -t -t pi@10.0.0.99 'sudo shutdown -h now' >> /home/osmc/shutdown.log 2>&1
To get it into the shutdown routine I used:
sudo update-rc.d raspi-b stop 01 0
I hope somebody can tell me why my code worked on the first day but not on the next few days until now.
I structured a command to suspend or shut down a remote host over ssh, which you may find useful. It can be used to suspend / shut down a remote computer without an interactive session, and without keeping a terminal busy. You will need to give the remote user permission to shut down / suspend using sudo without a password. Additionally, the local and remote machines should be set up for SSH without an interactive login. The command is most useful for suspending the machine, as a suspended machine will not disconnect the terminal.
local_user@hostname:~$ ssh remote_user@remote_host "screen -d -m sudo pm-suspend"
source: कार्यशाला (Kāryaśālā)
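For the passwordless-sudo part, a sketch of the rule you would add with visudo on the remote machine (the command paths are assumptions; verify them with 'which pm-suspend' and 'which shutdown' there):
# Allow remote_user to suspend or shut down without being asked for a password
remote_user ALL=(root) NOPASSWD: /usr/sbin/pm-suspend, /sbin/shutdown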
I am using SSH to start a background process on a remote server. This is what I have at the moment:
ssh remote_user@server.com "nohup process &"
This works, in that the process does start. But the SSH session itself does not end until I hit Ctrl-C.
When I hit Ctrl-C, the remote process continues to run in the background.
I would like to place the ssh command in a script that I can run locally, so I would like the ssh session to exit automatically once the remote process has started.
Is there a way to make this happen?
The "-f" option to ssh tells ssh to run the remote command in the background and to return immediately. E.g.,
ssh -f user@host "echo foo; sleep 5; echo bar"
If you type the above, you will get your shell prompt back immediately; you will then see "foo" output, and five seconds later "bar". In the meantime, you could have been using the shell.
When using nohup, make sure you also redirect stdin, stdout and stderr:
ssh user@server 'DISPLAY=:0 nohup xeyes < /dev/null > std.out 2> std.err &'
In this way you will be completely detached from the remote process. Be careful with using ssh -f user@host..., since that will only put the ssh process in the background on the calling side. You can verify this by running ps aux | grep ssh on the calling machine: it will show you that the ssh call is still active, just put in the background.
In my example above I use DISPLAY=:0 since xeyes is an X11 program and I want it started on the remote machine.
You could use screen to run your process, detach from the screen with Ctrl-a :detach, and exit your current session without a problem. Then you can reconnect over SSH and attach to the screen again to continue with your task or check whether it is finished.
Or you can send the command to an already running screen. Your local script should look like this:
ssh remote_user@server.com
screen -dmS new_screen sh
screen -S new_screen -p 0 -X stuff $'nohup process \n'
exit
For more info, see this tutorial.
Well, this question is almost 10 years old, but I recently had to launch a very long script (taking several hours to complete) on a remote server, and I found a way using the crontab.
If you can edit your user's crontab on the remote server, connect to the server with ssh, edit the crontab, and add an entry that will start your script the next minute. Let's say it's 15:03. Add this line:
4 15 * * * /path/to/your/script.sh
Save your crontab and wait a minute for the script to be launched. Then edit your crontab again to remove this entry.
You can then safely exit ssh, even shut down your computer while the script is running.
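Since nothing will be attached to a terminal when the job fires, it can also help to redirect the script's output to a log file you can check afterwards (a sketch; the log path is just an example):
4 15 * * * /path/to/your/script.sh >> /tmp/script.log 2>&1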