Bash poweroff script hangs system - linux

My intention is to cycle through my list of IPs and power off each system if a ping to it succeeds first. However, the systems seem to hang. After running this script I can't ping the systems anymore, but they aren't powered off either. If I run ssh 192.168.1.ip "sudo poweroff" from a terminal I don't encounter this issue. Any advice?
for ((ip=40, cnt=0; ip<=max; ip++, cnt++))
do
    if ping -c 1 192.168.1.$ip &> /dev/null
    then
        printf "\n${array[$cnt]}: Ping Successful"
        ssh 192.168.1.$ip "sudo poweroff"
        printf "\n${array[$cnt]}: Power Down Executed\n"
        sleep 1
    else
        printf "\n${array[$cnt]}: Ping Failed\n"
    fi
done
After running a single ssh 192.168.1.40 "sudo poweroff" the system is properly powered off. When running my script, I am unable to ping the systems, yet I can visually see the fans and LEDs are still on. I think I should use a KVM to take a closer look, since ssh doesn't allow a connection after this script is run. Still, at first glance I don't understand how running ssh 192.168.1.40 "sudo poweroff" by itself and running it through my script really makes a difference. Anyway, I'll try to add more information tomorrow.

Try ssh 192.168.1.$ip "(sleep 5; sudo poweroff)&" to put the process in the background on the remote host and sleep for 5 seconds before powering off, giving the script time to complete and exit the remote host before it goes down... – David C. Rankin
This resolved my issue.
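For reference, here is a sketch of the loop with that comment applied; it assumes max and array are defined earlier in the script, as in the original:

for ((ip=40, cnt=0; ip<=max; ip++, cnt++))
do
    if ping -c 1 192.168.1.$ip &> /dev/null
    then
        printf "\n${array[$cnt]}: Ping Successful"
        # Background the delayed poweroff on the remote host so the ssh
        # command can return before the remote system actually goes down.
        ssh 192.168.1.$ip "(sleep 5; sudo poweroff) &"
        printf "\n${array[$cnt]}: Power Down Executed\n"
        sleep 1
    else
        printf "\n${array[$cnt]}: Ping Failed\n"
    fi
done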

Related

Run a script if no SSH connection

I'm trying to get a script to run at startup, but do nothing if I've connected to my Raspberry Pi via SSH.
So far I've got the crontab to automatically run the script checkssh.sh via @reboot sleep 30 && sudo bash ./checkssh.sh, and './checkssh.sh' contains this:
#!/bin/bash
if [ -n "$SSH_CLIENT" ] || [ -n "$SSH_TTY" ]; then
    echo "SSH CONNECTED"
else
    ./autobackup.sh
fi
Running checkssh.sh from an SSH terminal returns 'SSH CONNECTED' which is expected, and letting it run automatically from the crontab at reboot when SSH isn't connected works correctly. However, when it runs at boot and I connect via SSH as soon as it's available, it still runs the script. I'm not sure where this is going wrong.
I need it to run automatically and, if there's no SSH connection, run autobackup.sh, but if there is an SSH connection, not run anything. The device I use for the SSH connection may vary and the network used may also vary, so a script that relies on specific IPs isn't ideal.
Thanks for any help :)
Those environment variables (SSH_CLIENT and SSH_TTY) are only set in the environment of an SSH session. You cannot check them from another process and expect them to fulfill your goals here.
Instead, run the program finger. This is the standard way to see who is logged in.
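A minimal sketch of what checkssh.sh could look like with that approach; it uses who rather than finger, since its output is easier to test from a script, and it assumes autobackup.sh sits next to checkssh.sh:

#!/bin/bash
# SSH logins show up on pseudo-terminals (pts); local console logins use tty.
if who | grep -q 'pts/'; then
    echo "SSH CONNECTED"
else
    ./autobackup.sh
fi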
You probably need to add some delay before running your script to allow the SSH service to come up. If the cron service comes up before sshd does, you will have a failure. Try:
@reboot sleep 60 && bash ./checkssh.sh
Also, I would substitute the '.' with the full script path. In one scenario I had to add as many as 120 seconds to get the @reboot crontab to work right, but ssh should not need that much. You can trim the 60 seconds according to your needs after you get it working.
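For example, with an assumed location of /home/pi for the scripts, the crontab entry could read:

@reboot sleep 60 && bash /home/pi/checkssh.sh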

Wifi disconnected before init.d script is run

I've set up a simple init.d script "S3logrotate" to run on shutdown. The script works fine when run manually from the command line, but it does not function correctly on shutdown.
The script uploads logs from my PC to an Amazon S3 bucket and requires wifi to run correctly.
Debugging proved that the script is actually run but the upload process fails.
I found that the problem seems to be that the script runs after the wifi connection is terminated.
These are the blocks I used to test my internet connection in the script.
if ping -q -c 1 -W 1 8.8.8.8 >/dev/null; then
    echo "IPv4 is up" >> x.txt
else
    echo "IPv4 is down" >> x.txt
fi

if ping -q -c 1 -W 1 google.com >/dev/null; then
    echo "The network is up" >> x.txt
else
    echo "The network is down" >> x.txt
fi
The output for this block is:
IPv4 is down
The network is down
Is there any way to set the priority of an init.d script? As in, can I make my script run before the network connection is terminated? If not, is there any alternative to init.d?
I use Ubuntu 16.04 and have dual booted with Windows 10 if that's significant.
Thanks,
sganesan7
You should place your script in:
/etc/NetworkManager/dispatcher.d/pre-down.d
and change its group and owner to root:
chown root:root S3logrotate
and it should work. If you need to do this for a specific interface, create a script inside
/etc/NetworkManager/dispatcher.d/
and name it (for example):
wlan0-down
and that should work too.
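For what it's worth, NetworkManager passes the interface name and the event to dispatcher scripts as arguments, so a pre-down hook for this case might look roughly like the sketch below; the upload command is a placeholder for whatever S3logrotate actually runs:

#!/bin/sh
# Sketch for /etc/NetworkManager/dispatcher.d/pre-down.d/S3logrotate
# $1 = interface (e.g. wlan0), $2 = event (e.g. pre-down)
IFACE="$1"
EVENT="$2"

if [ "$EVENT" = "pre-down" ]; then
    # The connection is still up at this point, so the upload can succeed.
    /usr/local/bin/s3-upload-logs.sh    # placeholder for the real upload step
fi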

Bash script - How to run ssh after another one is connected

I don't have powerful hardware, so I can't start multiple ssh tunnels at the same time or the CPU load goes way too high. My goal is to start each ssh tunnel only after the previous one is connected, and to reconnect if one of them gets disconnected. So basically it's like this:
while true; do
    if (1st ssh isn't connected); then
        connect the first ssh
    elif (1st ssh is finally connected); then
        run the second ssh
    elif (2nd ssh is finally connected); then
        run the 3rd ssh
    fi
    sleep 1
done
The problem is that the number of ssh tunnels keeps changing: sometimes a user wants to run 3 tunnels and sometimes 5. The script is invoked like this:
mytunnel.sh -a [number of tunnels they want to run]
I'm thinking of a for loop, but I just can't figure out how to write this as one. Please help me.
Here is a for loop you can use:
#!/usr/local/bin/bash
LOOP=$1
for (( c=1; c<=$LOOP; c++ ))
do
    echo "$c "
done
Replace echo with your commands and LOOP with whatever command-line arg you'll be using. This example reads command-line arg 1 (i.e. $1).
Example execution:
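Assuming the script is saved as mytunnel.sh and made executable (note that it reads the count directly from $1 rather than from an -a flag), asking for 3 tunnels prints one counter value per iteration:

./mytunnel.sh 3
1
2
3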
Tricky. Unfortunately I don't think ssh returns anything when it connects a tunnel, nor does it exit immediately when the connection is broken.
Instead what you probably want to do is make a port monitor that periodically checks that the port is accepting connections and spawns a new ssh tunnel (possibly killing the old ssh process) if it isn't.
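A rough sketch of that idea, assuming each tunnel forwards a known local port; the port numbers, remote host, and the use of nc to probe the ports are all assumptions, and killing a stale ssh process is left out:

#!/bin/bash
# Keep N local-forward tunnels alive by probing their local ports.
NUM_TUNNELS=$1                  # e.g. ./mytunnel.sh 3
BASE_PORT=8000                  # assumed: tunnel i listens on port 8000+i
REMOTE=user@example.com         # assumed endpoint, key-based auth

while true; do
    for (( i=1; i<=NUM_TUNNELS; i++ )); do
        port=$(( BASE_PORT + i ))
        # If nothing is listening locally, (re)spawn that tunnel.
        if ! nc -z localhost "$port" 2>/dev/null; then
            ssh -f -N -L "$port:localhost:$port" "$REMOTE"
        fi
    done
    sleep 1
done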

Linux - shutdown-script with SSH

I would like to make a shutdown script for my Raspberry Pi that shuts down another Raspberry Pi over ssh.
The script works when run on its own, but during the shutdown routine the ssh command is not executed.
This is what I have done so far:
Made the script in /etc/init.d:
#!/bin/sh
# the first thing is to test if the shutdown script is working
echo "bla bla bla " | sudo tee -a /test.txt
ssh pi@10.0.0.98 sudo shutdown -h now
Made it executable
sudo chmod +x /etc/init.d/raspi.sh
Made a symlink in rc0.d
sudo ln -s /etc/init.d/raspi.sh /etc/rc0.d/S01raspi.sh
So far I know that the shutdown script works outside of the shutdown routine when called directly, and that the symlink I made is at least partially working, because I see the changes in the test.txt file every time I shut down.
Can anyone help me how to solve my problem?
Have you tried with single quotes?
The first link in Google has it
http://malcontentcomics.com/systemsboy/2006/07/send-remote-commands-via-ssh.html
What about the sudo: how do you handle entering the password?
https://superuser.com/questions/117870/ssh-execute-sudo-command
Please check these or other links on the web that have useful information.
I would have sent all this in a comment, but I can't yet because of reputation.
I have now got the script running by myself. I do not really know why it is working now, but I will write it down below and maybe someone else can clarify it.
I don't think the first two changes to my system make a difference, but I will note them anyway. In the meantime, because I had not managed to get the script working, I made a button to shut down the system manually. I also made a script which backs up the MySQL database (which is on the Raspberry Pi I would like to switch off with the script) and copies the backup to the Raspberry Pi that should switch off the other one automatically via the shutdown script. This is done with scp, and an SSH key is used in place of a password.
I have also changed my script so that it writes a log message:
#!/bin/sh
ssh -t -t pi@10.0.0.99 'sudo shutdown -h now' >> /home/osmc/shutdown.log 2>&1
To get it into the shutdown routine I used:
sudo update-rc.d raspi-b stop 01 0
I hope somebody can tell me why my code worked on the first day but not on the following days.
I put together a command to suspend or shut down a remote host over ssh, which you may find useful. It can be used to suspend or shut down a remote computer without an interactive session, and yet not keep a terminal busy. You will need to give the remote user permission to shut down or suspend using sudo without a password. Additionally, the local and remote machines should be set up to SSH without an interactive login. The command is more useful for suspending the machine, as a suspended machine will not disconnect the terminal.
local_user@hostname:~$ ssh remote_user@remote_host "screen -d -m sudo pm-suspend"
source: कार्यशाला (Kāryaśālā)
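Regarding the passwordless-sudo requirement mentioned above, a sudoers entry along these lines could work; the username and command paths are assumptions, and it should be edited with visudo:

# e.g. sudo visudo -f /etc/sudoers.d/remote-power
# Allow only the power commands to run without a password:
remote_user ALL=(ALL) NOPASSWD: /usr/sbin/pm-suspend, /sbin/shutdown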

SSH: guarding stdout against disconnect

My server deployment script triggers a long-running process through SSH, like so:
ssh host 'install.sh'
Since my internet connection at home is not the best, I can sometimes be disconnected while the install.sh is running. (This is easily simulated by closing the terminal window.) I would really like for the install.sh script to keep running in those cases, so that I don't end up with interrupted apt-get processes and similar nuisances.
The reason why install.sh gets killed seems to be that stdout and stderr are closed when the SSH session is yanked, so writing to them fails. (It's not an issue of SIGHUP, by the way -- using nohup makes no difference.) If I put touch ~/1 && echo this fails && touch ~/2 into install.sh, only ~/1 is created.
So running ssh host 'install.sh &> install.out' solves the problem, but then I lose any "live" progress and error output.
So my question is: What's an easy/idiomatic way to run a process through SSH so that it doesn't crash if SSH dies, but so that I can still see the output as it runs?
Solutions I have tried:
When I run things manually, I use screen for cases like this, but I don't think it will be of much help here because I need to run install.sh automatically from a shell script. Screen seems to be made for interactive use (it complains "Must be connected to a terminal.").
Using install.sh 2>&1 | tee install.out didn't help either (silly of me to think it might).
You can redirect stdout/stderr into install.out and then tail -f it. The following snippet actually works:
touch install.out && # so tail does not bark (race condition)
(install.sh < /dev/null &> install.out &
tail --pid "$!" -F install.out)
But surely there must be a less awkward way to do the same thing?
Try using screen:
screen ./install.sh
If your ssh session gets interrupted, you can simply reattach to the session via another ssh connection:
screen -x
You can provide a terminal to your ssh session using the -t switch:
ssh -t server screen ./install.sh
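For a scripted deployment it may also help to name the screen session so the reattach command is predictable; the session name deploy here is just an example:

ssh -t server screen -S deploy ./install.sh
# after a disconnect, detach the stale attachment (if any) and reattach:
ssh -t server screen -d -r deploy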
install.sh 2>&1 | tee install.out
if the only issue is not getting stderr. You didn't say exactly why the tee wasn't acceptable. You may need the other nohup/stdin tweaks.
