Linux - shutdown script with SSH

I would like to make a shutdown script for my Raspberry Pi that shuts down another Raspberry Pi over SSH.
The script works when I run it by itself, but during the shutdown routine the ssh command is not executed.
This is what I have done so far:
Made the script in /etc/init.d:
#!/bin/sh
# the first thing is to test if the shutdown script is working
echo "bla bla bla " | sudo tee -a /test.txt
ssh pi@10.0.0.98 sudo shutdown -h now
Made it executable
sudo chmod +x /etc/init.d/raspi.sh
Made a symlink to the rc0.d
sudo ln -s /etc/init.d/raspi.sh /etc/rc0.d/S01raspi.sh
Now I know that the shutdown script works outside of the shutdown routine when I call it myself, and the shutdown symlink I made is also partially working, because I see the changes in the test.txt file every time I shut down.
Can anyone help me solve this problem?

Have you tried single quotes around the remote command?
The first link on Google shows it:
http://malcontentcomics.com/systemsboy/2006/07/send-remote-commands-via-ssh.html
What about sudo: how do you avoid entering the password? See:
https://superuser.com/questions/117870/ssh-execute-sudo-command
Please check this or other links on the web that have useful information.
I would have sent all this in a comment, but I can't yet because of reputation.
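To make the linked advice concrete, the usual setup is key-based SSH plus a NOPASSWD sudoers rule on the remote Pi. A minimal sketch, assuming the remote user is pi and shutdown lives at /sbin/shutdown (adjust to your system):
# On the local Pi: create a key without a passphrase (for unattended use) and copy it over
ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519
ssh-copy-id pi@10.0.0.98
# On the remote Pi: let pi run shutdown without a password
echo 'pi ALL=(ALL) NOPASSWD: /sbin/shutdown' | sudo tee /etc/sudoers.d/010_pi-shutdown
sudo chmod 440 /etc/sudoers.d/010_pi-shutdown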

I have now got the script running by myself. I do not really know why it is now working, but I will write it down below and maybe someone else can clarify it.
I don't think the first two changes to my system make a difference, but I will write them down anyway. In the meantime, because I could not get the script to work, I had made a button to shut down the system manually. I also made a script which backs up the MySQL database (which is on the Raspberry Pi I would like to switch off) and copies the backup to the Raspberry Pi that should switch off the other one automatically via the shutdown script. This happens with scp, and a key is generated so no password is needed.
I have also changed my script to get a log-message out of the script.
#!/bin/sh
ssh -t -t pi@10.0.0.99 'sudo shutdown -h now' >> /home/osmc/shutdown.log 2>&1
To get it into the shutdown-routine I used:
sudo update-rc.d raspi-b stop 01 0
I hope somebody can tell me why my code worked on the first day but not on the following days.
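One possible explanation for the intermittent behaviour (an assumption, not something the poster confirmed): on dependency-ordered shutdowns the network can be taken down before this script runs, so the ssh call fails silently. An LSB header in the init script can ask for networking to stay up until the script has run, roughly like this:
#!/bin/sh
### BEGIN INIT INFO
# Provides:          raspi-b
# Required-Start:
# Required-Stop:     $network
# Default-Start:
# Default-Stop:      0
# Short-Description: Shut down the other Pi over SSH at halt
### END INIT INFO
ssh -t -t pi@10.0.0.99 'sudo shutdown -h now' >> /home/osmc/shutdown.log 2>&1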

I put together a command to suspend or shut down a remote host over SSH, which you may find useful. It can suspend or shut down a remote computer without an interactive session, and without keeping a terminal busy. You will need to give the remote user permission to shut down / suspend via sudo without a password, and the local and remote machines must be set up for SSH without an interactive login. The command is most useful for suspending, since a suspended machine will not cleanly disconnect the terminal (which is why the detached screen is needed).
local_user@hostname:~$ ssh remote_user@remote_host "screen -d -m sudo pm-suspend"
source: कार्यशाला (Kāryaśālā)


Is there a way to make crontab run a gnu screen session?

I have a Discord bot running on a Raspberry Pi that I need to restart every day. I'm trying to do this through crontab, but with no luck.
It manages to kill the active screen processes but never starts a new instance; at least not one I can see with "screen -ls", and I can tell it doesn't create a hidden one either, because the bot itself does not come online.
Here is the script:
#!/bin/bash
# Kill any running screen sessions, then start a fresh detached one
sudo pkill screen
sleep 2
screen -dmS discordBot
sleep 2
# Send the start command (with a newline) into the detached session
screen -S "discordBot" -X stuff "node discordBot/NEWNEWNEWN\n"
Here is also the crontab:
0 0 * * * /bin/bash /home/admin/discordBot/script.sh
Is it possible to have crontab run a screen session? And if so how?
Previously I tried putting the screen command straight into cron, but now I have it in a bash script instead.
If I run the script in the terminal it works perfectly; it's just cron where it fails. Replacing "screen" with the full path "/usr/bin/screen" does not change anything.
The best way of doing it, without investigating the underlying problem, would be to create a systemd service and put a restart command into cron.
 
/etc/systemd/system/mydiscordbot.service:
[Unit]
Description=A very simple discord bot
[Service]
Type=simple
ExecStart=node /path/to/my/discordBot.js
[Install]
WantedBy=multi-user.target
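After saving the unit file, reload systemd and enable the service so it also starts at boot (standard commands; the unit name matches the file above):
sudo systemctl daemon-reload
sudo systemctl enable mydiscordbot.service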
Now you can run your bot with systemctl start mydiscordbot and view your logs with journalctl --follow -u mydiscordbot
Now you only need to put
45 2 * * * systemctl restart mydiscordbot
into root's crontab and you should be good to go.
You should probably also write the logs to a logfile, maybe /var/log/mydiscordbot.log, but how and whether you do that is up to you.
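One way to do that, assuming a reasonably recent systemd (v240 or later, which supports the append: form), is to point the unit's output at a file:
[Service]
StandardOutput=append:/var/log/mydiscordbot.log
StandardError=append:/var/log/mydiscordbot.log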
OLD/WRONG ANSWER:
cron runs with a minimal environment, and depending on your OS, /usr/bin is probably not in the $PATH variable. screen is most likely located at /usr/bin/screen, so cron can't run it because it can't find the screen binary. Try replacing screen in your script with /usr/bin/screen.
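Alternatively, you can set PATH at the top of the crontab itself, since crontab accepts plain variable assignments before the job lines:
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
0 0 * * * /bin/bash /home/admin/discordBot/script.sh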
But the other question here is: why does your Discord bot need to be restarted every day? I run multiple bots with 100k+ users, and they don't need to be restarted at all. Maybe you should open a new question with the error and some code snippets.
I don't know what OS your RPi is running, but I had a similar issue earlier today trying to get VMs on a server to launch a terminal and run a Python script with crontab. I found a very easy solution to restart something automatically on crashes and to run it in the background, without relying on crontab or an init service. If your RPi has an X server running, or anything that can launch things on session start, there is a simple shell solution. Make a bash script like:
while :; do
    /my/script/to/keep/running/here -foo -bar -baz >> /var/log/myscriptlog 2>&1
done
and then start it from .xprofile or some similar startup hook, like:
/path/to/launcher.sh &
to have it run in the background. It will restart automatically every time it closes if started from something like .xprofile, .xinitrc, or anything run at startup.
You could still make a cronjob to reboot the Raspberry Pi or whatever system is running the script, but this loop will restart the process whenever it closes anyway. You could put that cronjob in a script, but I don't think it would launch the GUI.
One other thing you can try for launching a GUI terminal from a cronjob (it didn't work for my situation on these custom Linux VMs, and I've read that it's a security risk) is to specify the display:
* * * * * DISPLAY=:0 /your/gui/stuff/here
but you would need to make sure the user has the right permissions, and it's considered an unsafe hack as far as I have read.
For my issue, I had to launch a terminal that stayed open, change to the directory of a Python script, and run the script, so that the local files in that directory would be picked up. Here is a rough example of the launcher.sh I called from the startup method this strange Linux distro used:
#!/bin/sh
while :; do
    /usr/bin/urxvt -e /bin/bash -c "cd /path/to/project && python loader.py"
done
Also check this source for process management:
https://mywiki.wooledge.org/ProcessManagement

Issuing Command Via SSH Prompts for Password

I'm having an issue with a script used in a project I inherited, which has little to no documentation, and I am in the process of documenting everything. I'm trying to debug one line of the script that runs on the host machine and calls out over SSH to a LAN-attached Raspberry Pi to return some information about it.
We already have working versions of this Raspberry Pi which can execute the script without issue, and I'm not sure what the difference is. When the script is executed against the new one, it prompts for the root password on the Pi, which it has not done on previous versions of the device. I assume it has something to do with the SSH configuration, but I don't know enough about SSH to say what the cause would be.
The line in particular causing the issue is:
ssh -o StrictHostKeyChecking=no {host_name} uname -a &>/dev/null
rc=$? # capture the exit status of the remote command (its output is discarded above)
{host_name} is, of course, the actual host name it connects to; I've left it out for privacy reasons. The script is the same on both machines.
Both Pi devices are the same model and I'm having trouble narrowing down what could cause me to not be able to execute this command. Does anyone know what I need to configure in order to be able to execute this command on the Pi remotely?
Quick fix:
sshpass -p 'password' ssh -o StrictHostKeyChecking=no user@server
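sshpass is usually not installed by default; on Debian-based systems (including Raspbian) it can be added with:
sudo apt-get install sshpass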
Detailed fix:
Most likely you need to set up asymmetric keys (public/private) for a proper passwordless login. Your command does not show any use of keys (e.g. -A or -i /path/to/key), so I'm assuming you are not using them. The root user is generally blocked from SSH logins (I guess that's not your problem here); I would set up another user for this or change the sshd config. You could also compare the sshd configurations between the Pi boxes.
See: https://www.raspberrypi.org/documentation/remote-access/ssh/passwordless.md
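For reference, setting up key-based login from the host machine usually looks like this (a sketch; {host_name} as in the script, and the remote user is assumed to be root since the script prompts for the root password):
ssh-keygen -t ed25519        # generate a key pair if one does not exist yet
ssh-copy-id root@{host_name} # append the public key to the Pi's authorized_keys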
Okay, so after some more digging around, I discovered that there was a separate .ssh directory under /root that contained an authorized_keys file. After copying this to the new Pi, it worked. I had been wondering all this time whether there was a separate config folder for root, but I had never gone digging around /root, so I wasn't aware it was there.

Run a script if no SSH connection

I'm trying to get a script to run at startup that does nothing if I've connected to my Raspberry Pi via SSH.
So far I've got a crontab entry that automatically runs the script checkssh.sh via @reboot sleep 30 && sudo bash ./checkssh.sh, and checkssh.sh contains this:
#!/bin/bash
if [ -n "$SSH_CLIENT" ] || [ -n "$SSH_TTY" ]; then
echo "SSH CONNECTED"
else
./autobackup.sh
fi
Running checkssh.sh from an SSH terminal returns 'SSH CONNECTED' which is expected, and letting it run automatically from the crontab at reboot when SSH isn't connected works correctly. However, when it runs at boot and I connect via SSH as soon as it's available, it still runs the script. I'm not sure where this is going wrong.
I need it to run automatically and, if there's no SSH connection, run autobackup.sh; but if there is an SSH connection, it should not run anything. The device I use for the SSH connection may vary, and the network may also vary, so a script that relies on specific IPs isn't ideal.
Thanks for any help :)
Those environment variables (SSH_CLIENT and SSH_TTY) are only set in the environment of an SSH session. You cannot check them from another process and expect them to fulfill your goals here.
Instead, run the program finger. This is the standard way to see who is logged in.
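For example, checkssh.sh could test for active pseudo-terminal sessions instead. A sketch using who(1), which like finger lists logged-in users (it assumes remote logins show up as pts entries, which is the norm for SSH):
#!/bin/bash
# Any pts/ entry in who's output means someone is logged in over SSH
if who | grep -q 'pts/'; then
    echo "SSH CONNECTED"
else
    ./autobackup.sh
fi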
Probably you need to add some delay before running your script, to allow the SSH service to come up. If the cron service comes up before sshd does, you will have a failure. Try:
@reboot sleep 60 && bash ./checkssh.sh
Also, I would substitute the '.' with the full script path. In one scenario I had to add as many as 120 seconds to get the @reboot crontab to work right, but ssh should not need that much. You can trim the 60 seconds to your needs once you get it working.
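Instead of guessing at a fixed delay, a wrapper could poll until sshd is active (a sketch; assumes a systemd-based Raspbian where the service is named ssh, and the path to checkssh.sh is illustrative):
#!/bin/bash
# Wait up to ~2 minutes for sshd instead of a fixed sleep
for i in $(seq 1 24); do
    systemctl is-active --quiet ssh && break
    sleep 5
done
bash /home/pi/checkssh.sh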

How can I prevent a daemon started over SSH from terminating at logout?

EDIT this is fixed. See my answer below.
I have a headless server running transmission-daemon on Angstrom Linux. I am able to SSH into the machine and invoke transmission-daemon via this init script; however, the process terminates as soon as I log out.
The command issued in the script is:
start-stop-daemon --chuid transmission --start --pidfile /var/run/transmission-daemon.pid --make-pidfile --exec /usr/local/bin/transmission-daemon --background -- -f
After starting the daemon via # /etc/init.d/transmission-daemon start, I can verify using ps that the process is owned by the user transmission (which is not the user I am logging in as via SSH).
I've tried every variation of the above command that I am aware of, including:
With and without the --background option for start-stop-daemon
Appending > /dev/null 2>&1 & to the start-stop-daemon command (source; although there seem to be mixed results in that thread as to whether this is the right approach)
Appending > /dev/null 2>&1 & </dev/null & (source)
Adding & to the end of the command
Using nohup
None of these seems to work -- the result is always the same: the process exits immediately after I close the SSH session.
What can/should I do to keep the daemon running after I disconnect the SSH session?
Have you tried using GNU Screen?
It allows you to keep your session open even if you disconnect (but not if you exit).
It's a simple case of:
apt-get install screen
or
yum install screen
Since I cannot leave comments yet :), here is a good site that explains some functions of Screen, http://www.tecmint.com/screen-command-examples-to-manage-linux-terminals/
I use screen all the time to do exactly what you are talking about. You open a screen session in the terminal, do what you need to do, and then you can log off and your process will still be running.
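A minimal sequence for this use case (the session name is arbitrary):
screen -S transmission    # open a named session, then start the daemon inside it
# detach with Ctrl-A d, then log out; the session keeps running
screen -r transmission    # reattach from a later SSH login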
sudo loginctl enable-linger your_user
# This allows users who are not logged in to keep long-running
# services running after the ssh session ends
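With lingering enabled, the daemon could be started in the user's own systemd scope so it survives logout (a sketch; assumes a systemd-based distro, which may not match the Angstrom box in the question):
systemd-run --user --unit=transmission /usr/local/bin/transmission-daemon -f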
This is now resolved. Here's the background: at some point prior to running into this problem, something happened to my $PATH (I don't recall what) and the location where transmission-daemon lived (/sbin) was removed. Under the mistaken impression that transmission-daemon was no longer present on the system, I installed it again from an ipk. This is the state the system was in when I initially asked this question.
I don't know why it made a difference, but once I corrected my $PATH and started running transmission-daemon installed at /sbin, everything worked again. The daemon keeps running after I log out.

OpenSSH on Cygwin

I have a Linux box (Ubuntu Server 13.04) which needs to run a job on a Windows 7 box (with cygwin installed) under a specific user's account. I have set up a password-less login to access the Windows machine through openSSH.
The problem I face is the following: when I manually SSH into the Win7 machine and launch the job, everything is fine. However, when I launch the job with a command argument, using ssh winuser@winmachine "command", I end up connected to the Windows machine as the privileged sshd user 'cyg_server':
$ whoami
linuxuser
$ ssh winuser@Win7
$ whoami
winuser
$ exit
$ ssh winuser@Win7 "whoami; exit"
cyg_server
>> This should be 'winuser' too.
Why could this be happening? I have tried running ssh-host-config again to no avail. I don't see what parameters might influence this in sshd_config either.
Any help is greatly appreciated!
I had similar issues when I was connecting to a Cygwin machine using SSH. I used to have no problems logging on until one day I noticed that my path wasn't set correctly. I spent ages recreating the configuration files with ssh-host-config only to find my answer in the man page for ssh:
If command is specified, it is executed on the remote host instead of
a login shell.
The problem was that the alias I used to connect to the machine had been changed to attach to a screen session automatically (screen -DR). That meant that if there wasn't already a screen session to attach to, screen was not being run as a child process of a user login shell, and so was not inheriting any of the relevant user environment.
When you provide a command as an argument to ssh, the resulting command is run as a process started by cyg_server. Ensuring the SSH command is run as part of a login shell should do what you want:
ssh winuser@Win7 "bash -l -c 'whoami; exit'"
Explanation (from the bash man page):
-c string If the -c option is present, then commands are read from string.
-l Make bash act as if it had been invoked as a login shell.
