Background shell script can't reach directories after ssh logout, even with nohup - linux

I want to run a shell script in the background on a server machine and start that shell script from an ssh connection. Even though I run the background script with nohup, it fails with a directory-unreachable error as soon as I close my ssh connection (and no sooner).
runInBackground.sh:
#!/bin/bash
...
nohup ./run.sh > /dev/null 2> local/errorLog.txt < /dev/null &
run.sh:
#!/bin/bash
...
while true ; do
...
cd optaplanner-examples
mvn exec:exec  # calls the java process
cd ..
done
So when I run runInBackground.sh, everything works fine for hours, until I disconnect my ssh connection.
As soon as I log out, errorLog.txt fills up with:
java.io.FileNotFoundException: /home/myUser/server/optaplanner-simple-benchmark-daemon/local/output/
./run.sh: line 64: /home/myUser/server/optaplanner-simple-benchmark-daemon/local/processed/failed_machineReassignmentBenchmarkConfig.xml: No such file or directory
fatal: Could not change back to '(unreachable)/server/optaplanner-simple-benchmark-daemon/local/optaplannerGitClone/optaplanner': No such file or directory
ls: cannot access /home/myUser/server/optaplanner-simple-benchmark-daemon/local/input: No such file or directory
ls: cannot access /home/myUser/server/optaplanner-simple-benchmark-daemon/local/input: No such file or directory
ls: cannot access /home/myUser/server/optaplanner-simple-benchmark-daemon/local/input: No such file or directory
... // 1000+ more of that ls error
(Full source code)

Well, it's not necessarily an encrypted home directory; more likely it's an auto-mounted home directory (e.g. over NFS). It is mounted upon session startup and unmounted upon exit. An encrypted home dir is only one of the possible reasons to use such a technique.
The main question is what rule determines whether a user's home dir stays mounted. I would expect it to be tied to an allocated pty. You can test whether that's actually true by starting a non-interactive SSH session without a pseudo-terminal: ssh -T user@host ls /home/myUser/server. I would expect that in this case you won't get a proper directory listing.
Then I would use a program like screen to prolong the interactive session's lifetime beyond the SSH session.
The server might use some other mechanism to provide the home directory for interactive SSH sessions, e.g. monitoring the interactive sessions listed in utmp. In this case you will need a program that keeps that record alive for as long as your service needs it. Perhaps you could use an automatically re-established SSH session. For example, I use the following systemd unit to automatically keep an SSH tunnel from one of my workstations in a different private network:
[Unit]
Description=A tunnel to SOME_HOST
PartOf=sshd.service
Requires=network.service
[Service]
ExecStart=/usr/bin/ssh -N -q -R 2222:localhost:22 SOME_HOST
Restart=on-failure
RestartSec=5
User=tunnel
Group=tunnel
[Install]
WantedBy=sshd.service
WantedBy=network.service
When a failure occurs, systemd automatically restarts the unit and the SSH session is re-established.
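If you go this route, installing and enabling such a unit takes only a few commands; a minimal sketch (the file name ssh-tunnel.service is an assumption, not from the original answer):
# Install the unit file and let systemd pick it up (file name assumed).
sudo cp ssh-tunnel.service /etc/systemd/system/
sudo systemctl daemon-reload
# Start the tunnel now and have it started on boot.
sudo systemctl enable --now ssh-tunnel.service
# Check that the ssh process is up and being restarted on failure.
systemctl status ssh-tunnel.service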

I always use the screen utility to run my scripts instead of nohup.
With screen your process will keep running even if your current ssh session times out or gets disconnected.
Use as follows -
apt-get install screen (On Debian based Systems)
OR
yum install screen (On RedHat based Systems)
To run your application and check the output live (provided your script does not itself start a background process and writes to stdout and/or stderr):
cd your_app_directory_path
screen ./your_script.sh
Once you are done and want to leave (without stopping the process), press Ctrl-A followed by D to detach the screen.
To list the screen sessions running on the machine -
screen -ls
To reattach to a running session -
screen -r <screen id or name>
Hope this was useful.

One workaround is to use screen to keep the ssh session open. You can use screen -r to reconnect to the session if you get disconnected.

Related

rclone mount volume automatically via bashrc on startup

I am using rclone to mount a folder from my cloud storage on my local computers. However, on one machine I only connect via terminal, and I want to mount the volume on startup.
So I set up a small shell script with the following contents:
rclone mount remoterep:/examplefolder ~/Documents/examplefolder
and I call it in my .bashrc with exec ~/mount_examplefolder.
When I ssh into said computer, the mount works, as I do not get any errors, but the shell refuses to take any further commands because the mount command is occupying it.
If I add another ssh login, I get an error prompt, because it can't overwrite the mount folder from the other session.
So how do I fix this so that rclone runs in the background and I get my shell back?
Or am I restricted to mounting it manually and then using another ssh session to perform the desired actions?
There are a couple of things here that are causing problems.
First, when you use exec to spawn a process in the shell, you are asking to replace the existing shell process with the program you've mentioned. When you do that in an SSH session, you replace the shell process that the SSH daemon started (and that you were intending to use to log in). SSH then waits for that process to exit (which it won't until the volume is unmounted), which is why you see the hang. You'll want to skip the exec in your shell configuration, which will spawn the process without replacing your shell.
Second, the reason you see the error is that the mount process is designed to be run once, as you've noticed. If you want to skip mounting the folder if it's already mounted, you can use something like the following as your shell script:
#!/bin/sh
if ! grep " $HOME/Documents/examplefolder " /proc/mounts
then
    rclone mount remoterep:/examplefolder ~/Documents/examplefolder
fi
Note the spaces inside the quotes that ensure that you haven't matched something else by accident. This will ensure that your script doesn't try to mount multiple times.
Third, you'll probably want to run this command in the background and detached from the shell so that the exit of the shell doesn't cause it to receive SIGHUP and exit (or restart, depending on how it's configured). You can do this by writing the invocation in your shell configuration as nohup ~/mount_examplefolder >/dev/null 2>&1 &. nohup prevents the program from receiving SIGHUP and redirecting output prevents it from printing messages or creating nohup.out files all over the place.
Finally, you may (or may not) want to configure this to run only when you're using an interactive shell; that is, when you're logging in to start a shell for interactive use rather than for scripting use. If so, you can make the invocation of nohup conditional on PS1 being set, like so:
if [ -n "$PS1" ]
then
nohup ~/mount_examplefolder >/dev/null 2>&1 &
fi
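Put together, the snippet in ~/.bashrc (replacing the original exec line) might look like this sketch:
# Only in interactive shells (PS1 is set), start the mount script in the
# background, immune to SIGHUP and with its output discarded, so the login
# shell stays usable and the mount survives logout.
if [ -n "$PS1" ]
then
    nohup ~/mount_examplefolder >/dev/null 2>&1 &
fi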

Linux - shutdown-script with SSH

I would like to make a shutdown script for my Raspberry Pi that shuts down another Raspberry Pi over ssh.
The script works if I run it by itself, but during the shutdown routine the ssh command is not executed.
This is what I have done so far:
Made the script in /etc/init.d:
#!/bin/sh
# the first thing is to test if the shutdown script is working
echo "bla bla bla " | sudo tee -a /test.txt
ssh pi@10.0.0.98 sudo shutdown -h now
Made it executable
sudo chmod +x /etc/init.d/raspi.sh
Made a symlink to the rc0.d
sudo ln -s /etc/init.d/raspi.sh /etc/rc0.d/S01raspi.sh
Now I know that the shutdown script works outside of the shutdown routine when I call it directly, and the symlink I made also works partially, because I see the changes in the test.txt file every time I shut down.
Can anyone help me how to solve my problem?
Have you tried with single quotes?
The first link in Google has it
http://malcontentcomics.com/systemsboy/2006/07/send-remote-commands-via-ssh.html
What about the sudo: how do you get around entering the password?
https://superuser.com/questions/117870/ssh-execute-sudo-command
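For the sudo password in particular, a common approach (a sketch, not from the original answer; it assumes the remote user is pi and shutdown lives at /sbin/shutdown) is a passwordless sudoers rule on the remote Pi, added with visudo:
# /etc/sudoers.d/shutdown -- allow user pi to run only the shutdown command without a password
pi ALL=(ALL) NOPASSWD: /sbin/shutdown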
Please check this or other links on the web that have useful information.
I would have sent all this in a comment but I can't yet because of reputation.
I have now got the script running by myself. I do not really know why it is working now, but I will write it down below and maybe someone else can clarify it.
I don't think the first two changes to my system make a difference, but I will write them down anyway. In the meantime, because I hadn't managed to get the script working, I made a button to shut down the system manually. I also made a script which backs up the mysql database (which is on the Raspberry Pi that I would like to switch off) and copies the backup to the Raspberry Pi that should switch off the other one automatically via the shutdown script. This happens with scp, using a generated key instead of a password.
I have also changed my script to write a log message:
#!/bin/sh
ssh -t -t pi@10.0.0.99 'sudo shutdown -h now' >> /home/osmc/shutdown.log 2>&1
To get it into the shutdown-routine I used:
sudo update-rc.d raspi-b stop 01 0
I hope somebody can tell me why my code worked on the first day but not on the following days until now.
I put together a command to suspend or shut down a remote host over ssh. You may find this useful. It can suspend / shut down a remote computer without an interactive session, and yet without keeping a terminal busy. You will need to give the remote user permission to shutdown / suspend using sudo without a password. Additionally, the local and remote machines should be set up for SSH without an interactive login. The command is most useful for suspending, since a suspended machine will not disconnect the terminal.
local_user@hostname:~$ ssh remote_user@remote_host "screen -d -m sudo pm-suspend"
source: कार्यशाला (Kāryaśālā)

No Sockets found: screen script called on startup

I have added a script called screen_tronserver.sh to the /etc/init.d/ directory. I have set the permissions so it is executable and called
update-rc.d /etc/init.d/screen_tronserver.sh defaults 100
to add the script to startup.
Inside the file is the following:
#!/bin/bash
echo "creating screen and starting tron server..."
screen -S tronserver -m /home/pi/programming/tronserver/a.out
The a.out file is a compiled program which runs a server I have created. Now when I call
screen -ls
It says no sockets found. However, I have tried connecting to the server and it is successful, indicating the screen/socket is there somewhere, running in the background.
So why can I not see this mystical socket!?
As Andrew already alluded to, init scripts are run as root, and screen sessions belong to individual users. When you type screen -ls you will see the screens of the current user, not all screens on the system.
So yes, sudo screen -ls or sudo screen -r will do the trick.
Edit (for completeness and future reference): as pointed out by the OP, it is also necessary to add the -d flag to the screen command in the init script so that it detaches right away; otherwise the screen will remain attached.
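Putting both fixes together, the relevant line in the init script would look something like the sketch below (paths taken from the question), and the session then shows up under root's sockets:
# Create the screen session detached (-d -m) so the init script does not block.
screen -dmS tronserver /home/pi/programming/tronserver/a.out
# Later, list and reattach as root, since the init script ran as root.
sudo screen -ls
sudo screen -r tronserver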

Execute script on remote host - output given in local host

I am trying to execute two scripts which are available as sh files on a remote host and have 755 permissions.
I try calling them from the client host as below:
REMOTE_HOST="host1"
BOUNCE_SCRIPT="
/code/sys/${ENV}/comp/1/${ENV}/scripts/unix/stopScript.sh ${ENV};
/code/sys/${ENV}/comp/1/${ENV}/scripts/unix/startScript.sh ${ENV};
"
ssh ${REMOTE_HOST} "${BOUNCE_SCRIPT}"
Above lines are in a script on local host.
While running the script on the local host, the first command on the remote host, i.e. stopScript.sh, gets executed correctly. It kills the running process which it was intended to kill, without any error.
However, the output of the second script, i.e. startScript.sh, gets printed to the local host window, but the process it is intended to start does not start on the remote host.
Can anyone please let me know?
Is the way executing script on remote host correct?
Should I see output of running script on remote host locally as well? i.e. on the window of local host?
Thanks
You could try the -n flag for ssh:
ssh -n $REMOTE_HOST "$BOUNCE_SCRIPT" >> $LOG
The man page has further information (http://unixhelp.ed.ac.uk/CGI/man-cgi?ssh+1). The following is a snippet:
-n    Redirects stdin from /dev/null (actually, prevents reading from stdin).
Prefacing your startScript.sh line with nohup may help. Often, commands you execute remotely will die when your ssh session ends; nohup allows your process to live on after the session has ended. It would be helpful to know whether your process is starting at all, or whether it starts and then dies.
I think cyber-monk is right; you should launch the processes with nohup to create a new independent process. Also check whether your stop script is killing the right process (it might be killing the new one as well). Combining the two suggestions, see the sketch below.
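A sketch of the calling script with both changes (the /tmp/startScript.log path is an assumption; the script paths come from the question):
REMOTE_HOST="host1"
BOUNCE_SCRIPT="
/code/sys/${ENV}/comp/1/${ENV}/scripts/unix/stopScript.sh ${ENV};
nohup /code/sys/${ENV}/comp/1/${ENV}/scripts/unix/startScript.sh ${ENV} > /tmp/startScript.log 2>&1 < /dev/null &
"
# -n keeps ssh from reading local stdin; the nohup'd background start lets ssh return
# once stopScript.sh has finished, while startScript.sh keeps running on the remote host.
ssh -n ${REMOTE_HOST} "${BOUNCE_SCRIPT}"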

Use SSH to start a background process on a remote server, and exit session

I am using SSH to start a background process on a remote server. This is what I have at the moment:
ssh remote_user#server.com "nohup process &"
This works, in that the process does start. But the SSH session itself does not end until I hit Ctrl-C.
When I hit Ctrl-C, the remote process continues to run in the background.
I would like to place the ssh command in a script that I can run locally, so I would like the ssh session to exit automatically once the remote process has started.
Is there a way to make this happen?
The "-f" option to ssh tells ssh to run the remote command in the background and to return immediately. E.g.,
ssh -f user@host "echo foo; sleep 5; echo bar"
If you type the above, you will get your shell prompt back immediately, you will then see "foo" output. Five seconds later you will then see "bar" output. In the meantime, you could have been using the shell.
When using nohup, make sure you also redirect stdin, stdout and stderr:
ssh user@server 'DISPLAY=:0 nohup xeyes < /dev/null > std.out 2> std.err &'
In this way you will be completely detached from the remote process. Be careful with using ssh -f user@host... since that will only put the ssh process in the background on the calling side. You can verify this by running ps aux | grep ssh on the calling machine; it will show you that the ssh call is still active, just put in the background.
In my example above I use DISPLAY=:0 since xeyes is an X11 program and I want it started on the remote machine.
You could use screen to run your process in a screen session, detach from it (Ctrl-a d, or Ctrl-a :detach) and exit your current session without problems. Then you can reconnect over SSH and attach to that screen again to continue with your task or check whether it has finished.
Or you can send the command to an already running screen. Your sequence of commands would look like this:
ssh remote_user@server.com
screen -dmS new_screen sh
screen -S new_screen -p 0 -X stuff $'nohup process \n'
exit
For more info see this tutorial
Well, this question is almost 10 years old, but I recently had to launch a very long script (taking several hours to complete) on a remote server, and I found a way using crontab.
If you can edit your user's crontab on the remote server: connect with ssh to the server, edit the crontab, and add an entry that will start your script the next minute. Let's say it's 15:03. Add this line:
4 15 * * * /path/to/your/script.sh
Save your crontab and wait a minute for the script to be launched. Then edit your crontab again to remove the entry.
You can then safely exit ssh, and even shut down your computer while the script is running.
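If you prefer not to edit the crontab interactively, the same one-shot entry can be added and removed from the command line; a minimal sketch using the example path above:
# Append the one-shot entry to the existing crontab.
(crontab -l 2>/dev/null; echo "4 15 * * * /path/to/your/script.sh") | crontab -
# Once the script has started, remove the entry again.
crontab -l | grep -v "/path/to/your/script.sh" | crontab -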
