nohup disconnects after ssh connection drops - linux

I am building a REST API in Java. When the endpoint gets called, I spawn a shell with Runtime.getRuntime().exec("My_script.sh").
My script then proceeds to ssh into a server, create some files, and most importantly... it needs to run a script that sits on that server to process the files.
I run the command with
nohup myscript.sh > /dev/null 2>&1 &
I also added a sleep 5 because I read that nohup may take a second to get things started.
The script runs until the shell disconnects and dies.
So as of now (a rough sketch of this chain follows below):
the Java application launches a shell
the bash script initiates an ssh connection and creates a subshell that runs a secondary script
while in the subshell, the second script creates some files and then calls nohup to start the processing script
the subshell exits and terminates
the main shell exits and terminates
the script we called with nohup gets disconnected and never finishes running
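For concreteness, here is a rough sketch of that chain; the hostnames, paths, and script names below are placeholders, not the actual files involved:

# My_script.sh -- launched by the Java process (sketch only; names are placeholders)
ssh user@remote-host 'bash /path/to/secondary.sh'

# /path/to/secondary.sh -- runs on the remote server (sketch only)
touch /data/input1 /data/input2                      # "create some files"
nohup /path/to/process_files.sh > /dev/null 2>&1 &   # the long-running processing step
sleep 5                                              # the workaround mentioned above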
Some things I have tried
I have tried calling the script from the ssh connection,
pseudo example
ssh user@host "nohup script.sh > /dev/null 2>&1 &"
also tried with different quotes
ssh user@host "nohup script.sh > /dev/null" 2>&1 &
Another thing I can potentially do is keep the initial shell open, so that the script won't die when it disconnects... because it never will.
I have looked into other options like screen, but I don't think it will be useful if the machine that initiated the script gets completely terminated after it starts.
Things to note
I don't have control over the Linux box I ssh into, so I cannot install packages on that machine. I can, however, install packages on my own machine.

Related

Bash script: how to run a command remotely and then exit the remote terminal

I'm trying to execute the command:
ssh nvidia@ubuntu-ip-address "/opt/ads2/arm-linux64/bin/ads2 svcd&"
This works so far except that it hangs in the remote terminal when "/opt/ads2/arm-linux64/bin/ads2 svcd&" is executed, unless I enter Ctrl+C. So I'm looking for a command that, after executing the command, exits from the remote terminal and continues executing the local bash script.
thanks in advance
When you run a command in the background on a terminal, regardless of whether it is local or remote, if you attempt to log out most systems will warn you that you have running jobs. One further attempt to log out and your jobs get killed as you exit.
In order to avoid this you need to detach your running jobs from the terminal.
If the job is already running you can
disown -h <jobspec as reported by jobs>
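For example, assuming some already-running job (the job name here is made up):

long_running_job.sh &    # hypothetical job started from this shell
jobs                     # lists it as, e.g., [1]
disown -h %1             # -h: do not send it SIGHUP when you log out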
If you want to run something in the background and then exit, leaving it running, you can use nohup
nohup command &
This is certainly ok on init systems ... not sure if it works exactly like this on systems that use systemd.
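Applied to the command from the question, that would look something like this; the extra redirections are an assumption, added so the ssh session is not left waiting on open streams:

ssh nvidia@ubuntu-ip-address 'nohup /opt/ads2/arm-linux64/bin/ads2 svcd > /dev/null 2>&1 < /dev/null &'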

rclone mount volume automatically via bashrc on startup

I am using rclone to mount a folder from my cloud storage on my local computers. However, on one machine I only connect via terminal and I want to mount the volume on startup.
So I setup a small shell-script with following contents:
rclone mount remoterep:/examplefolder ~/Documents/examplefolder
and I call it in bashrc with exec ~/mount_examplefolder
When I ssh into said computer, it works in that I do not get any errors, but the shell refuses to take any further commands because the mount command is still executing.
If I add another ssh login, I get an error, because it can't overwrite the mount folder from the other session.
So how do I fix this so that rclone is executed in the background, giving me my shell back?
Or am I restricted to mounting it manually and then using another ssh session to perform the desired actions?
There are a couple of things here that are causing some problems.
First, when you use exec to spawn a process in the shell, that means you want to replace the existing shell process with the program you've mentioned. When you do that in an SSH session, you replace the shell process that the SSH daemon started (and that you were intending to use to log in). SSH will then wait for that process to exit (which it won't until the volume is unmounted), which is why you see the hang. You'll want to skip the exec in your shell configuration, which will spawn the process without replacing your shell.
Second, the reason you see the error is that the mount process is designed to be run once, as you've noticed. If you want to skip mounting the folder if it's already mounted, you can use something like the following as your shell script:
#!/bin/sh
if ! grep " $HOME/Documents/examplefolder " /proc/mounts
then
    rclone mount remoterep:/examplefolder ~/Documents/examplefolder
fi
Note the spaces inside the quotes; they make sure you haven't matched something else by accident. This check ensures that your script doesn't try to mount multiple times.
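For instance, without the surrounding spaces the test could also match a similarly named mount point (say, a hypothetical ~/Documents/examplefolder2):

grep " $HOME/Documents/examplefolder " /proc/mounts   # matches the exact mount point only
grep "$HOME/Documents/examplefolder" /proc/mounts     # would also match .../examplefolder2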
Third, you'll probably want to run this command in the background and detached from the shell so that the exit of the shell doesn't cause it to receive SIGHUP and exit (or restart, depending on how it's configured). You can do this by writing the invocation in your shell configuration as nohup ~/mount_examplefolder >/dev/null 2>&1 &. nohup prevents the program from receiving SIGHUP and redirecting output prevents it from printing messages or creating nohup.out files all over the place.
Finally, you may (or may not) want to configure this to run only when you're using an interactive shell; that is, when you're logging in to start a shell for interactive use rather than scripting use. If so, you can make the invocation of nohup conditional on PS1 being set, like so:
if [ -n "$PS1" ]
then
    nohup ~/mount_examplefolder >/dev/null 2>&1 &
fi

How to execute a local bash script on remote server via ssh with nohup

I can run a local script on a remote server using the -s option, like so:
# run a local script without nohup...
ssh $SSH_USER@$SSH_HOST "bash -s" < myLocalScript.sh;
And I can run a remote script using nohup, like so:
# run a script on server with nohup...
ssh $SSH_USER@$SSH_HOST "nohup bash myRemoteScript.sh > results.out 2>&1 &"
But can I run my local script with nohup on the remote server? I expect the script to take many hours to complete, so I need something like nohup. I know I can copy the script to the server and execute it, but then I have to make sure I delete it once the script is complete; I would rather not have to do that if possible.
I've tried the following but it's not working:
# run a local script with nohup...
ssh $SSH_USER@$SSH_HOST "nohup bash -s > results.out 2>&1 &" < myLocalScript.sh;
You shouldn't have to do anything special - once you kick off a script on another machine, it should finish running even if you terminate the connection:
For example
ssh $SSH_USER@$SSH_HOST "bash -s > results.out 2>&1" < myLocalScript.sh &
# Please wait a few seconds for the connection to be established
kill $! # Optional: Kill the last process
If you want to test it, try a simple script like this
# myLocalScript.sh
echo 'File created - sleeping'
sleep 30
echo 'Finally done!'
The results.out file should immediately be created on the other machine with "File created - sleeping" in it. You can actually kill the local ssh process with kill <your_pid>, and the script will still keep running on the other machine; after 30 seconds it will print "Finally done!" into the file and exit.
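A quick end-to-end check of that claim, run from the local machine (variable names follow the question; the timings are only examples):

ssh $SSH_USER@$SSH_HOST "bash -s > results.out 2>&1" < myLocalScript.sh &
SSH_PID=$!
sleep 5                                       # let the connection start the remote script
kill "$SSH_PID"                               # drop the local ssh process
sleep 30                                      # give the remote script time to finish
ssh $SSH_USER@$SSH_HOST "cat results.out"     # should show both lines if it kept running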

how to prevent a binary run from a sudo script from responding to SIGHUP?

I have a script that executes a sudo command that runs a script that executes a binary (java). That binary seems to receive SIGHUP when the login session ends (I run my script in an SSH session from my laptop, so the network disconnects when I commute). The SIGHUP causes the binary process to abort.
I tried calling the sudo command with 'nohup' and trapping SIGHUP, with no success. I can't change the sudo command (e.g. 'sudo nohup' instead of 'nohup sudo'), or the script I execute with it (due to security constraints).
What other options exist? I know I can do a double fork, but I want the binary to run in the foreground while the session is live (it is running a query, so while the session is alive I want to be able to wait for it to finish interactively and see the results, but if I hang up, I want it to continue to run).
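For reference, a sketch of the chain being described; the paths are placeholders and this only restates what was tried, not a fix:

# outer script, run inside the SSH session
trap '' HUP                           # the SIGHUP trap that was attempted
nohup sudo /opt/queries/run_query.sh  # 'nohup sudo ...'; the order cannot be swapped
# run_query.sh (not modifiable) eventually starts the java binary,
# which still aborts on SIGHUP when the laptop's SSH session drops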

Use SSH to start a background process on a remote server, and exit session

I am using SSH to start a background process on a remote server. This is what I have at the moment:
ssh remote_user@server.com "nohup process &"
This works, in that the process does start. But the SSH session itself does not end until I hit Ctrl-C.
When I hit Ctrl-C, the remote process continues to run in the background.
I would like to place the ssh command in a script that I can run locally, so I would like the ssh session to exit automatically once the remote process has started.
Is there a way to make this happen?
The "-f" option tells ssh to go into the background just before executing the remote command and to return control to you immediately. E.g.,
ssh -f user@host "echo foo; sleep 5; echo bar"
If you type the above, you will get your shell prompt back immediately; you will then see "foo" output. Five seconds later you will see "bar" output. In the meantime, you could have been using the shell.
When using nohup, make sure you also redirect stdin, stdout and stderr:
ssh user@server 'DISPLAY=:0 nohup xeyes < /dev/null > std.out 2> std.err &'
In this way you will be completely detached from the remote process. Be careful with using ssh -f user@host... since that will only put the ssh process in the background on the calling side. You can verify this by running ps -aux | grep ssh on the calling machine; it will show you that the ssh call is still active, just put in the background.
In my example above I use DISPLAY=:0 since xeyes is an X11 program and I want it started on the remote machine.
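To see that caveat about ssh -f for yourself (host name is a placeholder):

ssh -f user@host 'sleep 60'    # the local ssh client backgrounds itself but keeps running
ps aux | grep ssh              # on the calling machine, the ssh call is still listed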
You could use screen to run your process in a screen session, detach from it with Ctrl-a :detach, and exit your current session without a problem. Then you can reconnect over SSH and attach to that screen again to continue with your task or check whether it is finished.
Or you can send the command to an already running screen. Your local script should look like this:
ssh remote_user@server.com
screen -dmS new_screen sh
screen -S new_screen -p 0 -X stuff $'nohup process \n'
exit
For more info see this tutorial
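Later, to check on the task, you can reattach to that same screen session (session name as in the script above):

ssh remote_user@server.com
screen -r new_screen       # reattach; press Ctrl-a d to detach again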
Well, this question is almost 10 years old, but I recently had to launch a very long script (taking several hours to complete) on a remote server, and I found a way using the crontab.
If you can edit your user's crontab on the remote server, connect with ssh to the server, edit the crontab, and add an entry that will start your script the next minute. Let's say it's 15:03. Add this line:
4 15 * * * /path/to/your/script.sh
Save your crontab and wait a minute for the script to be launched. Then edit your crontab again to remove this entry.
You can then safely exit ssh, even shut down your computer while the script is running.
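A minimal sketch of the whole procedure; the time, script path, and log file are only examples, and the output redirection is an optional addition so you can check on the run later:

ssh remote_user@server.com   # connect to the remote server
crontab -e                   # it is 15:03, so add the following entry:
#   4 15 * * * /path/to/your/script.sh > /tmp/script.log 2>&1
# wait a minute for cron to launch the script, then run 'crontab -e' again
# and delete the entry; the script keeps running even after you log out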
