vagrant ssh -c and keeping a background process running after connection closed - linux

I am writing a script to start and background a process inside a vagrant machine. It seems like every time the script ends and the ssh session ends, the background process also ends.
Here's the command I am running:
vagrant ssh -c "cd /vagrant/src; nohup python hello.py > hello.out > 2>&1 &"
hello.py is actually just a Flask development server. If I log in over ssh interactively and run the nohup command manually, the server keeps running after I close the session. However, if I run it via vagrant ssh -c, it's almost as if the command never ran at all (i.e. no hello.out file is created). What is the difference between running it manually and through vagrant ssh -c, and how can I fix it so that it works?

I faced the same problem when trying to run a Django application as a daemon. I don't know why, but appending a "sleep 1" works for me.
vagrant ssh -c "nohup python manage.py runserver & sleep 1"

Running nohup inside the ssh command did not work for me when running wireshark. This did:
nohup vagrant ssh -c "wireshark" &

Related

nohup disconnects after ssh connection drops

I am building a REST API with Java. When the endpoint gets called, I spawn a shell with Runtime.getRuntime().exec("My_script.sh").
My script then proceeds to ssh into a server, create some files, and, most importantly, run a script that sits on that server to process the files.
I run the command with
nohup myscript.sh > /dev/null 2>&1 &
I also added a sleep 5 because I read that nohup may take a second to get things started.
The script runs until the shell disconnects and dies.
So as of now:
the Java application launches a shell
the bash script initiates an ssh connection and creates a subshell that runs a secondary script
while in the subshell, with our second script, we create some files and then call the remote script with nohup
the subshell exits and terminates
the main shell exits and terminates
the script we called disconnects and never finishes running
Some things I have tried
I have tried calling the script from the ssh connection,
pseudo example
ssh user@host "nohup script.sh > /dev/null 2>&1 &"
I also tried with different quotes:
ssh user@host "nohup script.sh > /dev/null" 2>&1 &
Another thing I can potentially do is keep the initial shell open, so that the script won't die when it disconnects... because it never will.
I have looked into other options like screen, but I don't think it will be useful if the machine that initiated the script gets completely terminated after it starts.
Things to note
I don't have control over the Linux box I ssh into, so I cannot install packages on that machine. I can, however, install packages on the machine my shell runs on.
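A hedged sketch of the pattern suggested by the answers further down in this compilation (redirect stdin as well, so nohup is fully detached from the ssh channel), applied to the command above; script.sh and the /dev/null target are just the placeholders from the question:
ssh user@host "nohup script.sh < /dev/null > /dev/null 2>&1 &"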

Why isn't my bot.py code using Selenoid running on an Ubuntu server when I close my ssh bash window?

I have set up an Ubuntu server on a local machine, and I run my bot.py code on it over an ssh bash terminal. My bot.py gets URLs from my contacts and visits webpages using Docker and Selenoid. I have set up Docker and Selenoid and they work well. When I run:
$ sudo ./myscript_ro_run_bot.sh
[inside myscript_ro_run_bot.sh]:
#!/bin/bash
while true
do
echo "running bot.py"
nohup sudo python3 bot.py # nohup to run at background
wait
echo "bot.py finished"
echo "running bot1.py"
nohup sudo python3 bot1.py
wait
echo "bot1.py finished"
.....
echo "runnning bot5.py"
nohup sudo python3 bot5.py
sleep 10m
done
(I have 5 bot.py files)
From my local machine I can see messages in Telegram showing that myscript_ro_run_bot.sh is doing its job well: the sites have been visited and I get rewarded. On the local machine the script can even run 24/7 (indefinitely). But I want it to run on the server 24/7. The problem is that when I close the ssh bash window, I see in Telegram on my local machine that nothing is happening and I don't get messages. Here is the trick: when I connect to my server with ssh again after five minutes or an hour, only then, after reconnection, do I start receiving messages in Telegram again. I can check the job on the server with:
$ htop
and it shows that my command sudo python3 bot.py is running.
When I used:
$ sudo crontab -e
@reboot /home/user/myscript_ro_run_bot.sh >> /home/user/myscrit_to_run_bot.log
After reboot I connected to the server with ssh and got this result from myscrit_to_run_bot.log:
running bot.py
bot.py finished
running bot1.py
bot1.py finished
running bot3.py
But I didn't get any messages in Telegram after reconnection.
Whereas when I run my script manually and then reconnect to the server, I do get messages in Telegram.
Can anybody help me solve the issue? I want sudo ./myscript_ro_run_bot.sh to keep running even when I close the ssh bash terminal.
If you want me to provide more details, please write out the commands as well (detailed instructions), because I am new to coding and Linux.
I appreciate your help
Try using screen or tmux to launch your process: https://wiki.archlinux.org/title/Tmux
run tmux
run ./your_program
press Ctrl+b and then d
After this, your process will keep running in the background, and you can close the ssh connection.
When you need to get back to it, you can run tmux attach
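Applied to the script from the question, a minimal sketch of that sequence (the session name bots is arbitrary):
tmux new -s bots                  # start a new tmux session named bots
sudo ./myscript_ro_run_bot.sh     # run the script inside tmux
# press Ctrl+b then d to detach; later, after reconnecting over ssh:
tmux attach -t bots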

Executing SSH with the Linux/Unix at command

I place this ssh call in the following shell script, named "tstz", on our Linux box, and then call it with the Linux "at" command in order to schedule it for later execution.
tstz script:
#! /bin/ksh
/usr/bin/ssh -tt <remote windows server> pmcmds ${fl} ${wf} < /dev/null >/tmp/test1.log 2>&1
at command syntax:
at -f tstz now + 1 minute
The ssh call executes the remote command as expected, but the ssh connection closes immediately, before the remote command has completed. I need the connection to stay open until the remote command has completed and then return control to the tstz script with an exit status.
This is the error I get in the /tmp/test1.log:
tcgetattr: Inappropriate ioctl for device
^[[2JConnection to dc01nj2dwifdv02.nj.core.him closed.^M
NOTE: When using the "at" command to schedule tstz, if I don't use -tt, the ssh command will not execute the remote command "pmcmds ${fl} ${wf}". I believe this is because a terminal is required. I can, however, run tstz from the Linux command prompt in the foreground without the -tt on the ssh command line, and it runs as expected.
Any help would be greatly appreciated. Thanks!
As I understand it, you need to specify a command to execute on the REMOTE machine after successfully connecting to the server, not on the LOCAL machine.
I use the following command:
ssh -i "key.pem" ec2-user#ec2-XX-XX-XX-XX.us-west-2.compute.amazonaws.com -t 'command; bash -l -c "sudo su"'
where you should replace "sudo su" with your own command, I guess with "pmcmds DFD_ETIME wf_TESTa"
So, try issuing, maybe:
/usr/bin/ssh -tt <remote windows server> 'command; bash -l -c "pmcmds DFD_ETIME wf_TESTa"'
P.S. I have discovered an interesting service on Google called "explainshell" (explainshell.com), which helped me understand that the "command;" part is crucial inside the quotes.
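For reference, a hedged sketch of that suggestion substituted back into the tstz script from the question, keeping the original redirections and the ${fl} and ${wf} variables; note the outer double quotes, so that ${fl} and ${wf} still expand locally as they did in the original script:
#! /bin/ksh
/usr/bin/ssh -tt <remote windows server> "command; bash -l -c \"pmcmds ${fl} ${wf}\"" < /dev/null > /tmp/test1.log 2>&1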

node.js unavailable via ssh

I am trying to call an installation of node.js on a remote server running Ubuntu via SSH. Node has been installed via nvm.
SSHing in and calling node works just fine:
user@localmachine:~$ ssh user@remoteserver
(Server welcome text)
user@remoteserver:~$ which node
/home/user/.nvm/v0.10.00/bin/node
However if I combine it into one line:
user#localmachine:~$ ssh user#remoteserver "which ls"
/bin/ls
user#localmachine:~$ ssh user#remoteserver "which node"
No sign of node, so I tried sourcing .bashrc and waiting 10 seconds:
user#localmachine:~$ ssh user#remoteserver "source ~/.bashrc; sleep 10; which node"
Only node seems affected by this. One thing I did notice was that if I ssh in and then check which shell I'm in, it says -bash, whilst if I check directly via ssh it gives me /bin/bash. I tried running the commands inside a bash login shell:
user@localmachine:~$ ssh user@remoteserver 'bash --login -c "which node"'
Still nothing.
Basically my question is: Why isn't bash finding my node.js installation when I call it non-interactively from SSH?
Another approach is to run bash in interactive mode with the -i flag:
user#localmachine:~$ ssh user#remoteserver "bash -i -c 'which node'"
/home/user/.nvm/v0.10.00/bin/node
$ ssh user@remoteserver "which node"
When you run ssh and specify a command to be run on the remote system, ssh by default doesn't allocate a PTY (pseudo-TTY) for the session. Not having a TTY causes your remote shell process (i.e., bash) to initialize as a non-interactive session instead of an interactive session. This can alter how it interprets your initialization files (.bashrc, .bash_profile, and so on).
The actual problem is probably that the line which adds /home/user/.nvm/v0.10.00/bin to your command PATH isn't executing for non-interactive sessions. There are two ways to resolve this:
Find the command in your initialization file(s) which adds /home/user/.nvm/v0.10.00/bin to your command path, figure out why it's not running for non-interactive sessions, and correct it.
Run ssh with the -t option. This tells it to allocate a PTY for the remote session. Or add the line RequestTTY yes to your .ssh/config file on the local host.
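Two hedged sketches of those fixes; remoteserver is just the host name used above, and the nvm lines shown are the ones a typical nvm install appends to ~/.bashrc (yours may differ slightly):
# Option 1: in ~/.bashrc on the remote server, make sure these lines run
# before any "return if not interactive" guard near the top of the file:
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"
# Option 2: in ~/.ssh/config on the local machine, always request a TTY:
Host remoteserver
    RequestTTY yes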

Use SSH to start a background process on a remote server, and exit session

I am using SSH to start a background process on a remote server. This is what I have at the moment:
ssh remote_user#server.com "nohup process &"
This works, in that the process does start. But the SSH session itself does not end until I hit Ctrl-C.
When I hit Ctrl-C, the remote process continues to run in the background.
I would like to place the ssh command in a script that I can run locally, so I would like the ssh session to exit automatically once the remote process has started.
Is there a way to make this happen?
The "-f" option to ssh tells ssh to run the remote command in the background and to return immediately. E.g.,
ssh -f user@host "echo foo; sleep 5; echo bar"
If you type the above, you will get your shell prompt back immediately; you will then see "foo" printed. Five seconds later you will see "bar" printed. In the meantime, you could have been using the shell.
When using nohup, make sure you also redirect stdin, stdout and stderr:
ssh user@server 'DISPLAY=:0 nohup xeyes < /dev/null > std.out 2> std.err &'
In this way you will be completely detached from the remote process. Be careful with using ssh -f user@host... since that will only put the ssh process in the background on the calling side. You can verify this by running ps -aux | grep ssh on the calling machine; it will show you that the ssh call is still active, just put in the background.
In my example above I use DISPLAY=:0 since xeyes is an X11 program and I want it displayed on the remote machine.
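Applied to the command from the original question, a minimal sketch would be (process.log is just an example path for the output):
ssh remote_user@server.com 'nohup process < /dev/null > process.log 2>&1 &'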
You could use screen to run your process there, detach from the screen with Ctrl-a d (or Ctrl-a :detach), and exit your current session without a problem. Then you can reconnect over SSH and attach to that screen again to continue with your task or check whether it has finished.
Or you can send the command to an already running screen. Your local script should look like this:
ssh remote_user@server.com
screen -dmS new_screen sh
screen -S new_screen -p 0 -X stuff $'nohup process \n'
exit
For more info see this tutorial
Well this question is almost 10 years old, but I recently had to launch a very long script (taking several hours to complete) on a remote server and I found a way using the crontab.
If you can edit your user's crontab on the remote server, connect to the server with ssh, edit the crontab, and add an entry that will start your script the next minute. Let's say it's 15:03. Add this line:
4 15 * * * /path/to/your/script.sh
Save your crontab and wait a minute for the script to be launched. Then edit your crontab again to remove this entry.
You can then safely exit ssh, even shut down your computer while the script is running.
