How to let a command continue running even though the SSH connection is lost? [duplicate]

This question already has answers here:
How to prevent a background process from being stopped after closing SSH client in Linux [closed]
(20 answers)
Closed 5 years ago.
Compiling something on my server is a heavily time-consuming job, but my VPS SSH connection is unstable and gets lost after about 10 minutes. How can I let my command continue running when the SSH connection is lost?

If you have an unstable connection, the screen(1) command is your best solution. It keeps the terminal session alive when you get disconnected and allows you to log back in and reconnect to it, preserving the screen state and whatever else you had running. It may not be installed by default on your Linux distribution (it's not on Ubuntu), but it is available in every major package system.
There are useful tutorials in various places -- a web search for linux screen command gives many pointers.
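As a rough sketch of the workflow (the session name and the compile command are just placeholders):

    # Start a named screen session on the server
    screen -S build

    # ... run the long compile inside the session ...
    make -j4

    # Detach with Ctrl-a d; the session and the compile keep running.
    # After reconnecting over SSH, reattach with:
    screen -r build

If the connection drops uncleanly, the old attachment may still be registered; screen -d -r build detaches it there and reattaches it here.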

A quick Google search pulled up the following 3 SO posts that should help:
Linux: Prevent a background process from being stopped after closing SSH client
Getting ssh to execute a command in the background on target machine
Use SSH to start a background process on a remote server, and exit session
TL;DR - use nohup
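In its simplest form (the script name and log file here are assumptions), that looks like:

    # Run the job immune to the hangup signal, capture its output to a
    # file, and put it in the background with the trailing &
    nohup ./long_job.sh > job.log 2>&1 &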

Related

Keep Go script running on Linux server after ssh connection is closed [duplicate]

This question already has answers here:
How to prevent a background process from being stopped after closing SSH client in Linux [closed]
(20 answers)
How to make a program continue to run after log out from ssh? [duplicate]
(6 answers)
Closed 1 year ago.
I have a Go script that starts an HTTP server and responds to any incoming requests. I want this script to run on a Linux server. However, I can't keep the SSH connection active forever, and when it disconnects the script also stops, as expected. I know that NodeJS has 'forever' to fix this issue. I was wondering if something similar also exists for Golang, as I was not able to find anything. Any other tips on how to fix this are also very welcome.
There are mainly three options here: the nohup command, the screen command, and byobu, an upgraded version of screen. After looking at all three, I actually prefer byobu, because it is more powerful than screen and its interface is friendlier.
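A minimal sketch of the byobu workflow, assuming byobu is installed and server.go is your program:

    # Start (or reattach to) a byobu session
    byobu

    # Inside the session, start the Go program
    go run server.go

    # Press F6 to detach; the program keeps running. Running byobu
    # again after reconnecting over SSH reattaches the session.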

Reconnecting to a console output after system restart

I have a script running on a Linux VM with regular console output. If I disconnect from the VM, the output window disappears. If I restart the VM, the script is still running, but how do I get back to the output screen?
Easy solution: use GNU screen, or an alternative like tmux to run your scripts in a persistent session. Thus, if you accidentally disconnect from your SSH session (or must shut down your computer), you can still reattach to the screen session later.
Tutorial: Using GNU Screen to Manage Persistent Terminal Sessions
Another great feature is that screen can also log the console output to a file. I use it all the time for cron jobs or other unattended tasks. I also use screen for updates (using yum, dnf or whatever), because updates can take a lot of time, and sometimes may even have to restart the network service, which would terminate your SSH session.
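For the logging part, a sketch (the -Logfile flag needs a reasonably recent GNU screen; older versions only support -L, which writes to screenlog.0, and the session name and update command are placeholders):

    # Start a named session with console logging enabled
    screen -L -Logfile update.log -S updates

    # Inside the session, run the long update unattended
    sudo dnf upgrade -y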

Why does a node.js process terminate after the SSH session ends?

I am deploying my node.js program on a remote machine (Ubuntu 14.04), and the program terminates when the SSH session ends, even if I deploy it as a background process:
node app.js &
I understand that using forever can solve this; I have tried it and it pretty much works. There is already a thread here that describes good solutions to it, and there are many other threads that all describe good tools and solutions for it.
But I would like to understand why the node.js process stops in the first place, even when it runs as a background process.
Since you are connecting through SSH, all the processes you start belong to that session. Unless you detach them with a command like nohup (short for "no hang up"), all the processes that belong to your SSH session will die with the session.
It's like logging in as a user, opening Chrome, and logging out: the Chrome process is released once its owner logs out.
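Concretely, two common ways to detach the process from the session (app.js is from the question; the log file name is an assumption):

    # Option 1: start the process immune to SIGHUP from the outset
    nohup node app.js > app.log 2>&1 &

    # Option 2: background an already-started job, then remove it from
    # the shell's job table so it is not sent SIGHUP at logout
    node app.js &
    disown %1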

keep operation running on ec2-server

On my Windows machine, I connect with PuTTY over SSH to an EC2 Linux instance and start a download from a public web resource on the command line. When my internet connection has a hiccup, my PuTTY terminal loses its connection to the server.
My question is why the download operation does not continue to run on the EC2 server independently of me having a terminal window open. How can I make sure that the operation continues to run on the instance when I close my connection to it? A sketch follows below.
There are already a couple of similar questions, but the answers do not seem applicable to my situation.
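For completeness, here is how the approaches covered in the threads above would apply in this case (the tmux session name and the URL are placeholders):

    # Start a named tmux session and run the download inside it
    tmux new -s download
    wget https://example.com/large-file.tar.gz

    # Detach with Ctrl-b d; the download continues even if PuTTY drops.
    # After reconnecting, reattach with:
    tmux attach -t download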

Trying to persist process through SSH is failing [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I have a long-running python program that I'm trying to run on a remote server.
I've looked at "How to keep processes running after ending ssh session?", "How to start process via SSH, so it keeps running?", "Run a persistent process via ssh", and a few other topics, but they don't seem to help.
I've tried running the python process with screen (via detaching a screen containing a background process) and nohup, but in both cases, when I exit the ssh session (which--I'm not sure if this matters--is run with X11 forwarding, since the python program is creating some graphics), the ssh session hangs.
The ssh process hangs even if I redirect stdin, stdout, and stderr from/to /dev/null.
Killing the ssh session kills the python process. When I kill the ssh, the following error message is printed on the remote server: g_dbus_connection_real_closed: Remote peer vanished with error: Underlying GIOStream returned 0 bytes on an async read (g-io-error-quark, 0). Exiting.
Furthermore, I don't actually want to redirect stdout or stderr to /dev/null, since I want to redirect them to a log file. So I didn't try running the python process as a daemon. (Perhaps it's bad that the logging is sent to stdout, I guess...)
What should I do, so that I can: (1) keep my process running after logging out, (2) redirect stdout/stderr to a log file?
(One thing which "worked" was suspending and then rerunning the ssh process [after it hangs] in the background, but what if I want to shut off my computer?)
The X11 connection is indeed the problem. Screen takes care of keeping stdin/stdout/stderr connected, and it also protects the program from the HUP signal. However, it does not keep a virtual X server for it to write graphics on.
So the question is: what do you want with the graphics? If your program really needs to output them, you need to set up a (virtual) X server which it can continue to reach even after the connection is lost. You can connect to this virtual server with vnc, just like you can connect to your screen session.
Or you can make the program more suitable for running in the background, which means it must not use the X server. In that case, you probably want to output some files which you can then turn into graphics with a separate program when you want to see them.
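As one concrete sketch of the virtual-server idea (using Xvfb via xvfb-run rather than a VNC-backed server; the script and log file names are assumptions):

    # Run the program against a throwaway virtual X display (-a picks a
    # free display number), immune to hangups, with output logged
    nohup xvfb-run -a python myscript.py > myscript.log 2>&1 &

This also covers both goals from the question: the process survives logout, and stdout/stderr end up in a log file instead of /dev/null.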
I thought sshd creates a new session-leader bash for each connection, so if you put your program in the background and redirect its stdout/stderr (> log 2>&1), then even if you lose the connection, the running bash will keep control of your program.
