I'm using an EC2 instance to run a large job that I estimate will take approximately 24 hours to complete. I am running into the same issue described here: ssh broken pipe ec2
I followed the suggestions/solutions in the post above and, in my ssh session shell, launched my Python program with the following command:
nohup python myapplication.py > myprogram.out 2>myprogram.err
Once I did this, the connection remained intact longer than when I didn't use nohup, but it eventually failed with a broken pipe error and I was back to square one. The process 'python myapplication.py' was terminated as a result.
Any ideas on what is happening and what I can do to prevent this from occurring?
You should try screen.
Install
Ubuntu:
apt-get install screen
CentOS:
yum install screen
Usage
Start a new screen session with:
$> screen
List all the screen sessions you have created:
$> screen -ls
There is a screen on:
23340.pts-0.2yourserver (Detached)
1 Socket in /var/run/screen/S-root.
Next, restore your screen session by its id:
$> screen -R <screen-id>
For the session listed above, that would be:
$> screen -R 23340
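Applied to the command from the question, a minimal sketch of the workflow might look like this (the session name myjob is just an example):
screen -S myjob
python myapplication.py > myprogram.out 2>myprogram.err
# detach with Ctrl+a d; it is now safe to close the SSH connection
# after logging back in, reattach with:
screen -R myjob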
A simple solution is to send the process to the background by appending an ampersand & to your command:
nohup python myapplication.py > myprogram.out 2>myprogram.err &
The process will continue to run even if you close your SSH session. You can always check progress by grabbing the tail of your output files:
tail -n 20 myprogram.out
tail -n 20 myprogram.err
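After reconnecting later, you can also verify that the job is still alive before looking at the logs (assuming pgrep from procps is available, which it is on most distributions):
pgrep -af myapplication.py
tail -f myprogram.out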
I actually ended up fixing this accidentally through a router configuration change: allowing all ICMP packets. I had allowed all ICMP packets to diagnose a strange issue with some websites randomly loading slowly, and I noticed that none of my SSH terminals died anymore.
I'm using a Ubiquiti EdgeRouter 4, so I followed this guide here https://community.ubnt.com/t5/EdgeRouter/EdgeRouter-GUI-Tutorial-Allow-ICMP-ping/td-p/1495130
Of course you'll have to follow your own router's unique instructions to allow ICMP through the firewall.
I have set up an Ubuntu server on a local machine, on which I run my bot.py code through an ssh bash terminal. My bot.py gets urls from my contacts and visits the webpages using Docker and Selenoid. I have set up Docker and Selenoid and they work well. When I run:
$ sudo ./myscript_ro_run_bot.sh
[inside myscript_ro_run_bot.sh]:
#!/bin/bash
while true
do
    echo "running bot.py"
    nohup sudo python3 bot.py # nohup to run at background
    wait
    echo "bot.py finished"
    echo "running bot1.py"
    nohup sudo python3 bot1.py
    wait
    echo "bot1.py finished"
    .....
    echo "running bot5.py"
    nohup sudo python3 bot5.py
    sleep 10m
done
(I have 5 bot.py files)
On my local machine I can see messages in Telegram showing that myscript_ro_run_bot.sh is doing its job well: the sites have been visited and I get rewarded. On the local machine the script can even run 24/7 (indefinitely). But I want it to run 24/7 on the server. The problem is that when I close the ssh bash window, I see from Telegram on my local machine that nothing is happening; I don't get messages. Here is the strange part: when I connect to my server with ssh again after five minutes or an hour, only after reconnecting do I start receiving messages in Telegram again. I can see the job running on the server with:
$ htop
which shows that my command sudo python3 bot.py is running.
When I used:
$ sudo crontab -e
@reboot /home/user/myscript_ro_run_bot.sh >> /home/user/myscrit_to_run_bot.log
After rebooting I connected to the server with ssh and got this result in myscrit_to_run_bot.log:
running bot.py
bot.py finished
running bot1.py
bot1.py finished
running bot3.py
But I didn't get any messages in Telegram after reconnecting.
Whereas when I run my script manually and then reconnect to the server, I do get messages in Telegram.
Can anybody help me solve this issue? I want sudo ./myscript_ro_run_bot.sh to keep running even after I close the ssh bash terminal.
If you want me to provide more details, please also write out the commands (detailed instructions), because I am new to coding and Linux.
I appreciate your help.
Try using screen or tmux to launch your process: https://wiki.archlinux.org/title/Tmux
run tmux
run ./your_program
press Ctrl+b and then d
After this, your process will keep running in the background and you can close the ssh connection.
When you need it back, run tmux attach.
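If you don't want to attach interactively, tmux can also start the script directly in a detached session; the session name bots below is just an example:
tmux new-session -d -s bots 'sudo ./myscript_ro_run_bot.sh'
# later, check on it with:
tmux attach -t bots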
I would like to make a shutdown script for my Raspberry Pi that shuts down another Raspberry Pi over ssh.
The script works when I run it by itself, but during the shutdown routine the ssh command is not executed.
This is what I have done so far:
Made the script in /etc/init.d:
#!/bin/sh
# the first thing is to test if the shutdown script is working
echo "bla bla bla " | sudo tee -a /test.txt
ssh pi@10.0.0.98 sudo shutdown -h now
Made it executable
sudo chmod +x /etc/init.d/raspi.sh
Made a symlink to the rc0.d
sudo ln -s /etc/init.d/raspi.sh /etc/rc0.d/S01raspi.sh
So far I know that the shutdown script works outside of the shutdown routine when I call it myself, and that the shutdown symlink I made is at least partially working, because I see the changes in the test.txt file every time I shut down.
Can anyone help me how to solve my problem?
Have you tried with single quotes?
The first link in Google has it
http://malcontentcomics.com/systemsboy/2006/07/send-remote-commands-via-ssh.html
What about sudo? How do you handle entering the password?
https://superuser.com/questions/117870/ssh-execute-sudo-command
Please check this or other links on the web that have useful information.
I would have sent all this in a comment, but I can't yet because of reputation.
I have now got the script running by myself. I do not really know why it is working now, but I will write it down below and maybe someone else can clarify it.
I don't think the first two changes to my system make a difference, but I will write them down anyway. In the meantime, because I had not managed to get the script working, I made a button to shut down the system manually. I also made a script which backs up the mysql database (which lives on the Raspberry Pi that I would like to switch off with the script) and copies the backup to the Raspberry Pi that should automatically switch the other one off via the shutdown script. This happens with scp, and a key is generated for the password as well.
I have also changed my script so that it writes a log message.
#!/bin/sh
ssh -t -t pi@10.0.0.99 'sudo shutdown -h now' >> /home/osmc/shutdown.log 2>&1
To get it into the shutdown-routine I used:
sudo update-rc.d raspi-b stop 01 0
I hope somebody can tell me why my code worked on the first day but not on the following days.
I put together a command to suspend or shut down a remote host over ssh, which you may find useful. It can be used to suspend/shut down a remote computer without an interactive session, and without keeping a terminal busy. You will need to give the remote user permission to shut down/suspend with sudo without a password. Additionally, the local and remote machines should be set up to SSH without an interactive login. The command is most useful for suspending the machine, since a suspended machine will not disconnect the terminal.
local_user@hostname:~$ ssh remote_user@remote_host "screen -d -m sudo pm-suspend"
source: कार्यशाला (Kāryaśālā)
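As a sketch of the passwordless-sudo part (the binary paths below are typical but vary by distribution, so check them with which pm-suspend and which shutdown before copying), you could add a rule like this on the remote host:
# edit with: sudo visudo -f /etc/sudoers.d/power
remote_user ALL=(ALL) NOPASSWD: /usr/sbin/pm-suspend, /sbin/shutdown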
EDIT this is fixed. See my answer below.
I have a headless server running transmission-daemon on Angstrom Linux. I am able to SSH into the machine and invoke transmission-daemon via this init script; however, the process terminates as soon as I log out.
The command issued in the script is:
start-stop-daemon --chuid transmission --start --pidfile /var/run/transmission-daemon.pid --make-pidfile --exec /usr/local/bin/transmission-daemon --background -- -f
After starting the daemon via # /etc/init.d/transmission-daemon start, I can verify using ps that the process is owned by the user transmission (which is not the user I am logging in as via SSH).
I've tried every variation of the above command that I am aware of, including:
With and without the --background option for start-stop-daemon
Appending > /dev/null 2>&1 & to the start-stop-daemon command (source -- although there seem to be mixed results in that thread as to whether this is the right approach)
Appending > /dev/null 2>&1 & </dev/null & (source)
Adding & to the end of the command
Using nohup
None of these seems to work -- the result is always the same: the process exits immediately after I close the SSH session.
What can/should I do to keep the daemon running after I disconnect the SSH session?
Have you tried using GNU Screen?
It allows you to keep your session open even if you disconnect (but not if you exit).
It's a simple case of:
apt-get install screen
or
yum install screen
Since I cannot leave comments yet :), here is a good site that explains some functions of Screen: http://www.tecmint.com/screen-command-examples-to-manage-linux-terminals/
I use screen all the time to do exactly what you are talking about. You open a screen in the terminal, do what you need to do, and then you can log off and your process will still be running.
sudo loginctl enable-linger your_user
# This allows users who are not logged in to keep long-running
# services running after the ssh session ends
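Lingering is mainly useful together with a systemd user service, so the daemon is managed by systemd rather than by your login shell. A hedged sketch, assuming your system runs systemd and reusing the transmission-daemon path from the question (adjust the unit name and ExecStart to your setup), saved as ~/.config/systemd/user/transmission.service:
[Unit]
Description=Transmission daemon (user service)

[Service]
ExecStart=/usr/local/bin/transmission-daemon -f
Restart=on-failure

[Install]
WantedBy=default.target
Then start it with systemctl --user daemon-reload && systemctl --user enable --now transmission.service; combined with enable-linger it keeps running after you log out.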
This is now resolved. Here's the background: at some point prior to running into this problem, something happened to my $PATH (I don't recall what) and the location where transmission-daemon lived (/sbin) was removed. Under the mistaken impression that transmission-daemon was no longer present on the system, I installed it again from an ipk. This is the state the system was in when I initially asked this question.
I don't know why it made a difference, but once I corrected my $PATH and started running transmission-daemon installed at /sbin, everything worked again. The daemon keeps running after I log out.
I'm connecting to my Ubuntu server using ssh. I start an encoding program with a command. However, it seems that when my ssh session closes (because I started it on a laptop which went to sleep), the encoding process stops. Is there a way to avoid this (of course, preventing my laptop from sleeping is not a permanent solution)?
Run your command with nohup or use screen
nohup is better when your program generates logging output, because the output is redirected to a file that you can check later; with screen, on the other hand, you can detach the ssh session and restore your workspace when you log in again. For encoding I would use nohup: it is easier, and you will probably run the job in the background anyway, so you don't really need detaching.
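A hedged example of the nohup approach, assuming the encoder is ffmpeg and that the file names are placeholders for your own:
nohup ffmpeg -i input.mp4 -c:v libx264 output.mkv > encode.log 2>&1 &
# after logging back in, check progress with:
tail -f encode.log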
Screen is the best for you.
screen -S some_name
then run it. Detach it with: Ctrl+a d
Next time, attach it with:
screen -rd some_name
You can have multiple screens running. To show the list of them:
screen -ls
Install "screen" on your ubuntu server, that way you can start any program in your session, disconnect the output of the program from your current session and exit.
Later when you connect again to your server, you can restore the program which will continue running and see its progress.
My server deployment script triggers a long-running process through SSH, like so:
ssh host 'install.sh'
Since my internet connection at home is not the best, I can sometimes be disconnected while the install.sh is running. (This is easily simulated by closing the terminal window.) I would really like for the install.sh script to keep running in those cases, so that I don't end up with interrupted apt-get processes and similar nuisances.
The reason why install.sh gets killed seems to be that stdout and stderr are closed when the SSH session is yanked, so writing to them fails. (It's not an issue of SIGHUP, by the way -- using nohup makes no difference.) If I put touch ~/1 && echo this fails && touch ~/2 into install.sh, only ~/1 is created.
So running ssh host 'install.sh &> install.out' solves the problem, but then I lose any "live" progress and error output.
So my question is: What's an easy/idiomatic way to run a process through SSH so that it doesn't crash if SSH dies, but so that I can still see the output as it runs?
Solutions I have tried:
When I run things manually, I use screen for cases like this, but I don't think it will be of much help here because I need to run install.sh automatically from a shell script. Screen seems to be made for interactive use (it complains "Must be connected to a terminal.").
Using install.sh 2>&1 | tee install.out didn't help either (silly of me to think it might).
You can redirect stdout/stderr into install.out and then tail -f it. The following snippet actually works:
touch install.out && # so tail does not bark (race condition)
(install.sh < /dev/null &> install.out &
tail --pid "$!" -F install.out)
But surely there must be a less awkward way to do the same thing?
Try using screen:
screen ./install.sh
If your ssh session gets interrupted, you can simply reattach to the session via another ssh connection:
screen -x
You can provide a terminal to your ssh session using the -t switch:
ssh -t server screen ./install.sh
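If the deployment later needs to reattach to that particular session, you can give it a name (deploy below is arbitrary):
ssh -t server 'screen -S deploy ./install.sh'
# from another connection, reattach with:
ssh -t server 'screen -x deploy'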
install.sh 2>&1 | tee install.out
if the only issue is not getting stderr. You didn't say exactly why the tee wasn't acceptable. You may need the other nohup/stdin tweaks.