Why isn't my bot.py code using Selenoid running on my Ubuntu server after I close my SSH bash window? - python-3.x

I have set up an Ubuntu server on my local machine, on which I run bot.py from an SSH bash terminal. bot.py gets URLs from my contacts and visits the webpages using Docker and Selenoid. I have set up Docker and Selenoid and they work well. When I run:
$ sudo ./myscript_ro_run_bot.sh
[inside myscript_ro_run_bot.sh]:
#!/bin/bash
while true
do
    echo "running bot.py"
    nohup sudo python3 bot.py  # nohup to run in the background
    wait
    echo "bot.py finished"
    echo "running bot1.py"
    nohup sudo python3 bot1.py
    wait
    echo "bot1.py finished"
    .....
    echo "running bot5.py"
    nohup sudo python3 bot5.py
    sleep 10m
done
(I have 5 bot.py files)
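Since the five blocks in the script are identical apart from the filename, the loop body can be collapsed into a single loop. A sketch of an equivalent script follows; the filenames are taken from the question (the middle ones, elided by the "....." above, are assumed):

```shell
#!/bin/bash
# one pass over all the bots; same behaviour as repeating the
# echo / run / echo block once per file by hand
run_all_bots() {
    for bot in bot.py bot1.py bot2.py bot3.py bot4.py bot5.py; do
        echo "running $bot"
        python3 "$bot"      # runs in the foreground, so no separate 'wait' is needed
        echo "$bot finished"
    done
}

# the real script would then loop forever:
#   while true; do run_all_bots; sleep 10m; done
```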
On my local machine I can see messages in Telegram confirming that myscript_ro_run_bot.sh is doing its job: the sites have been visited and I get rewarded. On the local machine the script can even run 24/7 (indefinitely). But I want it to run 24/7 on the server. The problem is that when I close the SSH bash window, nothing seems to happen: I stop getting messages in Telegram. Here is the odd part: when I reconnect to the server with SSH after five minutes or an hour, only then do I start receiving messages in Telegram again. I can see the job still running on the server with:
$ htop
which shows that my command sudo python3 bot.py is still running.
When I used:
$ sudo crontab -e
@reboot /home/user/myscript_ro_run_bot.sh >> /home/user/myscrit_to_run_bot.log
After a reboot I connected to the server with SSH and got this result from myscrit_to_run_bot.log:
running bot.py
bot.py finished
running bot1.py
bot1.py finished
running bot3.py
But I didn't get any messages in Telegram after reconnecting.
Whereas when I run my script manually and then reconnect to the server, I do get messages in Telegram.
Can anybody help me solve this issue? I want sudo ./myscript_ro_run_bot.sh to keep running even after I close the SSH bash terminal.
If you want me to provide more details, please also write out the commands (detailed instructions), because I am new to coding and Linux.
I appreciate your help.

Try using screen or tmux to launch your process: https://wiki.archlinux.org/title/Tmux
Run tmux
Run ./your_program
Press Ctrl+b and then d to detach
After this, your process keeps running in the background and you can close the SSH connection.
When you need it back, run tmux attach
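tmux can also do this non-interactively, which is handy from a script. A sketch (the session name bots is arbitrary, and sleep 600 stands in for the actual bot script so the example is runnable as-is):

```shell
# guard so the sketch is a no-op on machines without tmux installed
if command -v tmux >/dev/null; then
    tmux new-session -d -s bots 'sleep 600'  # -d: create the session already detached
    tmux has-session -t bots                 # exit status 0 while the session is alive
    tmux kill-session -t bots                # cleanup for this demo; skip it in real use
fi
```

After starting a detached session you can log out immediately; tmux attach -t bots reattaches from a later SSH login.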

Related

Is there a way to make crontab run a gnu screen session?

I have a Discord bot running on a Raspberry Pi that I need to restart every day. I'm trying to do this through crontab, but with no luck.
It manages to kill the active screen processes but never starts a new instance, at least not one I can see with "screen -ls", and I can tell it doesn't create one I can't see either, because the bot itself does not come online.
Here is the script:
#!/bin/bash
sudo pkill screen
sleep 2
screen -dmS discordBot
sleep 2
screen -S "discordBot" -X stuff "node discordBot/NEWNEWNEWN\n"
Here is also the crontab:
0 0 * * * /bin/bash /home/admin/discordBot/script.sh
Is it possible to have crontab run a screen session? And if so how?
Previously I tried putting the screen command stright into cron but now I have it in a bash script instead.
If I run the script in the terminal it works perfectly, it’s just cron where it fails. Also replacing "screen" with the full path "/usr/bin/screen" does not change anything.
So the best way of doing it, without investigating the underlying problem, would be to create a systemd service and put a restart command into cron.
 
/etc/systemd/system/mydiscordbot.service:
[Unit]
Description=A very simple discord bot
[Service]
Type=simple
ExecStart=node /path/to/my/discordBot.js
[Install]
WantedBy=multi-user.target
Now you can start your bot with systemctl start mydiscordbot and view its logs with journalctl --follow -u mydiscordbot
Now you only need to put
45 2 * * * systemctl restart mydiscordbot
into root's crontab and you should be good to go.
You probably should also write the logs into a logfile, maybe /var/log/mydiscordbot.log, but how and whether you do that is up to you.
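As an aside, systemd can also handle crash recovery by itself, which removes one of the reasons for a daily cron restart. Adding a Restart= directive to the [Service] section of the unit above (an optional tweak, not part of the original answer) makes systemd restart the bot whenever it exits with an error:

```
[Service]
Restart=on-failure
RestartSec=5
```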
OLD/WRONG ANSWER:
cron runs with a minimal environment, and depending on your OS, /usr/bin is probably not in the $PATH variable. screen is most likely located at /usr/bin/screen, so cron can't run it because it can't find the screen binary. Try replacing screen in your script with /usr/bin/screen.
But the other question here is: why does your Discord bot need to be restarted every day? I run multiple bots with 100k+ users and they don't need to be restarted at all. Maybe you should open a new question with the error and some code snippets.
I don't know what OS your RPi is running, but I had a similar issue earlier today trying to get VMs on a server to launch a terminal and run a Python script with crontab. I found a very easy solution to restart it automatically on crashes and to run it simply in the background. Don't rely on crontab or an init service. If your RPi has an X server running, or anything that can launch programs on session start, there is a simple shell solution. Make a bash script like:
while :; do
    /my/script/to/keep/running/here -foo -bar -baz >> /var/log/myscriptlog 2>&1
done
and then start it from .xprofile or a similar startup hook, like:
/path/to/launcher.sh &
to have it run in the background. Started from something like .xprofile, .xinitrc, or anything run at startup, it will restart automatically every time it closes.
Then maybe add a cronjob to reboot the Raspberry Pi (or whatever system is running the script), but this script will restart the service whenever it closes anyway. You could put that cronjob in a script too, but I don't think it would launch the GUI.
One other thing I found in my research that you can try for launching a GUI terminal from a cronjob (though it didn't work for my situation on these custom Linux VMs, and I've read that it's a security risk) is specifying the display:
* * * * * DISPLAY=:0 /your/gui/stuff/here
but you need to make sure the user has the right permissions, and it's considered an unsafe hack as far as I have read.
For my issue, I had to launch a terminal that stayed open, then change to the directory of a Python script and run it, so that the local files in that directory would be picked up by the script. Here is a rough example of the launcher.sh I called from the startup method this strange Linux distro used, lol:
#!/bin/sh
while :; do
    /usr/bin/urxvt -e /bin/bash -c "cd /path/to/project && python loader.py"
done
Also check this source for process management
https://mywiki.wooledge.org/ProcessManagement

Minecraft Linux Server | Start.sh problem

I have a Minecraft server running on Linux.
To start the server I use a start.sh file with the following content (it starts a screen session and the Minecraft server):
screen -S {ScreenSession} java -Xmx2G -Xms2G -jar spigot-1.18.1.jar
If I use /restart in-game, the screen session ends and the server won't start, so I have to log into the Linux server and start the Minecraft server again.
My question:
How can I make it so that when I use /restart, the server restarts within an active screen session?
I have tried many things.
I hope someone can help me,
~Kitty Cat Craft
There are multiple ways to achieve what you want.
If you have a lot of servers, you can use a quick bash script with an auto-restart, like this:
#!/bin/sh
while true
do
    java -Xmx2G -Xms2G -jar spigot-1.18.1.jar --nogui
    sleep 5
done
When the server stops, the script waits 5 seconds and then restarts it.
With this, you can use screen -dmS <screenName> sh myScript.sh, which runs the script in another screen. That's useful when you launch it from a script that starts a lot of servers, like this:
screen -dmS srv1 sh srv1.sh
screen -dmS srv2 sh srv2.sh
screen -dmS srv3 sh srv3.sh
Alternatively, if you have only one server, just use screen -S screenName first. Then, once you are inside the screen, run the auto-restarting script (the one I gave above).
Also, prefer /stop over /restart, because with /restart Spigot will try to find the start script, and if it succeeds it will run the same script a second time, leaving you with ghost processes.
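The restart behaviour of that first script can be demonstrated with a stand-in for the crashing server (a bounded sketch; the real script loops forever and sleeps 5 seconds between restarts):

```shell
# 'false' stands in for the java process exiting abnormally; we stop after
# three rounds instead of looping forever so the demo terminates
count=0
while [ "$count" -lt 3 ]; do
    false || echo "server exited, restarting (attempt $((count + 1)))"
    count=$((count + 1))
done
```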

EC2 ssh broken pipe terminates running process

I'm using an EC2 instance to run a large job that I estimate will take approx 24 hours to complete. I get the same issue described here: ssh broken pipe ec2
I followed the suggestions/solutions in the above post, and in my SSH session shell I launched my Python program with the following command:
nohup python myapplication.py > myprogram.out 2>myprogram.err
Once I did this, the connection remained intact longer than when I didn't use nohup, but it eventually fails with a broken pipe error and I'm back to square one. The process python myapplication.py is terminated as a result.
Any ideas on what is happening and what I can do to prevent this from occurring?
You should try screen.
Install
Ubuntu:
apt-get install screen
CentOS:
yum install screen
Usage
Start a new screen session:
$ screen
List all the screen sessions you have created:
$ screen -ls
There is a screen on:
        23340.pts-0.2yourserver (Detached)
1 Socket in /var/run/screen/S-root.
Then restore your screen by its id:
$ screen -R 23340
(in general: $ screen -R <screen-id>)
A simple solution is to send the process to the background by appending an ampersand & to your command:
nohup python myapplication.py > myprogram.out 2>myprogram.err &
The process will continue to run even if you close your SSH session. You can always check progress by grabbing the tail of your output files:
tail -n 20 myprogram.out
tail -n 20 myprogram.err
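The whole pattern can be tried out locally with a stand-in command (a sketch; a short sh -c script takes the place of myapplication.py):

```shell
# background the command with nohup, capturing stdout and stderr separately
nohup sh -c 'echo started; echo finished' > myprogram.out 2> myprogram.err &
wait $!                      # in real use you would just log out instead of waiting
tail -n 20 myprogram.out     # shows the two lines the "application" printed
rm -f myprogram.out myprogram.err
```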
I actually ended up fixing this accidentally with a router configuration change: allowing all ICMP packets. I had allowed all ICMP packets to diagnose a strange issue with some websites randomly loading slowly, and I noticed that none of my SSH terminals died anymore.
I'm using a Ubiquiti EdgeRouter 4, so I followed this guide here https://community.ubnt.com/t5/EdgeRouter/EdgeRouter-GUI-Tutorial-Allow-ICMP-ping/td-p/1495130
Of course you'll have to follow your own router's unique instructions to allow ICMP through the firewall.
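A related, more portable way to stop idle SSH sessions from being dropped (assuming, as in the answer above, that some firewall or router in the path is timing out the connection) is to enable client-side keepalives in ~/.ssh/config:

```
Host *
    ServerAliveInterval 60
    ServerAliveCountMax 3
```

With this, the client sends an application-level keepalive probe every 60 seconds and gives up after 3 unanswered probes; ServerAliveInterval and ServerAliveCountMax are standard ssh_config options.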

vagrant ssh -c and keeping a background process running after connection closed

I am writing a script to start and background a process inside a Vagrant machine. It seems like every time the script ends and the SSH session closes, the background process also dies.
Here's the command I am running:
vagrant ssh -c "cd /vagrant/src; nohup python hello.py > hello.out 2>&1 &"
hello.py is actually just a Flask development server. If I log in over SSH interactively and run the nohup command manually, the server continues to run after I close the session. However, if I run it via vagrant ssh -c, it's as if the command never ran at all (i.e. no hello.out file is created). What is the difference between running it manually and through vagrant ssh -c, and how do I fix it so that it works?
I faced the same problem when trying to run a Django application as a daemon. I don't know why, but appending a "sleep 1" works for me:
vagrant ssh -c "nohup python manage.py runserver & sleep 1"
Running nohup inside the ssh command did not work for me when running Wireshark. This did:
nohup vagrant ssh -c "wireshark" &

Use SSH to start a background process on a remote server, and exit session

I am using SSH to start a background process on a remote server. This is what I have at the moment:
ssh remote_user@server.com "nohup process &"
This works, in that the process does start. But the SSH session itself does not end until I hit Ctrl-C.
When I hit Ctrl-C, the remote process continues to run in the background.
I would like to place the ssh command in a script that I can run locally, so I would like the ssh session to exit automatically once the remote process has started.
Is there a way to make this happen?
The "-f" option tells ssh to run the remote command in the background and return immediately. E.g.:
ssh -f user@host "echo foo; sleep 5; echo bar"
If you type the above, you will get your shell prompt back immediately; you will then see "foo" printed. Five seconds later you will see "bar". In the meantime, you could have been using the shell.
When using nohup, make sure you also redirect stdin, stdout and stderr:
ssh user@server 'DISPLAY=:0 nohup xeyes < /dev/null > std.out 2> std.err &'
This way you are completely detached from the remote process. Be careful with ssh -f user@host ..., since that only puts the ssh process into the background on the calling side. You can verify this by running ps -aux | grep ssh on the calling machine: it will show that the ssh call is still active, just put in the background.
In my example above I use DISPLAY=:0 because xeyes is an X11 program and I want it started on the remote machine.
You could use screen: run your process in a screen session, detach from it (Ctrl-a then d, or the :detach command), and exit your current session without a problem. Then you can reconnect over SSH and reattach to the screen to continue your task or check whether it has finished.
Or you can send the command to an already running screen. Your local script would look like this:
ssh remote_user@server.com
screen -dmS new_screen sh
screen -S new_screen -p 0 -X stuff $'nohup process \n'
exit
For more info see this tutorial
Well, this question is almost 10 years old, but I recently had to launch a very long script (taking several hours to complete) on a remote server and I found a way using the crontab.
If you can edit your user's crontab on the remote server: connect to the server with ssh, edit the crontab, and add an entry that will start your script the next minute. Let's say it's 15:03. Add this line:
4 15 * * * /path/to/your/script.sh
Save your crontab and wait a minute for the script to be launched. Then edit your crontab again to remove this entry.
You can then safely exit ssh, and even shut down your computer, while the script is running.
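Building that one-shot crontab line by hand is error-prone around hour boundaries; here is a small sketch that computes it (GNU date assumed, as on most Linux servers):

```shell
# print a crontab entry that fires one minute from now
when=$(date -d '+1 minute' '+%M %H')
echo "$when * * * /path/to/your/script.sh"
```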
