Script exits after certain amount of time - node.js

I have this node.js Discord bot that is currently running in production. I run it locally on my Raspberry Pi (the bot doesn't get used a lot, so this is cheaper for me). Previously it worked fine, but now the script exits after some time without logging anything special. This is the command I run in the Raspbian terminal to start the script:
node KMAR.js > plb_log.log 2> plb_error.log &
I run the same script for another bot on the same RPi. Whenever I start them at the same time, they seem to crash at the same time as well, usually 2 weeks (sometimes 3 weeks) after I start them. I do make use of node-cron, but I only have something scheduled a few times a day, and it doesn't seem like that cron job could have anything to do with it.
With previous versions of the bot this wouldn't happen. However, I couldn't find differences between versions that would cause this behaviour.
If it could help, here's the repo of the project; older versions are included as well: https://github.com/jwsteens/plb
To summarize my problem: my code exits not quite randomly, but after a few weeks. I can't figure out why this happens, as I get no error messages and I can't find anything in my code that would lead to this issue.
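One way to at least capture how the process dies (a sketch I haven't tried yet; the wrapper name watch_kmar.sh and plb_exit.log are made up) would be to start the bot through a small wrapper that records the exit status, since a kill by signal shows up as 128 plus the signal number:
#!/bin/bash
# watch_kmar.sh -- run the bot once and record when and how it exited.
node KMAR.js > plb_log.log 2> plb_error.log
status=$?
echo "$(date -Is) KMAR.js exited with status $status" >> plb_exit.log
It would be started the same way as the original command, i.e. ./watch_kmar.sh &.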

You can write a script that automatically restarts the bot when it crashes or stops.
#!/bin/bash
# Restart the bot every time it exits, with a short pause in between.
while :
do
    yournodepath yourbotpath
    sleep 1
done
Save it as start.sh (or similar). Then, instead of running node ., make it executable and start it:
chmod +x start.sh
./start.sh
The loop will keep the bot running until you stop the script itself.
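If you want to keep the log redirection from the original node command, a variant like this also records each restart (a sketch; the /home/pi/plb path is an assumption, adjust it to wherever KMAR.js lives):
#!/bin/bash
# start.sh -- restart KMAR.js whenever it exits, appending to the existing logs.
cd /home/pi/plb || exit 1
while :
do
    node KMAR.js >> plb_log.log 2>> plb_error.log
    echo "$(date -Is) bot exited with status $?, restarting in 1s" >> plb_error.log
    sleep 1
done
Starting it with nohup ./start.sh & also keeps the loop alive if the terminal session that launched it is closed.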

Related

Crontab won't launch Node.js Discord bot on Ubuntu

I have a bot running on an Ubuntu machine which I'd like to autorun on boot, so I'm using crontab's @reboot schedule for it:
@reboot ~/Bot/run
It works perfectly for all other scripts I have running on boot on my machine, but for this specific script, every line is seemingly being executed correctly, except for the one involving the actual node . command, which launches my Discord bot. The script works perfectly fine when I run it myself from a terminal. Here's the script in question, run.sh, very simple and short in nature:
#!/bin/bash
cd ~/Bot
echo "Launching Discord Bot"
screen -dmS BOT node .
To sum it up, I first enter the correct working directory, send a basic echo, then create a new detached screen session called BOT, to which I pass the node . command, where the period . refers to the index.js file found in my bot's directory. And yes, I intend the screen session to close automatically if/when the bot's process stops.
As mentioned, the script seems to run as intended, but it fails to launch the Node.js process. I've tried adding other test lines to the script, including one that sends a message to the Discord server using a webhook, and it correctly sends the messages when placed both before and after the node . line.
I've been troubleshooting this for a little while, and I've tried adding a task as a system service in /etc/systemd/system instead of using Crontab, to no avail. I've also tried setting the cronjob to use a login shell with bash -lc to set the proper environment variables for Crontab, still to no avail. I'm all out of solutions, would anyone know how to solve this issue?
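For comparison, this is roughly what the setup looks like with absolute paths everywhere and the cron job's output captured, which at least makes any launch error visible in a log (a sketch; the /usr/bin locations, the user name and boot.log are assumptions, check the real paths with which node and which screen):
# crontab entry (crontab -e), with the job's output redirected to a log:
@reboot /home/youruser/Bot/run.sh >> /home/youruser/Bot/boot.log 2>&1
# run.sh, calling node and screen by absolute path because cron's PATH is minimal:
#!/bin/bash
cd /home/youruser/Bot || exit 1
echo "Launching Discord Bot"
/usr/bin/screen -dmS BOT /usr/bin/node .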

Make chosen version of Elasticsearch run as a service in Linux

I have an issue with later versions of ES, so I have to use 7.10.2 currently.
This means that the previous method I used to install ES as a service, i.e. apt-get, doesn't work: you can't choose an older version this way, and it currently installs 7.16.3.
So I followed the procedure on this page for 7.10, and everything worked: I was able to run ES as an app and also as a "daemon". Clearly I could simply put the "daemon" startup line in a script which runs on boot.
But what's the optimum way of turning this "daemon arrangement" into a service which you can control with systemctl, and which starts automatically when the machine boots?
PS I don't want to get involved with Docker. I'm sure that's a useful thing but I'm convinced there is a simpler way of doing it, using available Linux sys tools.
I found a workaround... this doesn't in fact create a service of the "systemd" type which can be controlled by systemctl. There seem to be one or two problems which make this non-trivial.
1) You can't start ES as root! I assume (not sure) that most services are being run by root. Anyway this was something I couldn't find a solution to.
2) I am not sure whether a shell script called by a service is allowed to end, or should continue endlessly. Initially I thought the following would be sufficient. This shell script (run_es_daemon.sh) does indeed start up ES (as a daemon process) when run manually in a terminal. There is no issue with the fact that the script ends and you then close the terminal: the daemon process continues to run:
#!/bin/bash
# start ES as a daemon...
cd /home/mike/Elasticsearch/elasticsearch-7.10.2
./bin/elasticsearch -d -p pid
... but it never worked using a xxx.service file in /etc/systemd/system/ (maybe because of 1) above). So I also tried adding these lines under the above ones:
while true
do
echo "bubbles"
sleep 60
done
... didn't work either.
In the end I found a simple workaround solution was to start up the daemon process by using crontab:
@reboot /home/mike/sysadmin/run_es_daemon.sh
... but I'd still like to know how to set it up as a true service, which starts at boot...
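For what it's worth, a systemd unit does not have to run as root: the User= directive covers point 1), and Type=forking matches the -d/-p daemon startup used above. A minimal sketch (the unit name elasticsearch-local and the user mike are assumptions taken from the paths in the script; untested):
sudo tee /etc/systemd/system/elasticsearch-local.service > /dev/null <<'EOF'
[Unit]
Description=Elasticsearch 7.10.2 (manual install)
After=network.target

[Service]
Type=forking
User=mike
WorkingDirectory=/home/mike/Elasticsearch/elasticsearch-7.10.2
ExecStart=/home/mike/Elasticsearch/elasticsearch-7.10.2/bin/elasticsearch -d -p /home/mike/Elasticsearch/elasticsearch-7.10.2/es.pid
PIDFile=/home/mike/Elasticsearch/elasticsearch-7.10.2/es.pid
LimitNOFILE=65535
TimeoutStartSec=180

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now elasticsearch-local.service
After that, systemctl status elasticsearch-local and journalctl -u elasticsearch-local should show the usual service output, and the unit starts automatically at boot.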

Can't run external program from a udev-started script

I'm trying to set up an automatic PDF viewer via a Raspberry Pi.
The problem I'm facing is that the script I start from a udev rule does everything I want, except starting any external program. When I run both scripts from a terminal, everything works fine as well: xpdf is launched with no problems.
This is how my scripts look right now:
startpdf (successfully executed from a udev rule)
#!/bin/bash
/usr/local/script/pdfscript &
exit 0
pdfscript
#!/bin/bash
mkdir -p /media/usb/stick
sudo mount /dev/usbstick /media/usb/stick
/usr/bin/logger Testing the Script
sudo mkdir -p /usr/local/script/testfolder
/usr/bin/xpdf
Everything else is working fine: the testfolder is created and the logger entry shows up as well. The reason for having two scripts is the short amount of time before a udev-started script is terminated.
The only problem is that xpdf won't start. I tried it with libreoffice and other programs too, and I don't know what I am missing.
Please help me, it's driving me nuts :(
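In case someone hits the same symptom: a udev-started script runs outside any X session, so a GUI program such as xpdf has no display to open. A sketch of pdfscript with the display variables set explicitly (the user pi, display :0 and the PDF path are assumptions, adjust them to your setup):
#!/bin/bash
# udev runs this as root outside the X session, so point xpdf at the running display.
export DISPLAY=:0
export XAUTHORITY=/home/pi/.Xauthority
mkdir -p /media/usb/stick
mount /dev/usbstick /media/usb/stick
/usr/bin/logger "Testing the Script"
mkdir -p /usr/local/script/testfolder
/usr/bin/xpdf /media/usb/stick/document.pdf &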

Unable to open Instruments after repeatedly seeing (RunLoop::Xcrun::TimeoutError) error

I've posted this issue
but believe I'm now running into a new one. We have automated tests that run every 15 minutes on a Jenkins server. While I'm still seeing the run_loop error listed in the link above, approximately once per hour I'm now seeing the following error in the console's output
Xcrun timed out after 3.64 seconds executing
xcrun instruments -s templates
with a timeout of 30
(RunLoop::Xcrun::TimeoutError)
When I see this and try to open Instruments, it says "Instruments cannot be opened at this time" and the only solution I've found so far is to reboot the server. This is problematic because there are several jobs running on this server at once and rebooting the machine every hour is not ideal. After rebooting the machine, Instruments is able to be opened and the tests run successfully for about another hour.
I can provide any further information necessary, just not sure where to go from here since I don't see much about this issue online.
Edit: My apologies, the missing information is:
Xcode: 7.1.1
MacOS: 10.10.5
Calabash-Cucumber: 0.17.0
I have experienced this as well on our Jenkins CI machine running El Cap and Xcode 7.2. The CoreSimulator and instruments environment becomes unstable rather quickly.
Before running your tests, try:
# From Ruby
RunLoop::CoreSimulator.terminate_core_simulator_processes
# From the command line
$ bundle exec run-loop simctl manage-processes
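On a Jenkins box, one simple way to apply this (an assumption about the job layout, not something from the original answer) is to make the cleanup the first line of the job's shell build step:
# First lines of the Jenkins "Execute shell" build step (sketch):
bundle exec run-loop simctl manage-processes
bundle exec cucumber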

How to set up a bash script to run in the background anytime the Linux Ubuntu server is running

I have written up a simple bash script that will copy the newest image from my ip camera into a directory, rename the file and delete the old file. The script loops every 10 seconds.
I want to have this script start running in the background and run continuously all the time that the server is up.
I understand that adding a & to the end of the command will cause it to run in the background.
Is init.d the best place to execute this?
I am running ubuntu server.
This sort of thing is normally done by service scripts, which you would find under /etc/init.d. Depending on the version, that might be a "System V init script", or one of the systemd scripts.
A simple service script of the sort you are asking about would start automatically (based on comments in the script's header that tell what run-levels it would use), create a file under /var/run telling what process-id the script uses (to allow killing it), and run the copying in a loop, calling sleep 10 to space the timing as indicated.
A typical service script should implement "start", "stop", "restart" and "status". Not all do, but there is rarely a good reason to not do this.
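A stripped-down sketch of such a script (the camera-copy name and the copy_newest_image placeholder are made up; substitute your actual copy, rename and delete commands):
#!/bin/bash
### BEGIN INIT INFO
# Provides:          camera-copy
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Copy the newest IP camera image every 10 seconds
### END INIT INFO

PIDFILE=/var/run/camera-copy.pid

copy_loop() {
    while true; do
        copy_newest_image          # placeholder for your copy/rename/delete commands
        sleep 10
    done
}

case "$1" in
    start)
        copy_loop &
        echo $! > "$PIDFILE"
        ;;
    stop)
        kill "$(cat "$PIDFILE")" 2>/dev/null
        rm -f "$PIDFILE"
        ;;
    restart)
        "$0" stop
        "$0" start
        ;;
    status)
        kill -0 "$(cat "$PIDFILE")" 2>/dev/null && echo "running" || echo "stopped"
        ;;
    *)
        echo "Usage: $0 {start|stop|restart|status}"
        exit 1
        ;;
esac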
On my (Debian) system, there is a README file in that directory which is a good introduction to the topic. Several tutorials are also available; here are a few:
Linux: How to write a System V init script to start, stop, and restart my own application or service
Writing a Linux Startup Script
Manage System Startup and Boot Processes on Linux with Upstart
