Terminal CLI in Node.js shutting down unexpectedly for no reason - node.js

I created an algorithm in Node.js, run from the CLI/terminal, that receives a list of values as input and automates the process of checking each of them against an API. So far so good; the problem is that it often stops partway through and closes without any error or explanation. I see the problem more on the VPS than on my own machine.
I forgot to mention that this work is done in parallel: the entries are split into arrays and processed simultaneously, x at a time.
I thought it could be a memory leak, since the process is fairly heavy, but I have already tested with a smaller workload and the result is the same, and if it were a memory problem I would expect some warning. Worst of all, it only happens sometimes. Has anyone had a similar problem and can tell me how it was solved?
There is no code in the algorithm that closes the program except after the loop finishes, but that part wasn't even there before and the problem still happened.
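For reference, below is a minimal sketch (function and variable names are illustrative, not from the original code) of two things that usually help with this kind of silent exit: top-level crash handlers so the process at least logs why it died, and a bounded-concurrency loop so the parallel API checks cannot exhaust memory or sockets.

// Sketch only: crash visibility plus bounded parallelism for a Node.js CLI.
// checkValue() stands in for the real API call and is assumed, not real code.
process.on('uncaughtException', (err) => {
  console.error('uncaughtException:', err);
  process.exit(1);
});
process.on('unhandledRejection', (reason) => {
  console.error('unhandledRejection:', reason);
  process.exit(1);
});
process.on('exit', (code) => console.error('process exiting with code', code));

async function runInBatches(values, limit, checkValue) {
  const results = [];
  for (let i = 0; i < values.length; i += limit) {
    const batch = values.slice(i, i + limit);
    // allSettled keeps one failed request from rejecting the whole batch
    results.push(...(await Promise.allSettled(batch.map((v) => checkValue(v)))));
  }
  return results;
}

On a VPS it is also worth checking dmesg for the kernel OOM killer, which terminates a process without any Node-level error being printed.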

Related

replace a process bin file when it is running

I have a server program (compiled with g++) that is running. I changed some code and compiled a new binary. Without killing the running process, I used mv to overwrite the old binary with the newly created one.
After a while, the server process crashed. Is this related to my replacement?
My server is a multi-threaded, highly concurrent server. One crash was a segfault, the other a deadlock.
I printed all the parameters from the core dump file and passed exactly the same values to the function that crashed, but it ran fine.
I also carefully examined all the thread information in the deadlock core dump, and I cannot find anything that could cause a deadlock.
So I suspect the replacement caused the strange behaviour.
According to this question, if such a swap does happen, it can indeed produce strange behaviour.
For a simple standard program, even if the binary is currently open in the running process, moving a new file over it will first unlink the original file, which otherwise remains untouched.
But for long-running servers, many things can happen: some fork new processes, and occasionally some can even exec a fresh version of themselves. In that case, you could have different versions running side by side, which may or may not be supported depending on the change.
Said differently, without more info on what the server program is, how it is designed to run, and what the change was, the only answer I can give is maybe.
If you can make sure that you removed ONLY the bin file, and that the bin file isn't used by any other process (such as some daemon), then the crash is not related to your replacement.
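As a side note on the mv semantics discussed above, the snippet below (written in Node.js only because the rest of this page is; the paths are made up) shows the usual safe pattern: stage the new build next to the old one and rename it into place. rename(2) swaps the directory entry atomically and leaves the old inode alive for the process already executing it, whereas writing into the existing path would modify the inode in place (and on Linux is typically refused with ETXTBSY for a running executable).

// Sketch only: atomically replace a binary that may currently be running.
// Paths are hypothetical.
const fs = require('fs');

const target = '/opt/app/bin/server';       // binary the running process was started from
const staging = target + '.new';            // must live on the same filesystem as target

fs.copyFileSync('./build/server', staging); // stage the new build next to the old one
fs.chmodSync(staging, 0o755);
fs.renameSync(staging, target);             // atomic swap; the old inode stays valid
                                            // for the process that already mapped it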

Kernel died, restarting in the middle of my simulation (spyder)

I am using the Spyder interface with a Python script that works through many time steps in succession. At a random point in my code, the process terminates and the console says "kernel died, restarting...". I tried running the same script in PyCharm and the process also terminates, seemingly at a random point, with some exit code which I assume means the same thing.
Anyone have any tips on how to get rid of this problem? Even a workaround so I can get some work done. This is incredibly frustrating.
Note: I recently moved and got a new router and internet service, not sure if that might affect things.

Run multiple copies of Speedy or PersistentPerl to be called from Tomcat

I have a modern webapp running under Tomcat, which often needs to call some legacy perl code to get some results. Right now, we wrap these in a call to Runtime.getRuntime().exec() which is working fine.
However, as the webapp gets busier we are noticing that the Perl calls often time out, and we need to get this under control.
I am using commons-pool to ensure that only X number of copies can be run at a time, and threads will queue up nicely for a perl instance when they need one, timing out after Y seconds and returning an error (this is fine, the client will just retry).
However we still have the problem that Perl takes a long time to start up, interpret the script, execute and return. At busy times we are doing this 30-50 times per second. It's a beefy machine but it's starting to struggle.
I have read up on Speedy and PersistentPerl and am considering holding open a copy of this in memory for each object in my pool, so that we do not need to open and close the Perl each time.
Is this a good idea? Any tips for how to go about doing this?
Those approaches should reduce the overhead from the startup time of your script. If the script is something that can be run as a CGI program, then you might be better off making it work with Plack and running it with a PSGI server. Your Tomcat application could collect and send the request parameters to your script and/or "web application" running in the background.

Open MPI Virtual Timer Expired

I'm using Open MPI 1.8 on Gentoo 3.13 to manage the data transfer from one program to another via a server/client concept. Both the server and the clients are launched via mpiexec as separate processes. After some days (this is quite a heavy computation...), I sometimes receive the error
mpiexec noticed that process rank 0 with PID 17213 on node XXX exited on signal 26 (Virtual timer expired).
Unfortunately, the error is not reproducible in a reliable way, i.e., the error does not appear always and not always at the same point in the program flow. I also experienced this error on other machines. I already tracked the issue down to the ITIMER_VIRTUAL which, upon expiration, delivers SIGVTALRM (see, e.g., http://man7.org/linux/man-pages/man2/setitimer.2.html). In the BUGS section of the man page, it says that
Under very heavy loading, an ITIMER_REAL timer may expire before the signal from a previous expiration has been delivered. The second signal in such an event will be lost.
I wonder if something similar might also hold for ITIMER_VIRTUAL? Has anyone experienced similar problems and can confirm this behaviour?
The only workaround I can think of is to invoke setitimer(...) and try to manipulate the timer myself. However, I hope there is another way since I can't always modify the clients' source code. Any suggestions?
Since this question has not been answered officially, I will do it on behalf of Hristo (@HristoIliev: I hope this is OK with you). As was pointed out in the first comment to my question, there is not a single hint in the Open MPI source code that could have caused the virtual timer expiration. Indeed, the timer problem was related to a third-party library, which made the code crash after an unpredictable time (depending on the current load on the machine).

How to pause a php script launched with crontab?

I have a PHP scraper running every night against a very large site. Cron launches the script at 2am and pkills it at 7am. Now I am concerned that brutally killing the script might result in data loss. Let's say cron kills the script while it is busy writing my scraped data into the database; then the next day the database will refuse that last/first record because it is already present (even if incomplete).
Is there any way I can freeze the script with crontab? (That is, without adding a sleep() to my script)
Let's say cron kills the script while it is busy writing my scraped data into the database
That would be a problem, since you may run into transaction timeouts or similar issues if you stop the process externally. A better way would be to let the script halt/pause on its own. You could, for example, define a marker file that the script checks periodically, so that it can halt/pause in a controlled way.
Having one large cronjob that can't be interrupted is usually a sign of bad design for a number of reasons.
Most notably, you can't interrupt the run for any reason whatsoever without ending up with corrupted data. This can become a big problem if you have an unexpected power loss or server crash.
Also, it doesn't scale. If you need to process more data, you can't spread the work across multiple servers. If you have run times of a few hours now, you may end up exhausting an entire server very soon.
I would recommend seriously rethinking the functionality of this cronjob and restructuring it so you have a number of smaller tasks that are queued up somewhere. (It can even be the database.) You could then mask the SIGINT and SIGTERM signals while processing a single task and check for received signals in between tasks. This allows you to notify the process with either of those signals and have it shut down gracefully, as sketched after this answer.
That being said, things do break and servers do crash. I also urge you to work out plans for data recovery in case the cronjob breaks down while working on something.
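To make the "finish the current task, then stop" idea concrete, here is a small sketch of the pattern. It is written in Node.js purely for illustration and with made-up task functions; a PHP implementation would use pcntl_signal() and pcntl_signal_dispatch() to the same effect.

// Sketch only: react to SIGTERM/SIGINT between tasks, never in the middle of one.
let stopRequested = false;
process.on('SIGTERM', () => { stopRequested = true; });
process.on('SIGINT', () => { stopRequested = true; });

async function drainQueue(fetchNextTask, runTask) {
  while (!stopRequested) {              // the flag is only checked between tasks
    const task = await fetchNextTask(); // e.g. the next row from a job table
    if (!task) break;                   // queue is empty
    await runTask(task);                // a task in progress is never interrupted
  }
  console.log('stopping after the current task; state stays consistent');
}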
