nodejs setTimeout loop stopped after many weeks of iterations - node.js

Solved: it's a Node.js bug that happens after ~25 days (2^31 milliseconds). See the answer below for details.
Is there a maximum number of iterations of this cycle?
function do_every_x_seconds() {
    // Do some basic things to get x
    setTimeout(do_every_x_seconds, 1000 * x);
}
As I understand it, this is considered a "best practice" way of getting things to run periodically, so I very much doubt it.
I'm running an Express server on Ubuntu with a number of timeout loops:
One loop that runs every second and basically prints the timestamp.
One that makes an external HTTP request every 5 seconds.
And one that runs every X seconds, where X is between 30 and 300.
It all seemed to work well enough. However, after 25 days without any usage, and several million iterations later, the node instance is still up, but all three of the setTimeout loops have stopped. No error messages are reported at all.
Even stranger, the Express server is still up, and I can load HTTP routes that print to the same console where the periodic timestamp was being printed.
I'm not sure if it's related, but I also run Node.js with the --expose-gc flag, perform periodic garbage collection, and monitor that memory stays within acceptable ranges.
It is a development server, so I have left the instance up in case there is some advice on what I can do to look further into the issue.
Could it be that the event loop somehow dropped all its timers?

I have a similar problem with setInterval().
I think it may be caused by the following bug in Node.js, which seems to have been fixed recently: setInterval callback function unexpected halt #22149
Update: it seems the fix has been released in Node.js 10.9.0.

I think the problem is that you are relying on setTimeout to be active over days. setTimeout is great for periodic running of functions, but I don't think you should trust it over extended time periods. Consider this question: can setInterval drift over time? and one of its linked issues: setInterval interval includes duration of callback #7346.
If you need to have things happen intermittently at particular times, a better way to attack this would be to schedule cron jobs that perform the work instead. They are more resilient, and failures are recorded at the system level in the journal rather than from within the node process.
A good related answer/question is Node.js setTimeout for 24 hours - any caveats? which mentions using the npm package cron to do task scheduling.
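One caveat often raised around that linked question is that a single setTimeout delay cannot safely approach the 32-bit signed limit of 2^31 - 1 ms (roughly 24.8 days). If you do stay with in-process timers for long waits, a common defensive pattern, sketched here with made-up names (scheduleAt, MAX_CHUNK), is to chain shorter timeouts until the target time arrives:

const MAX_CHUNK = 24 * 60 * 60 * 1000; // wake up at most once a day, well below 2^31 - 1 ms

function scheduleAt(targetTime, callback) {
    const remaining = targetTime - Date.now();
    if (remaining <= 0) {
        callback();
        return;
    }
    // Sleep for at most MAX_CHUNK, then recompute how much time is left.
    setTimeout(() => scheduleAt(targetTime, callback), Math.min(remaining, MAX_CHUNK));
}

// Example: run something roughly 90 days from now.
scheduleAt(Date.now() + 90 * 24 * 60 * 60 * 1000, () => console.log('fired'));

Cron (system cron or the npm package) remains the more robust option for anything that must survive a restart.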

Related

Why is Python consistently struggling to keep up with constant generation of asyncio tasks?

I have a Python project with a server that distributes work to one or more clients. Each client is given a number of assignments which contain parameters for querying a target API. This includes a maximum number of requests per second they can make with a given API key. The clients process the response and send the results back to the server to store into a database.
Both the server and clients use Tornado for asynchronous networking. My initial implementation for the clients relied on the PeriodicCallback to ensure that n-number of calls to the API would occur. I thought that this was working properly as my tests would last 1-2 minutes.
I added some telemetry to collect statistics on performance and noticed that the clients were actually having issues after almost exactly 2 minutes of runtime. I had set the API requests to 20 per second (the maximum allowed by the API itself), which the clients could reliably hit. However, after 2 minutes performance would fluctuate between 12 and 18 requests per second. The number of active tasks steadily increased until it hit the maximum number of active assignments (100) given by the server, and the HTTP request time to the API was reported by Tornado to go from 0.2-0.5 seconds to 6-10 seconds. Performance is steady if I only do 14 requests per second; anything higher than 15 requests per second runs into issues 2-3 minutes after starting. Logs can be seen here. Notice how the "Active Queries" column is steady until 01:19:26 (I've truncated the log to demonstrate this).
I believed the issue was the use of a single process on the client to handle both communication to the server and the API. I proceeded to split the primary process into several different processes. One handles all communication to the server, one (or more) handles queries to the API, another processes API responses into a flattened class, and finally a multiprocessing Manager for Queues. The performance issues were still present.
I thought that, perhaps, Tornado was the bottleneck and decided to refactor. I chose aiohttp and uvloop. I split the primary process in a similar manner to that in the previous attempt. Unfortunately, performance issues are unchanged.
I took both refactors and enabled them to split work into several querying processes. However, no matter how much you split the work, you still encounter problems after 2-3 minutes.
I am using both Python 3.7 and 3.8 on MacOS and Linux.
At this point, it does not appear to be a limitation of a single package. I've thought about the following:
Python's asyncio library cannot handle more than 15 coroutines/tasks being generated per second
I doubt that this is true given that different libraries claim to be able to handle several thousand messages per second simultaneously. Also, we can hit 20 requests per second just fine at the start with very consistent results.
The API is unable to handle more than 15 requests from a single client IP
This is unlikely as I am not the only user of the API and I can request 20 times per second fairly consistently over an extended period of time if I over-subscribe processes to query from the API.
There is a system configuration causing the limitation
I've tried both MacOS and Debian, which yield the same results. It's possible that it's a *nix problem.
Variations in responses cause a backlog which grows linearly until it cannot be tackled fast enough
Sometimes responses from the API grow and shrink between 0.2 and 1.2 seconds. The number of active tasks returned by asyncio.all_tasks remains consistent in the telemetry data. If this were true, we wouldn't be consistently encountering the issue at the same time every time.
We're overtaxing the hardware with the number of tasks generated per second and causing thermal throttling
Although CPU temperatures spike, neither MacOS nor Linux report any thermal throttling in the logs. We are not hitting more than 80% CPU utilization on a single core.
At this point, I'm not sure what's causing it and have considered refactoring the clients into a different language (perhaps C++ with Boost libraries). Before I dive into something so foolish, I wanted to ask if I'm missing something simple.
Conclusion
Performance appears to vary wildly depending on time of day. It's likely to be the API.
How this conclusion was made
I created a new project to demonstrate the capabilities of asyncio and determine if it's the bottleneck. This project takes two websites, one to act as the baseline and the other is the target API, and runs through different methods of testing:
Spawn one process per core, pass a semaphore, and query up to n-times per second
Create a single event loop and create n-number of tasks per second
Create multiple processes with an event loop each to distribute the work, with each loop performing (n-number / processes) tasks per second
(Note that spawning processes is incredibly slow and often commented out unless using high-end desktop processors with 12 or more cores)
The baseline website would be queried up to 50 times per second. asyncio could complete 30 tasks per second reliably for an extended period, with each task completing their run in 0.01 to 0.02 seconds. Responses were very consistent.
The target website would be queried up to 20 times per second. Sometimes asyncio would struggle despite circumstances being identical (JSON handling, dumping response data to queue, returning immediately, no CPU-bound processing). However, results varied between tests and could not always be reproduced. Responses would be under 0.4 seconds initially but quickly increase to 4-10 seconds per request. 10-20 requests would return as complete per second.
As an alternative method, I chose a parent URI for the target website. This URI wouldn't require a large query against their database but would instead be served back as a static JSON response. Responses bounced between 0.06 seconds and 2.5-4.5 seconds. However, 30-40 responses would be completed per second.
Splitting requests across processes with their own event loop would decrease response time in the upper-bound range by almost half, but still took more than one second each to complete.
The inability to reproduce consistent results every time from the target website would indicate that it's a performance issue on their end.

What is a better way to make a longer delay inside a series of tasks?

I'm trying to build a workflow system that will process a series of tasks and delays. A delay can be changed or removed from a running workflow.
What is a better way to make a long delay (like 3-4 months) inside a series of tasks? Right now two approaches come to mind:
Pre-calculating and saving the delay's end time, then setting up a scheduler that checks the delays repeatedly at a specific interval (maybe 1 minute). This makes a lot of database queries, but a delay can be changed instantly.
Scheduling a job for each delay. This reduces the database queries a lot, but the problem is maintaining and changing the delay in these long-running jobs. Also, these jobs need to survive a server crash or restart.
Right now I'm not sure how to do this in a better way and am still studying it. If anyone has similar experience, please share.
You can store the tasks in the database, like:
{
    _id: String,
    status: Enum,
    executionTime: timestamp,
}
When you declare a new task, push a new entry into the DB.
At server start, or when a new task is declared, create a setTimeout that will wake your Node.js process up when necessary.
Optimization
To avoid having X setTimeouts, with X being the number of tasks to execute, keep only one setTimeout, with a wait time equal to the time until the closest task to execute.
For example, say you have three tasks: one must run in 1 hour, one in 2 hours and one in 3 hours. Use a single setTimeout of 1 hour. When it gets triggered, it executes task 1 and then looks at the remaining tasks to schedule the next run.
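A rough sketch of that single-timer idea (loadPendingTasks and runTask are hypothetical helpers standing in for your real DB access and task execution):

let currentTimer = null;

async function scheduleNext() {
    const tasks = await loadPendingTasks();               // e.g. all rows with a pending status
    if (tasks.length === 0) return;                       // nothing to do until a new task arrives

    // Pick the task whose executionTime is closest.
    const next = tasks.reduce((a, b) => (a.executionTime < b.executionTime ? a : b));
    const delay = Math.max(0, next.executionTime - Date.now());

    clearTimeout(currentTimer);                           // only ever keep one pending timer
    currentTimer = setTimeout(async () => {
        await runTask(next);                              // execute and mark as done in the DB
        scheduleNext();                                    // re-arm for the new closest task
    }, delay);
}

Call scheduleNext() at server start and again whenever a task is created, edited or deleted, so the single timer always points at the closest task.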

How to call a function every n milliseconds in "real world" time exactly?

If I understand correctly, setInterval(() => console.log('hello world'), 1000) will place the function in a queue of tasks to run. But if there are other tasks in front of it, it won't run at exactly 1000 milliseconds every time.
In a single complex program, is it also possible to call some function every n milliseconds, exactly in real-world time, with node.js?
If I understand correctly, setInterval(() => console.log('hello world'), 1000) will place the function in a queue of tasks to run. But if there are other tasks in front of it, it won't run at exactly 1000 milliseconds every time.
That is correct. It won't run exactly at the desired time if node.js happens to be busy doing something else when the timer is ready to run. node.js will wait until it finishes its other task before running the timer callback. You can think of node.js as having a one-track mind (it can only do one thing at a time), and timers don't ever interrupt existing tasks that are running.
In a single complex program, is it also possible to call some function every n milliseconds, exactly in real-world time, with node.js?
No, it is not possible to do that in node.js. node.js runs your Javascript single-threaded; it's event-driven and non-preemptive. All of these mean that you cannot rely on code running at a precise real-world time.
What happens under the covers in node.js is that you set a timer for a specific time in the future. That timer is registered with the node.js event loop so that each time it gets through the event loop, it checks whether there are any pending timers. But it only gets through the event loop when other code that was running before the timer was ready to fire finishes running. Here's the sequence of events:
Run some code
Set timer for some time in the future (say time X)
Run some more code
Nothing to do for a while
Run some more code (while this code is running, time X passes - the time for your timer to run)
Previous block of code finishes running and control returns back to the node.js event loop at time X + n (some time after the timer X was supposed to fire).
Event loop checks to see if there are any pending timers. It finds a timer and calls its callback at time X + n.
So, the only way that your timer gets called at approximately time X is if node.js has nothing else to do at exactly time X. If your program is ever doing anything else, you can't guarantee that it will be free at exactly time X to run the timer exactly when you want it to run. node.js is NOT a real-time system in any way. Being single-threaded and non-pre-emptive means that a timer may have to wait for node.js to finish some other things before it gets to run, and thus there is no guarantee that the timer will run exactly on time. Instead, it will run no earlier than time X, whenever the interpreter is next free to return to the event loop (done running whatever else might have been running at the time). This could be close to time X or it could be a significant time after time X.
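A tiny demonstration of that sequence (the 300 ms busy-loop just stands in for "some more code" that happens to be running when the timer comes due):

const scheduled = Date.now();

// Ask for the callback 100 ms from now.
setTimeout(() => {
    console.log(`timer fired after ${Date.now() - scheduled} ms (asked for 100)`);
}, 100);

// Keep the single thread busy for ~300 ms. The timer cannot interrupt this loop;
// it only runs once control returns to the event loop, so it fires roughly
// 300 ms after it was set instead of 100.
while (Date.now() - scheduled < 300) { /* busy wait */ }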
If you really need something to run precisely at a specific time, then you likely need a pre-emptive system (not node.js) that is much more real-time than node.js is.
You could create a "work-around" in node.js by firing up another node.js process (you could use the child_process module) and starting a program in that other process that has nothing else to do except serve your timer and execute the code associated with that timer. Then, at least your timer won't be pre-empted by some other Javascript task that might be running and will get to run pretty close to the desired time. Keep in mind that even this work-around still isn't a true real-time system, but it might serve some purposes.
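A minimal sketch of that work-around with child_process.fork (the 'timer-child' argument is just an ad-hoc way of letting one file act as both parent and child):

const { fork } = require('child_process');

if (process.argv[2] === 'timer-child') {
    // Child process: nothing else runs here, so the interval is rarely delayed.
    setInterval(() => process.send(Date.now()), 1000);
} else {
    // Parent process: spawn the dedicated timer process and react to its ticks.
    const timer = fork(__filename, ['timer-child']);
    timer.on('message', (sentAt) => {
        console.log('tick sent at', sentAt, 'received at', Date.now());
    });
}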
Otherwise, you probably want to write this in a more real-time system language that has pre-emptive timers (probably even with thread priorities).
But if there are other tasks in front of it, it won't run at exactly 1000 milliseconds every time.
Your question is actually operating system specific, assuming the computer is running some (usual) operating system (like Windows, Android, Linux, MacOSX, etc...). I recommend reading Operating Systems: Three Easy Pieces to learn more.
In practice, your computer has many other processes managed by its operating system. Some of them might be running. Your computer might be in a situation where it is loaded enough by other processes to the point of not being able to run your tasks or threads exactly every second. Read about thrashing.
You might want to use some genuine real-time operating system. But then, node.js probably won't run on it.
How to call a function every n milliseconds in “real world” time exactly?
You cannot do that reliably, because your node.js process (which is actually single-threaded at the system-threads level; see pthreads(7) and jfriend00's answer) might not get enough resources from your OS. If other processes are loading your computer too much, node.js would be starved and won't be able to progress as you want; be also aware of possible priority inversions.
On Linux, see also sched(7), chrt(1) and renice(1).
I suggest making a cron job that will run every n seconds. If your program is complex and may take more time, then you can go with async.
npm install cron
var CronJob = require('cron').CronJob;
new CronJob('* * * * * *', function () {
    console.log('You will see this message every second');
    callYourFunc();
}, null, true, 'America/Los_Angeles');
For more read this link
Perhaps you could spawn a worker thread and block it while it’s waiting to do the work, in the way suggested by CertainPerformance in the comments. It may not be the most elegant way to do it but at least you can put the blocking logic aside so that it doesn’t affect the rest of the application.
Check out the example in the docs if you’re unfamiliar with the cluster module: https://nodejs.org/docs/latest-v10.x/api/cluster.html
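For what it's worth, a minimal sketch of the worker-thread idea (assuming Node 12+ with the worker_threads module; Atomics.wait is used only to block the otherwise-idle worker, and the names and interval are illustrative):

const { Worker, isMainThread, parentPort, workerData } = require('worker_threads');

if (isMainThread) {
    // Main thread: spawn the worker and react to each tick.
    const worker = new Worker(__filename, { workerData: { intervalMs: 1000 } });
    worker.on('message', (ts) => console.log('tick at', ts));
} else {
    // Worker thread: it has nothing else to do, so blocking it is acceptable.
    const { intervalMs } = workerData;
    const shared = new Int32Array(new SharedArrayBuffer(4));
    let next = Date.now() + intervalMs;
    while (true) {
        const remaining = next - Date.now();
        if (remaining > 0) Atomics.wait(shared, 0, 0, remaining); // block for the remaining time
        parentPort.postMessage(Date.now());
        next += intervalMs; // schedule relative to the ideal time to limit drift
    }
}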

Querying multiple sensors regularly using NodeJS

I need to fetch the values of about 200 sensors every 15 seconds or so. To fetch the values I simply need to make an HTTP call with basic authentication and parse the response. The catch is that these sensors might be on a slow connection, so I need to wait at least 5 seconds for one sensor (usually they respond a lot quicker, but there are always some that are slow or time out).
So right now I have the following setup for that:
There is a NodeJS process that is connected to my DB and knows all about the sensors. It checks regularly to see if there are new ones or if some got deleted. It spawns a child process for every sensor, restarts the child process if it dies, and kills it if the sensor gets deleted. The child process makes the HTTP call to its sensor with a 5-second timeout value and, if it receives the value, saves it to Redis. It also runs in an infinite loop with a 15-second setTimeout. And there is a third process that copies all the values from Redis to the main MySQL DB.
So that has been a working solution for half a year, but after a major system upgrade (from Ubuntu 14.04 to 18.04 and thus every package upgraded as well) it seems to leak some memory and I can't seem to figure out where.
After starting up, the processes together take about 1.5 GB of memory. But after a day or so this goes up to 3 GB and keeps climbing, and before running out of memory I need to kill all the node processes and restart the whole thing.
So now I am trying to figure out more efficient methods to achieve the same result (query around 200-300 URLs every 15 seconds and store the results in MySQL). At the moment I'm thinking of ditching Redis: the child processes would communicate with their master process, and the master process would write to MySQL directly. This way I don't need to load the Redis library into every child process, and that might save me some time.
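For illustration, one direction this could take is dropping the per-sensor child processes entirely and polling from a single process (a sketch only: SENSOR_URLS and saveToMySQL are placeholders, and it assumes Node 18+ where fetch and AbortSignal.timeout are built in):

const SENSOR_URLS = ['http://sensor-1.local/value', 'http://sensor-2.local/value']; // placeholder list

async function pollOnce() {
    const results = await Promise.allSettled(
        SENSOR_URLS.map((url) =>
            fetch(url, {
                signal: AbortSignal.timeout(5000), // give slow sensors 5 seconds, then abort
                headers: { Authorization: 'Basic ' + Buffer.from('user:pass').toString('base64') },
            }).then((res) => res.text())
        )
    );
    for (let i = 0; i < results.length; i++) {
        if (results[i].status === 'fulfilled') {
            await saveToMySQL(SENSOR_URLS[i], results[i].value); // hypothetical MySQL writer
        }
    }
}

setInterval(pollOnce, 15000); // start a new polling cycle every 15 seconds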
So I need ideas on how to reduce memory usage for that application (I'm limited to PHP and NodeJS, mainly because of my knowledge, so writing a native daemon might be out of the question)
Thanks!
The solution was easier than I thought: I had to rewrite the child process as a native bash script, and that brought the memory usage down to almost zero.

LoadRunner and the need for pacing

Running a single script with only two users as a single scenario without any pacing, just think time set to 3 seconds and random (50%-150%), I find that the web app server runs out of memory after 10 minutes every time (I have run the test several times, and it happens at the same point every time).
First I thought this was a memory leak in the application, but after some thought I figured it might have to do with the scenario design.
The entire script, which has just one action (including log in and log out within the only action block), takes about 50 seconds to run, and I have the default pacing setting "as soon as the previous iteration ends", not "with delay after the previous iteration ends" or "fixed/random intervals".
Could not using fixed/random intervals cause this "memory leak" to happen? I guess none of the settings mentioned would actually start a new iteration before the previous one ends, which would obviously lead to an accumulation of memory on the server resulting in this "memory leak". But with no pacing set, is there a risk of this happening?
And with no iterations in my script, could I still be using pacing?
To answer your last question: NO.
Pacing is explicitly used when a new iteration starts. The iteration start is delayed according to pacing settings.
Speculation/Conclusions:
If the web-server really runs out of memory after 10 minutes, and you only have 2 VUs, you have a problem on the web-server side. One could manually produce this 2-VU load and crash the web-server. The pacing in the scripts, or manual user speeds, is irrelevant. If the web-server can be crashed remotely, it has bugs that need fixing.
Suggestion:
Try running the scenario with 4 users. Do you get OUT OF MEMORY on the web-server after 5 mins?
If there really is a leak, your script/scenario shouldn't be causing it, but I would think that you could potentially cause it to appear to be a problem sooner depending on how you run it.
For example, let's say with 5 users and reasonable pacing and think times, the server doesn't die for 16 hours. But with 50 users it dies in 2 hours. You haven't caused the problem, just exposed it sooner.
I think it's a web server problem. Pacing is nothing but a time gap between iterations; it doesn't affect the actions or transactions in your script.
