Querying multiple sensors regularly using NodeJS - node.js

I need to fetch the values of about 200 sensors every 15 seconds or so. To fetch a value I simply make an HTTP call with basic authentication and parse the response. The catch is that these sensors might be on a slow connection, so I need to wait at least 5 seconds for a sensor (usually they respond a lot quicker, but there are always some that are slow and time out).
So right now I have the following setup for that:
There is a NodeJS master process that is connected to my DB and knows all about the sensors. It checks regularly whether there are new sensors or some that got deleted. It spawns a child process for every sensor, restarts a child if it dies, and kills it if the sensor gets deleted. Each child process makes the HTTP call to its sensor with a 5-second timeout and, if it receives a value, saves it to Redis; it runs in an infinite loop with a 15-second setTimeout. A third process copies all the values from Redis to the main MySQL DB.
So that has been a working solution for half a year, but after a major system upgrade (from Ubuntu 14.04 to 18.04 and thus every package upgraded as well) it seems to leak some memory and I can't seem to figure out where.
Right after startup, the processes together take about 1.5 GB of memory. But after a day or so this grows to 3 GB and keeps climbing, and before running out of memory I have to kill all node processes and restart the whole thing.
So now I am trying to figure out more efficient methods to achieve the same result (query around 200-300 URLs every 15 sec and store the results in MySQL). At the moment I'm thinking of ditching Redis: the child processes would communicate with their master process and the master process would write to MySQL directly. This way I don't need to load the Redis library into every child process, and that might save me some time.
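For illustration only, here is a minimal sketch of an even simpler direction: a single process polls every sensor with a 5-second timeout and writes the result straight to MySQL, with no per-sensor child processes and no Redis. It assumes the axios and mysql2 packages and a hypothetical sensors list and readings table, so treat the names as placeholders rather than the actual code:

// Hypothetical single-process poller: one HTTP request per sensor every 15 s,
// each with a 5 s timeout, results written directly to MySQL.
const axios = require('axios');
const mysql = require('mysql2/promise');

const pool = mysql.createPool({ host: 'localhost', user: 'app', database: 'sensors' });
const sensors = []; // placeholder: { id, url, user, pass } records loaded from the DB

async function pollOne(sensor) {
  try {
    const res = await axios.get(sensor.url, {
      timeout: 5000, // give slow sensors up to 5 s
      auth: { username: sensor.user, password: sensor.pass },
    });
    await pool.query('INSERT INTO readings (sensor_id, value) VALUES (?, ?)',
      [sensor.id, String(res.data)]);
  } catch (err) {
    // timeouts and HTTP errors land here; log and move on to the next round
    console.error('sensor ' + sensor.id + ': ' + err.message);
  }
}

async function pollAll() {
  await Promise.all(sensors.map(pollOne)); // all requests in flight at once
  setTimeout(pollAll, 15000);              // schedule the next round
}

pollAll();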
So I need ideas on how to reduce the memory usage of this application. (I'm limited to PHP and NodeJS, mainly because of my knowledge, so writing a native daemon might be out of the question.)
Thanks!

The solution was easier than I thought: I rewrote the child process as a native bash script, and that brought the memory usage down to almost zero.

Related

nodejs setTimout loop stopped after many weeks of iterations

Solved: it's a Node bug that happens after ~25 days (2^31 milliseconds); see the answer for details.
Is there a maximum number of iterations of this cycle?
function do_every_x_seconds() {
    // Do some basic things to get x
    setTimeout(do_every_x_seconds, 1000 * x);
}
As I understand, this is considered a "best practice" way of getting things to run periodically, so I very much doubt it.
I'm running an express server on Ubuntu with a number of timeout loops:
One loop that runs every second and basically prints the timestamp.
One that makes an external HTTP request every 5 seconds.
And one that runs every X seconds, where X is between 30 and 300.
It all seems to work well enough. However, after 25 days without any usage, and several million iterations later, the node instance is still up, but all three of the setTimeout loops have stopped. No error messages are reported at all.
Even stranger is that the Express server is still up, and I can load HTTP pages, which print to the same console where the periodic timestamp was being printed.
I'm not sure if it's related, but I also run nodejs with the --expose-gc flag and perform periodic garbage collection to monitor that memory stays within acceptable ranges.
It is a development server, so I have left the instance up in case there is some advice on what I can do to look further into the issue.
Could it be that somehow the event loop dropped all its timers?
I have a similar problem with setInterval().
I think it may be caused by the following bug in Node.js, which seems to have been fixed recently: setInterval callback function unexpected halt #22149
Update: it seems the fix has been released in Node.js 10.9.0.
I think the problem is that you are relying on setTimeout to be active over days. setTimeout is great for periodic running of functions, but I don't think you should trust it over extended time periods. Consider this question: can setInterval drift over time? and one of its linked issues: setInterval interval includes duration of callback #7346.
If you need to have things happen intermittently at particular times, a better way to attack this would be to schedule cron tasks that perform the tasks instead. They are more resilient and failures are recorded at a system level in the journal rather than from within the node process.
A good related answer/question is Node.js setTimeout for 24 hours - any caveats? which mentions using the npm package cron to do task scheduling.
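As a rough sketch of that approach (using the cron npm package mentioned in the linked answer; the job body is just a placeholder for the real periodic work, such as the external HTTP request from the question):

const { CronJob } = require('cron');

// Fire every 5 seconds; the scheduler owns the timing instead of a
// long-lived setTimeout chain inside the process.
const job = new CronJob('*/5 * * * * *', () => {
  console.log('tick', new Date().toISOString());
});

job.start();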

How does Erlang sleep (at night?)

I want to run a small clean up process every few hours on an Erlang server.
I know of the timer module. I saw an example in a tutorial that used chained timer:sleep commands to wait for an event that would occur multiple days later, which I found strange. I understand that Erlang processes are unique compared to those in other languages, but the idea of a process/thread sleeping for days, weeks, and even months at a time seemed odd.
So I set out to find out the details of what sleeping actually does. The closest I found was a blog post mentioning that sleep is implemented with a receive timeout, but that still left the question:
What do these sleep/sleep-like functions actually do?
Is my process taking up resources as it sleeps? Would having thousands of sleeping processes use as many resources as, say, thousands of processes servicing a recursive call that did nothing? Is there any performance penalty from repeatedly sleeping within processes, or from sleeping for long periods of time? Is the VM constantly expending resources to check whether the conditions to end a process's sleep are met?
And as a side note, I'd appreciate it if someone could comment on whether there is a better way than sleeping to pause for hours or days at a time.
That is the Karma of any erlang process: it waits or dies :o)
When a process is spawned, it starts executing until the last execution line and dies, returning the last evaluation.
To keep a process alive, there is no solution other than to loop recursively in a never-ending succession of calls.
Of course there are several conditions that make it stop or sleep:
- End of the loop: the process received a message telling it to stop the recursion.
- A receive block: the process will wait until a message matching one entry in the receive block is posted in the message queue.
- The VM scheduler stops it temporarily to give other processes access to the CPU.
In the last two cases, execution will restart under the responsibility of the VM scheduler.
While waiting it uses no CPU bandwidth, but it keeps the exact same memory layout it had when it started waiting. Erlang/OTP offers some means to reduce this memory footprint to the minimum using the hibernate option (see the documentation of gen_server or gen_fsm, but in my mind it is for advanced usage only).
A simple way to create a "signal" that will fire a process at a regular (or almost regular) interval is effectively to use a receive block with a timeout (the timeout is limited to 65535 ms), for example:
on_tick_sec(Module,Function,Arglist,Period) ->
    on_tick(Module,Function,Arglist,1000,Period,0).
on_tick_mn(Module,Function,Arglist,Period) ->
    on_tick(Module,Function,Arglist,60000,Period,0).
on_tick_hr(Module,Function,Arglist,Period) ->
    on_tick(Module,Function,Arglist,60000,Period*60,0).

on_tick(Module,Function,Arglist,TimeBase,Period,Period) ->
    apply(Module,Function,Arglist),
    on_tick(Module,Function,Arglist,TimeBase,Period,0);
on_tick(Module,Function,Arglist,TimeBase,Period,CountTimeBase) ->
    receive
        stop -> stopped
    after TimeBase ->
        on_tick(Module,Function,Arglist,TimeBase,Period,CountTimeBase+1)
    end.
and usage:
1> Pid = spawn(util,on_tick_sec,[io,format,["hello~n"],5]).
<0.40.0>
hello
hello
hello
hello
2> Pid ! stop.
stop
3>
[edit]
The timer module is a standard gen_server running in a separate process. All the functions in the timer module are public interfaces that execute a hidden gen_server:call or gen_server:cast to the timer server. This is common practice to hide the internals of a server and allow further evolution without impact on existing applications.
Internally, the server uses an ETS table to store all the actions it has to do along with each timer reference, and it uses its own mechanism to be woken up when needed (in the end, the VM must take care of this?).
So you can hibernate a process without any effect on the timer server's behavior. The hibernation mechanism is tricky: see the documentation of hibernate/3 and you will see that you have to "rebuild" the context yourself, since everything is removed from the process context and a tuple {Module,Function,Arguments} is stored by the system to restart your process when needed. It also costs some time in garbage collection and process restart.
That is why I said it is really an advanced feature that needs a good reason to be used.
There is also erlang:hibernate/3 that puts a process in "deep sleep", minimizing memory usage for it.

To child_process fork or not to fork for I/O tasks?

Does it make sense to use a child_process fork for long-running (15-30 second) I/O tasks such as fetching a feed and saving it to a DB?
The context for this question is an express route, and I need to mention that a status response is sent to the browser early, as soon as the feed URL has been validated. After the status response has been sent, the fetching and saving of the feed items continues and can obviously take a bit of time (10-30 sec). Should this second part be forked into a child process?
I have read contradictory posts (not on SO) about the I/O efficiency of node with/without forking the job to a background process, so I wanted to have a clear response to this. Does it make sense to fork I/O tasks (not CPU-intensive tasks per se, which I reckon is a separate question)?
In general, Node is great for handling I/O. Due to the event-driven architecture, as soon as an I/O-intensive action leaves Node (or any I/O action, really), Node forgets about that action until it is finished (or errors). The returning event then goes back into the single-threaded Node process.
Take for example a remote DB and an intensive query. Even if the DB server takes seconds to run the query and return the results, the Node process was only responsible for building the query (a string?) and putting that query on a TCP socket. The transferring of data on the socket doesn't even take up the Node process! Then Node cares nothing about the request until the returning data has finished coming across the socket. (There could be some processing you don't see in your DB package, like when an RDBMS result is converted into JSON.)
There might be corner cases to this that you will have to look out for... if they ever come up. The huge majority of the time, Node will handle I/O very well. (Post some links to said articles, in your question or as comments under this answer.)
Forking child processes is typically reserved for high CPU tasks that would slow down the main event loop. There could be other reasons, but "in general."
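A minimal sketch of the non-forked version of such a route, under the usual Express setup; isValidFeedUrl, fetchFeed and saveItems are hypothetical stand-ins for the real feed logic:

const express = require('express');
const app = express();
app.use(express.json());

// Hypothetical helpers standing in for the real feed logic.
const isValidFeedUrl = (url) => typeof url === 'string' && url.startsWith('http');
const fetchFeed = async (url) => [{ url, title: 'placeholder item' }]; // pretend long I/O
const saveItems = async (items) => {};                                 // pretend DB write

app.post('/feeds', async (req, res) => {
  const url = req.body && req.body.url;
  if (!isValidFeedUrl(url)) return res.status(400).json({ ok: false });

  res.status(202).json({ ok: true }); // answer the browser right away

  try {
    const items = await fetchFeed(url); // long I/O, but it doesn't block the event loop
    await saveItems(items);
  } catch (err) {
    console.error('feed import failed:', err); // response already sent, so just log
  }
});

app.listen(3000);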

Node.js async parallel - what consequences are?

There is this code:
async.series(tasks, function (err) {
    return callback({ message: 'tasks execution error', error: err });
});
where tasks is an array of functions, each of which performs an HTTP request (using the request module) and calls the MongoDB API to store the data (to a MongoHQ instance).
With my current input (~200 tasks to execute), it takes:
[normal mode] collection cycle: 1356.843 sec. (22.61405 mins.)
But simply changing from series to parallel gives a magnificent benefit: almost the same number of tasks runs in ~30 secs instead of ~23 mins.
But, knowing that nothing is free, I'm trying to understand the consequences of that change. Can I expect the number of open sockets to be much higher, more memory consumption, and more load on the DB servers?
The machine I run the code on is an Ubuntu box with only 1 GB of RAM. The app has hung there once already; could that be caused by a lack of resources?
Your intuition is correct that the parallelism doesn't come for free, but you certainly may be able to pay for it.
Using a load testing module (or collection of modules) like nodeload, you can quantify how this parallel operation is affecting your server to determine if it is acceptable.
Async.parallelLimit can be a good way of limiting server load if you need to, but first it is important to discover if limiting is necessary. Testing explicitly is the best way to discover the limits of your system (eachLimit has a different signature, but could be used as well).
Beyond this, common pitfalls using async.parallel include wanting more complicated control flow than that function offers (which, from your description doesn't seem to apply) and using parallel on too large of a collection naively (which, say, may cause you to bump into your system's file descriptor limit if you are writing many files). With your ~200 request and save operations on 1GB RAM, I would imagine you would be fine as long as you aren't doing much massaging in the event handlers, but if you are experiencing server hangs, parallelLimit could be a good way out.
Again, testing is the best way to figure these things out.
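For example, a rough sketch of what that limiting could look like, reusing the same tasks array and callback from the question (the limit of 10 is arbitrary and something you would tune by testing):

const async = require('async');

// Run at most 10 of the ~200 tasks at a time instead of all of them at once.
async.parallelLimit(tasks, 10, function (err, results) {
  if (err) return callback({ message: 'tasks execution error', error: err });
  return callback(null, results);
});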
I would point out that async.parallel executes multiple functions concurrently, not (completely) in parallel. It is more like virtual parallelism.
Executing concurrently is like running different programs on a single CPU core via multitasking/scheduling. Truly parallel execution would be running different programs on each core of a multi-core CPU. This is important because node.js has a single-threaded architecture.
The best thing about node is that you don't have to worry about I/O. It handles I/O very efficiently.
In your case you are storing data to MongoDB, which is mostly I/O. So running the tasks in parallel will use up your network bandwidth, and if you are reading/writing from disk, disk bandwidth too. Your server will not hang because of CPU overload.
The consequence of this is that if you overburden your server, your requests may fail. You may get an EMFILE error (too many open files); each socket counts as a file. Usually connections are pooled, meaning that to establish a connection a socket is picked from the pool and returned to the pool when finished. You can increase the file descriptor limit with ulimit -n xxxx.
You may also get socket errors when overburdened, such as ECONNRESET (Error: socket hang up), ECONNREFUSED or ETIMEDOUT, so handle them properly. Also check the maximum number of simultaneous connections for the MongoDB server.
Finally, the server can hang because of garbage collection. Garbage collection kicks in after your memory grows to a certain point and then runs periodically. The maximum heap memory V8 can have is around 1.5 GB, so expect GC to run frequently if memory usage is high. Node will crash with a "process out of memory" error if it asks for more than that limit, so fix the memory leaks in your program. You can look at these tools.
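For reference, if the working set is legitimately that large rather than leaking, the old-space limit can also be raised when starting Node with the --max-old-space-size flag (for example node --max-old-space-size=2048 app.js, where app.js is a placeholder for your entry point).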
The main downside you'll see here is a spike in database server load. That may or may not be okay depending on your setup.
If your database server is a shared resource then you will probably want to limit the parallel requests by using async.eachLimit instead.
You'll realize the difference if multiple users connect: in that case the processor can handle multiple operations, and async tries to run the operations of multiple users in a roughly fair order.
T = task, U = user (T1.U1 = task 1 of user 1):
T1.U1 => T1.U2 => T2.U1 => T8.U3 => T2.U2 => etc.
This is the opposite of atomicity (so maybe watch out for atomicity on special DB operations, but that's another topic).
So it may be faster to run T2.U1 before T1.U1. That is no problem unless T2.U1 depends on T1.U1, and that is preventable by using callbacks (that is what callbacks are for).
...hope this is what you wanted to know... it's a bit late here

How to pause a php script launched with crontab?

I have a PHP scraper running every night on a very large site. Crontab launches the script at 2am and pkills it at 7am. Now I am concerned that brutally killing the script might result in data loss. Let's say crontab kills the script while it is busy writing my scraped data into the database; then the next day the database will refuse that last/first record because it is already (even if not completely) present.
Is there any way I can freeze the script with crontab? (That is, without adding a sleep() to my script.)
Let's say that crontab calls the script off while the script is busy writing my scraped data into the database
That would be a problem, since you will run into some transaction timeout or similar if you stop your process externally. A better way would be to let the script halt/pause on its own. You could, for example, define a marker file that is checked by the script periodically, so that the script can halt/pause in a controlled way.
Having one large cronjob that can't be interrupted is usually a sign of bad design for a number of reasons.
Most notably, you can't interrupt the run for any reason whatsoever without ending up with corrupted data. This can become a big problem in case of an unexpected power loss or server crash.
Also, it doesn't scale. If you need to process more data, you can't scale it to multiple servers. If you have run times of a few hours now, you may end up exhausting a complete server very soon.
I would recommend seriously rethinking the functionality of this cronjob and restructuring it so you have a number of smaller tasks that are queued up somewhere. (It can even be the database.) You could then mask the SIGINT and SIGTERM signals while processing a single task and check for received signals in between tasks. This will allow you to notify the process using either of those signals and have it shut down gracefully.
That being said, things do break and servers do crash. I also urge you to work out plans for data recovery in case the cronjob breaks down while working on something.
