What is the sleep time between ticks in Node.js?

Revised Question
I would like to know how the Node.js event loop (whatever the underlying implementation, be it V8, libuv, or libev) keeps looping without exhausting the CPU. As the code example below shows, I assumed a sleep call is inserted to "free" the CPU and prevent the while loop from using it all up. Since someone has already pointed out that this is not the case, I would like to know what mechanism Node.js (or its underlying libraries) employs for this purpose.
Links to the relevant sections of the source code are welcome. Thanks.
Original Question
I am asking about the Node.js internals: I would like to know if there is any sleep time between the ticks of the Node.js event loop.
In other words, I assume the Node.js internals look like the code below, and I would like to know what the value of sometime is, if any.
while(true) {
  for(event in queue) handleEvent(event);
  sleep(sometime);
}
I made this assumption because I believe there must be some kind of sleeping involved, or else the while loop would exhaust the CPU.

No, there is no sleep, because that would be blocking. Instead of spinning, the event loop blocks in the kernel: libuv's poll phase waits on epoll (Linux), kqueue (macOS/BSD), or an I/O completion port (Windows), with a timeout derived from the nearest pending timer. The process consumes essentially no CPU while it waits; the kernel wakes it when a file descriptor becomes ready or the timeout expires. And if there is nothing left to wait for at all (no handles, no timers), the loop exits and the process terminates.
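To make that concrete, here is a heavily simplified sketch of the idea behind libuv's uv_run loop, written as JavaScript-flavored pseudocode. The real implementation is C, and every name below is illustrative, not libuv's actual API:

// One iteration of a libuv-style event loop, sketched in pseudocode.
while (loopHasPendingWork()) {
  runDueTimers();            // setTimeout / setInterval callbacks that are due
  runPendingCallbacks();     // deferred I/O callbacks
  // Compute how long we may block: 0 if work is already queued,
  // otherwise the time until the nearest timer (or forever if none).
  const timeout = workIsQueued() ? 0 : msUntilNextTimer();
  pollForIO(timeout);        // epoll_wait/kqueue/IOCP -- zero CPU while blocked
  runCheckCallbacks();       // setImmediate callbacks
  runCloseCallbacks();       // close events
}

The key line is pollForIO(timeout): the "sleep" you were looking for is this blocking kernel call, which returns early the moment any watched event fires.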

Related

How to know what blocks NodeJS from closing

I have code that runs on NodeJS which involves intervals, sockets, and other async things.
Sometimes when it should close, it hangs forever, presumably because somewhere, under some circumstances, I forget to clear an interval, close a socket, or something else.
Is there a way to get the currently active timers, and other such runtime information? Or inspect in any kind of way what blocks the exit?
Found the package https://www.npmjs.com/package/wtfnode via a question related to this one (https://stackoverflow.com/a/38471228/2503048); oddly enough, I couldn't find this information when googling. It should answer my question, mostly the part about process._getActiveHandles().
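As a minimal sketch of how that package is typically used (the leaked interval below is just an example of a forgotten handle; wtf.dump() is wtfnode's entry point for printing active handles, while process._getActiveHandles() is an undocumented internal API that may change between Node versions):

const wtf = require('wtfnode');

const leaked = setInterval(function () {}, 1000); // a handle we "forgot" to clear

process.on('SIGINT', function () {
  wtf.dump();       // prints the active timers, sockets, etc. keeping us alive
  process.exit(0);
});

Pressing Ctrl+C then shows which timers and sockets are still keeping the event loop referenced.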

Using threadsafe initialization in a JRuby gem

Wanting to be sure we're using the correct synchronization (and no more than necessary) when writing threadsafe code in JRuby; specifically, in a Puma instantiated Rails app.
UPDATE: Extensively re-edited this question to be very clear and use the latest code we are implementing. This code uses the atomic gem written by @headius (Charles Nutter) for JRuby, but we are not sure it is totally necessary, or in which ways it's necessary, for what we're trying to do here.
Here's what we've got. Is this overkill (meaning, are we over-engineering this), or perhaps incorrect?
ourgem.rb:
require 'atomic' # gem from @headius

SUPPORTED_SERVICES = %w(serviceABC anotherSvc andSoOnSvc).freeze

module Foo
  def self.included(cls)
    cls.extend(ClassMethods)
    cls.send :__setup
  end

  module ClassMethods
    def get(service_name, method_name, *args)
      __cached_client(service_name).send(method_name.to_sym, *args)
      # we also capture exceptions here, but leaving those out for brevity
    end

    private

    def __client(service_name)
      # obtain and return a client handle for the given service_name
      # we definitely want to cache the value returned from this method
      # **AND**
      # it is a requirement that this method ONLY be called *once PER service_name*.
    end

    def __cached_client(service_name)
      @@_clients.value[service_name]
    end

    def __setup
      @@_clients = Atomic.new({})
      @@_clients.update do |current_services|
        SUPPORTED_SERVICES.inject(Atomic.new({}).value) do |memo, service_name|
          if current_services[service_name]
            memo.merge({service_name => current_services[service_name]})
          else
            memo.merge({service_name => __client(service_name)})
          end
        end
      end
    end
  end
end
client.rb:
require 'ourgem'

class GetStuffFromServiceABC
  include Foo

  def self.get_some_stuff
    result = get('serviceABC', 'method_bar', 'arg1', 'arg2', 'arg3')
    puts result
  end
end
Summary of the above: we have @@_clients (a mutable class variable holding a Hash of clients) which we only want to populate ONCE for all available services, keyed on service_name.
Since the hash is in a class variable (and hence threadsafe?), are we guaranteed that the call to __client will not get run more than once per service name (even if Puma is instantiating multiple threads with this class to service all the requests from different users)? If the class variable is threadsafe (in that way), then perhaps the Atomic.new({}) is unnecessary?
Also, should we be using an Atomic.new(ThreadSafe::Hash) instead? Or again, is that not necessary?
If not (meaning: you think we do need at least the Atomic.new calls, and perhaps also the ThreadSafe::Hash), then why couldn't a second (or third, etc.) thread interrupt between the Atomic.new({}) and the @@_clients.update do ..., meaning the Atomic.new calls from EACH thread will EACH create two (separate) objects?
Thanks for any thread-safety advice, we don't see any questions on SO that directly address this issue.
Just a friendly piece of advice, before I attempt to tackle the issues you raise here:
This question, and the accompanying code, strongly suggests that you don't (yet) have a solid grasp of the issues involved in writing multi-threaded code. I encourage you to think twice before deciding to write a multi-threaded app for production use. Why do you actually want to use Puma? Is it for performance? Will your app handle many long-running, I/O-bound requests (like uploading/downloading large files) at the same time? Or (like many apps) will it primarily handle short, CPU-bound requests?
If the answer is "short/CPU-bound", then you have little to gain from using Puma. Multiple single-threaded server processes would be better. Memory consumption will be higher, but you will keep your sanity. Writing correct multi-threaded code is devilishly hard, and even experts make mistakes. If your business success, job security, etc. depends on that multi-threaded code working and working right, you are going to cause yourself a lot of unnecessary pain and mental anguish.
That aside, let me try to unravel some of the issues raised in your question. There is so much to say that it's hard to know where to start. You may want to pour yourself a cold or hot beverage of your choice before sitting down to read this treatise:
When you talk about writing "thread-safe" code, you need to be clear about what you mean. In most cases, "thread-safe" code means code which doesn't concurrently modify mutable data in a way which could cause data corruption. (What a mouthful!) That could mean that the code doesn't allow concurrent modification of mutable data at all (using locks), or that it does allow concurrent modification, but makes sure that it doesn't corrupt data (probably using atomic operations and a touch of black magic).
Note that when your threads are only reading data, not modifying it, or when working with shared stateless objects, there is no question of "thread safety".
Another definition of "thread-safe", which probably applies better to your situation, has to do with operations which affect the outside world (basically I/O). You may want some operations to only happen once, or to happen in a specific order. If the code which performs those operations runs on multiple threads, they could happen more times than desired, or in a different order than desired, unless you do something to prevent that.
It appears that your __setup method is only called when ourgem.rb is first loaded. As far as I know, even if multiple threads require the same file at the same time, MRI will only ever let a single thread load the file. I don't know whether JRuby is the same. But in any case, if your source files are being loaded more than once, that is symptomatic of a deeper problem. They should only be loaded once, on a single thread. If your app handles requests on multiple threads, those threads should be started up after the application has loaded, not before. This is the only sane way to do things.
Assuming that everything is sane, ourgem.rb will be loaded using a single thread. That means __setup will only ever be called by a single thread. In that case, there is no question of thread safety at all to worry about (as far as initialization of your "client cache" goes).
Even if __setup were to be called concurrently by multiple threads, your atomic code won't do what you think it does. First of all, you use Atomic.new({}).value. This wraps a Hash in an atomic reference, then unwraps it so you just get back the Hash. It's a no-op. You could just write {} instead.
Second, your Atomic#update call will not prevent the initialization code from running more than once. To understand this, you need to know what Atomic actually does.
Let me pull out the old, tired "increment a shared counter" example. Imagine the following code is running on 2 threads:
i += 1
We all know what can go wrong here. You may end up with the following sequence of events:
Thread A reads i and increments it.
Thread B reads i and increments it.
Thread A writes its incremented value back to i.
Thread B writes its incremented value back to i.
So we lose an update, right? But what if we store the counter value in an atomic reference, and use Atomic#update? Then it would be like this:
Thread A reads i and increments it.
Thread B reads i and increments it.
Thread A tries to write its incremented value back to i, and succeeds.
Thread B tries to write its incremented value back to i, and fails, because the value has already changed.
Thread B reads i again and increments it.
Thread B tries to write its incremented value back to i again, and succeeds this time.
Do you get the idea? Atomic never stops 2 threads from running the same code at the same time. What it does do is force some threads to retry the #update block when necessary, to avoid lost updates.
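To see the retry pattern in actual code: JavaScript (the language of the rest of this thread) exposes the same compare-and-swap primitive as Atomics.compareExchange on a SharedArrayBuffer, which can be shared between worker_threads. This is only a sketch; the counter layout and function name are made up:

// A lock-free increment: retry until our write wins, like Atomic#update.
const buf = new SharedArrayBuffer(4);
const counter = new Int32Array(buf);

function atomicIncrement() {
  for (;;) {
    const old = Atomics.load(counter, 0);            // read the current value
    // Write old+1 only if the slot still holds `old`; returns what was there.
    if (Atomics.compareExchange(counter, 0, old, old + 1) === old) {
      return;                                        // our update won
    }
    // Someone else wrote first -- loop and retry with the fresh value.
  }
}

Note how a losing thread simply retries: nothing ever prevents two threads from entering the loop at the same time, which is exactly the point about Atomic#update.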
If your goal is to ensure that your initialization code will only ever run once, using Atomic is a very inappropriate choice. If anything, it could make it run more times, rather than less (due to retries).
So, that is that. But if you're still with me here, I am actually more concerned about whether your "client" objects are themselves thread-safe. Do they have any mutable state? Since you are caching them, it seems that initializing them must be slow. Be that as it may, if you use locks to make them thread-safe, you may not be gaining anything from caching and sharing them between threads. Your "multi-threaded" server may be reduced to what is effectively an unnecessarily complicated, single-threaded server.
If the client objects have no mutable state, good for you. You can be "free and easy" and share them between threads with no problems. If they do have mutable state, but initializing them is slow, then I would recommend caching one object per thread, so they are never shared. Thread.current[] is your friend there.

How do I make a non-IO operation synchronous vs. asynchronous in node.js?

I know the title sounds like a dupe of a dozen other questions, and it may well be. However, I've read those dozen questions and Googled around for a while, and found nothing that answers these questions to my satisfaction.
This might be because nobody has answered it properly, in which case you should vote me up.
This might be because I'm dumb and didn't understand the other answers (much more likely), in which case you should vote me down.
Context:
I know that IO operations in Node.js are detected and made to run asynchronously by default. My question is about non-IO operations that still might block/run for a long time.
Say I have a function blockingfunction with a for loop that does addition or whatnot (pure CPU cycles, no IO), and a lot of it. It takes a minute or more to run.
Say I want this function to run whenever someone makes a certain request to my server.
Question:
Obviously, if I explicitly invoke this loop at the outer level in my code, everything will block until it completes.
Most suggestions I've read suggest pushing it off into the future by starting all of my other handlers/servers etc. first, and deferring invocation of the function via process.nextTick or setTimeout(blockingfunction, 0).
But won't blockingfunction then just block on the next spin around the event loop? I may be wrong, but it seems like doing that would start all of my other stuff without blocking the app, but then the first time someone made the request that results in blockingfunction being called, everything would block for as long as it took to complete.
Does putting blockingfunction inside a setTimeout or process.nextTick call somehow make it coexist with future operations without blocking them?
If not, is there a way to make blockingfunction do that without rewriting it?
How do others handle this problem? A lot of the answers I've seen are to the tune of "just trust your CPU-intensive things to be fast, they will be", but this doesn't satisfy.
Absent threading (where I can be guaranteed that the execution of blockingfunction will be interleaved with the execution of whatever else is going on), should I re-write CPU-intensive/time consuming loops to use process.nextTick to perform a fixed, guaranteed-fast number of iterations per tick?
Yes, you are correct. If you defer your function until the next tick, it will just block in that tick rather than the current one.
Unfortunately, there is no magic here that solves this for you. While it is possible to fire up that function in another process, it might not be worth the hassle, depending on what you're doing.
I recommend re-writing your function in such a way that work happens for a bit, and then continues on the next tick. Node ticks are very efficient... you could call them every iteration of a decent sized loop if needed, without a whole ton of overhead. Of course, you would have to profile it in your code to see what the impact is.
Yes, a blocking function will keep blocking even if you run it via process.nextTick.
Some options:
1. If it truly takes a while, then perhaps it should be spun out to a queue where a dedicated worker process can handle it.
1a. Node.js has a child-process flavor specifically for forking other Node.js files with a built-in communication channel. So e.g. you can create one (or several) child processes that handle these requests in order, then respond and hit the callback (a sketch follows this list). See: http://nodejs.org/api/child_process.html#child_process_child_process_fork_modulepath_args_options
2. You can break up the blockingFunction into chunks that run in a loop. Every X iterations, yield with process.nextTick to make way for other events to be handled (a sketch of this follows as well).
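For option 1a, a minimal sketch of the fork approach; the file names, message shape, and the sum-loop workload are all made up for illustration:

// parent.js -- keeps the event loop free by offloading the heavy loop
const { fork } = require('child_process');

const worker = fork(__dirname + '/worker.js');
worker.send({ n: 1e9 });                          // hand the job over IPC
worker.on('message', function (result) {
  console.log('result from worker:', result);     // main loop never blocked
});

// worker.js -- runs the CPU-bound work in its own process
process.on('message', function (msg) {
  let sum = 0;
  for (let i = 0; i < msg.n; i++) sum += i;       // the blocking work
  process.send(sum);
});

And for option 2, a sketch of chunking. Note that in current Node.js, setImmediate is the appropriate way to yield between chunks (recursive process.nextTick runs before pending I/O callbacks and would starve them); the function names and chunk size are illustrative:

// Runs `total` iterations of CPU-bound work without monopolizing the loop.
function doChunkedWork(total, done) {
  let i = 0;
  (function chunk() {
    const stop = Math.min(i + 10000, total);      // ~10k iterations per slice
    for (; i < stop; i++) {
      // ...one iteration of the blocking work...
    }
    if (i < total) {
      setImmediate(chunk);                        // yield so I/O can run
    } else {
      done();
    }
  })();
}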

Node.js: there are two setInterval() calls

I'm Korean, and my English is not great.
In Node.js, I have two setInterval() timers.
Node.js is single-threaded, of course, but I worry that both setInterval callbacks handle the same value (or array).
To tell the truth, my setup involves network I/O as well as the setInterval() timers.
How can I control access to the value? Or is my worry unfounded?
You may want to consider rewording this; I'm having trouble understanding what you are asking (especially in relation to the network/threads), but I'm guessing you want to look into how the Node.js event loop works:
http://blog.mixu.net/2011/02/01/understanding-the-node-js-event-loop/
JavaScript runs code in what I like to call turns.
During a turn, the code that is running has full and exclusive access to all variables and the values bound to them. As no other code is or can be running, you don't have to worry about locking.
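A minimal illustration for the two-setInterval case (the array and the interval timings are made up for this example): both callbacks below mutate the same array, yet no locking is needed, because each callback runs to completion in its own turn before any other JavaScript can run.

const shared = [];

// Two intervals touching the same array: safe, because their callbacks
// can never run at the same time -- each gets a whole "turn" to itself.
setInterval(function () { shared.push(Date.now()); }, 100);
setInterval(function () {
  if (shared.length > 10) shared.length = 0;  // always sees a consistent array
}, 250);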
The notes below are tangential; you can skip them.
Note that although this doesn't matter in this case, if you have a process that completes over multiple turns, you should be aware that other code may have taken turns between those turns. Each turn is atomic, and there are ways to make multi-turn processes atomic but they are too complex to explain here.
Note that the concept of a turn comes from the E language, but it fits JavaScript very nicely.
Only one thread is allocated at the user level, so you don't have to worry about thread conflicts or IPC.
If your question is not about this, then you can handle every other case easily with your application-level programming.

Does use of recursive process.nextTick let other processes or threads work?

When we execute the following code (recursive process.nextTick), CPU usage gets to 100% or nearly so. My question: imagine I'm running on a machine with one CPU, and there is another Node HTTP server process working. How does this code affect it?
Does the thread doing recursive process.nextTick let the HTTP server do any work at all?
If we have two threads running recursive process.nextTick, do they each get a 50% share?
I don't have access to a single-core machine, so I can't try this myself, and my understanding of how CPU time is shared between threads is too limited to test it meaningfully on a machine with 4 CPU cores.
function interval() {
  process.nextTick(function () {
    someSmallSyncCode();
    interval();
  });
}
Thanks
To understand what's going on here, you have to understand a few things about Node's event loop, as well as the OS and CPU.
First of all, let's understand this code better. You call this recursive, but is it?
In recursion we normally think of nested call stacks and then when the computation is done (reaching a base case), the stack "unwinds" back to the point of where our recursive function was called.
While this is a method that calls itself (indirectly through a callback), the event loop skews what is actually going on.
process.nextTick takes a function as a callback and puts it at the front of the list of stuff to be done on the next go-around of the event loop. This callback is then executed, and when it is done, you once again register the same callback. Essentially, the key difference between this and true recursion is that our call stack never gets more than one call deep. We never "unwind" the stack; we just have lots of small, short stacks in succession.
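A quick illustration of that difference (both functions are made up for this sketch):

// True recursion: calling recurse() nests every call inside the previous
// one, so the stack grows until it overflows with a RangeError.
function recurse() {
  recurse();
}

// nextTick "recursion": tick() returns immediately after scheduling
// itself, so the stack is never more than one call deep -- but the
// event loop now always has work queued.
function tick() {
  process.nextTick(tick);
}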
Okay, so why does this matter?
When we understand the event loop better and what is really going on, we can better understand how system resources are used. By using process.nextTick in this fashion, you are ensuring there is ALWAYS something to do on the event loop, which is why you get high CPU usage (but you knew that already). Now, if we suppose that your HTTP server runs in the SAME process as the script, such as below
var http = require('http');

function interval() {
  process.nextTick(doIntervalStuff);
}

function doIntervalStuff() {
  someSmallSyncCode();
  interval();
}

http.createServer(function (req, res) {
  doHTTPStuff();
}).listen(1337, "127.0.0.1");
then how does the CPU usage get split up between the two different parts of the program? Well, that's hard to say, but if we understand the event loop, we can at least make an educated guess.
Since we use process.nextTick, the doIntervalStuff function will run at the "start" of every iteration of the event loop. However, if there is something to be done for the HTTP server (like handling a connection), we know it will get done before the next iteration starts; and remember, due to the evented nature of Node, one iteration could handle any number of connections. What this implies is that, at least in theory, each function in the process gets what it "needs" as far as CPU usage, and the process.nextTick callbacks use the rest. While this isn't exactly true (for example, your bit of blocking code would mess this up), it is a good enough model to think about.
Okay now (finally) on to your real question, what about separate processes?
Interestingly enough, the OS and CPU are themselves often quite "evented" in nature. Whenever a process wants to do something (like, in the case of Node, start an iteration of the event loop), it makes a request to the OS; the OS shoves the job into a prioritized ready queue, and the job executes when the CPU scheduler decides to get around to it. This is, once again, an oversimplified model, but the core concept to take away is that, much like in Node's event loop, each process gets what it "needs", and then a process like your Node app executes whenever possible, filling in the gaps.
So when your Node process says it's taking 100% of the CPU, that isn't accurate; otherwise, nothing else would ever get done and the system would crash. Essentially, it's taking up all the CPU it can, but the OS still slips other work in.
If you were to add a second Node process doing the same recursive process.nextTick, the OS would try to accommodate both and, depending on the amount of work on each Node process's event loop, would split the time between them accordingly (at least in theory; in reality it would probably just slow everything down and hurt system stability).
Once again, this is very oversimplified, but hopefully it gives you an idea of what is going on. That being said, I wouldn't recommend using process.nextTick unless you know you need it; if doing something every 5 ms is acceptable, using a setTimeout instead of process.nextTick will save oodles of CPU.
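For instance, here is a sketch of the same interval rewritten with a timer (someSmallSyncCode is the placeholder from the question):

// Yield with a 5 ms timer instead of process.nextTick: the event loop
// blocks in the kernel between runs, so CPU usage drops from ~100%
// to almost nothing, and other handles get serviced promptly.
function interval() {
  setTimeout(function () {
    someSmallSyncCode();
    interval();
  }, 5);
}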
Hope that answers your question :D
No. You don't have to artificially pause your processes to let others do their work, your operating system has mechanisms for that. In fact, using process.nextTick in this way will slow your computer down because it has a lot of overhead.
