How does the event loop work in Node.js? [duplicate]

This question already has answers here:
In Node.js how does the event loop work? [closed]
(1 answer)
Nodejs Event Loop
(8 answers)
Closed 9 years ago.
Node.js is known for the simplicity of its "single thread" model. As I understand it, this single thread handles requests through an event loop. I want to ask:
Does Node.js really have only one thread? For example, if there are a billion users on a website, will that thread loop through a billion times? Or are there in fact some "small threads" that the main single thread uses to do different things?
Thank you!

Related

What is the best number of threads in parallel programs in Java? [duplicate]

This question already has answers here:
How to get an ideal number of threads in parallel programs in Java?
(3 answers)
Closed 3 years ago.
A common rule of thumb is:
NoT <= noc / (1 - bf)
where:
NoT = number of threads
noc = number of cores
bf = blocking factor (the fraction of time a thread spends blocked rather than computing)
For best performance: number of cores - 1.
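As a quick numeric illustration of the formula, here is a minimal sketch (in JavaScript, to match the rest of this page; the core count and blocking factor are made-up example values):

    // Upper bound on useful thread count: NoT <= noc / (1 - bf)
    const noc = 4;    // number of cores (hypothetical value)
    const bf = 0.5;   // blocking factor: fraction of time a thread spends blocked
    const noT = Math.floor(noc / (1 - bf));
    console.log(noT); // 8 -- a blocked thread frees its core for another thread

Intuitively, the more time each thread spends blocked, the more threads you can profitably run per core; with no blocking at all (bf = 0), the bound collapses to one thread per core.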

Understanding node.js [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
I have started reading about Node.js, and I have a few questions:
1. Is node better than multi-threading just because it saves us from caring about deadlocks and reduces thread-creation overhead, or are there other factors too? Node does use threads internally, so we can't say that it avoids thread-creation overhead, only that the threads are managed internally.
2. Why do we say that node is not good for multi-core processors? It creates threads internally, so it must be getting the benefits of multiple cores. And why do we say it is not good for CPU-intensive applications? We can always fork new processes for CPU-intensive tasks.
3. Are only functions with callbacks dispatched as threads, or are there other cases too?
4. Non-blocking I/O can be achieved using threads too: a main thread can always be ready to receive new requests. So what is the benefit?
1. Correct.
2. Node.js does scale with cores, through child processes, clusters, and other mechanisms (see the cluster sketch below).
3. Callbacks are just a common convention developers use to implement asynchronous methods. There is no technical reason why you have to use them; you could, for example, have all your async methods use promises instead (see the example below).
4. Everything node does could be accomplished with threads, but there is less code and overhead involved in node.js's asynchronous I/O than in multi-threaded code. You do not, for example, need to create an instance of Thread or Runnable every time, as you would in Java.
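For point 2, here is a minimal sketch of scaling across cores with Node's built-in cluster module (the port number is arbitrary):

    const cluster = require('cluster');
    const http = require('http');
    const os = require('os');

    if (cluster.isMaster) {
      // Fork one worker process per CPU core.
      for (let i = 0; i < os.cpus().length; i++) {
        cluster.fork();
      }
    } else {
      // Each worker runs its own single-threaded event loop;
      // the cluster module distributes incoming connections among them.
      http.createServer((req, res) => res.end('ok\n')).listen(3000);
    }

And for point 3, the same asynchronous read written once with a callback and once with a promise (fs.readFile and fs.promises.readFile are standard Node APIs; the file name is just an example):

    const fs = require('fs');

    // Callback convention:
    fs.readFile('config.json', 'utf8', (err, data) => {
      if (err) throw err;
      console.log(data);
    });

    // Promise-based equivalent of the same operation:
    fs.promises.readFile('config.json', 'utf8')
      .then((data) => console.log(data))
      .catch((err) => console.error(err));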

Does one Node.js thread block the other? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 8 years ago.
Using Node.js on a single-CPU virtual instance, if I put a worker thread and a web thread on the same node, would one block the other? Would I require two CPUs for them to run perfectly in parallel?
Yes, one would block the other if it is all synchronous code.
Since you only have one virtual CPU (assuming it is not hyperthreaded), at the core level the CPU executes instructions sequentially: 1, then 2, then 3, then 4.
Therefore, in theory, if one worker ran something like the following, it would block:
    while (true) {
        doSomething(); // never yields control back to the event loop
    }
Disclaimer: I'm not sure whether the OS kernel would handle anything regarding blocking instructions.
However, Node.js runs all I/O through the event loop, along with tasks that you explicitly schedule on it (process.nextTick(), setTimeout()...). The way the event loop works is explained well here, so I won't go into much detail; the only blocking part of Node.js is synchronously running code, like the example above.
So, long story short: since your web worker uses Node.js, and the http module is asynchronous, your web worker will not block. Since your worker thread also uses Node.js, and assuming it executes code that is launched when an event occurs (a visit to your website, for example), it will not block either.
To run synchronous code perfectly in parallel, you would need two CPUs. However, assuming it is asynchronous, it should work just fine on one CPU.
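As a minimal sketch of that distinction (the port and the timer interval below are arbitrary), the web server and the periodic "worker" interleave happily on one CPU because both yield to the event loop between callbacks, while a synchronous busy loop would starve them both:

    const http = require('http');

    // Async web worker: each request handler returns quickly to the event loop.
    http.createServer((req, res) => res.end('hello\n')).listen(3000);

    // Async background worker: runs between requests without blocking them.
    setInterval(() => console.log('background tick'), 1000);

    // By contrast, a synchronous loop placed here would block both of the above:
    // while (true) { doSomething(); }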

Best strategy to execute tasks with high branch divergence [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 9 years ago.
I have a project, written a few years ago, that computes N similar tasks sequentially on a single CPU core.
These N tasks are completely independent, so they could be computed in parallel.
However, the control flow inside each task differs greatly from one task to another, so the SIMT approach implemented in CUDA is more likely to impede than to help.
I came up with the idea of launching N blocks with 1 thread each, to break the warp dependency between threads.
Can anyone suggest a better way to optimise the computations in this situation, or point out possible pitfalls with my solution?
You are right in your comment about what causes divergence of threads within a warp and what it results in. However, the launch configuration you mention (1 thread per block) squanders most of the GPU's potential. A warp (or half-warp) is the maximal unit of threads that is actually executed in parallel on a single multiprocessor. So having one thread per block, across 32 blocks, is effectively the same as having 32 threads in a warp, each following a different path. The first case is actually even worse, because the number of resident blocks per multiprocessor is quite limited (8 or 16, depending on compute capability).
Therefore, if you want to fully exploit the GPU's potential, keep Jack's comment in mind and try to reorganize the work so that the threads of a single warp follow the same execution path.

Is this a sane architecture for a multi-user network server? (How much overhead does pipes-concurrency introduce?) [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 9 years ago.
I currently have it so that there is one thread handling the accept loop, one main thread for doing all the stateful logic stuff, and then 2 threads per connected client. One client thread is handling the input pipeline and using pipes-concurrency to send messages to the main logic thread. The other client thread is handling the output pipeline, getting messages from the main logic thread and sending them to the client.
My reasoning for doing it this way is that the main logic thread can use worker threads to do pure computations on an immutable state, then do all the state changes at once and loop back around with the new state. That way I can make use of multiple CPUs without having to worry about the problems of concurrent state modifications.
Is the overhead of STM/pipes-concurrency small enough that this is a reasonable approach when I will end up with a couple thousand connected clients sending two or three messages per second each?
Haskell green threads are cheap enough that I would definitely recommend the approach of 2 threads per client. Without seeing details, I can't comment on whether STM will be a bottleneck or not, but that's going to depend on your implementation. STM can definitely handle that level of workload, assuming it's used correctly.
