I have just started to understand how the Node.js event loop and microservices work, and I was wondering whether microservices can prevent the main thread of a Node.js application from blocking. What I mean is that we could run synchronous code on a different microservice, which sends the response back when done, and then we could scale only that microservice.
Is my understanding correct, or have I got something wrong?
I think you're mixing up two concepts.
Microservices are relatively small, loosely coupled services written in whatever language fits. For example, if I work at BigEcommerceCompany I might manage a variety of microservices built with a variety of technologies: an auth service, cart service, payments service, reviews service, etc., and they might all be in the same language or each in a different one.
Node's event loop is single threaded, but Node also has a worker pool that can take on work without blocking the event loop, and Node can be clustered across available CPUs with the built-in cluster module (or various wrappers around it). An example of a blocking function call in Node would be child_process.spawnSync; an example of a non-blocking call would be child_process.spawn. It's common when writing Node code to use a lot of Promises or callbacks to avoid blocking the event loop as much as possible.
The two concepts aren't really related except in that by writing small microservices it may be easier to find, isolate, and fix problems with Node performance.
hystrixjs is an implementation of Hystrix for NodeJS.
But I'm not able to understand how it works in single-threaded JS. How does it handle timing out a task?
The documentation talks about that a bit:

Since this library targets nodejs application, it is by far not as complex as the java implementation (remember... just one thread... no concurrency, at least most of the time...). So how does it accomplish the goals outlined above:
wraps all promise-based calls to external systems (or "dependencies") in a "Command"
Does that mean all the library does is run the promise and let Node handle the multi-threading part (like network calls being handled by OS threads)?
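Essentially yes. Here is a sketch of the underlying idea (this is not the actual hystrixjs API): single-threaded JS can't interrupt the dependency call itself, but it can race the call's promise against a timer and reject first, while the real I/O is handled by libuv/OS threads.

```javascript
// Race a promise-based dependency call against a timer. Whichever
// settles first wins; the losing side is ignored, not cancelled.
function withTimeout(promiseFactory, ms) {
  const timeout = new Promise((_, reject) =>
    setTimeout(() => reject(new Error('CommandTimeOut')), ms)
  );
  return Promise.race([promiseFactory(), timeout]);
}

// A dependency call that takes too long rejects with the timeout error.
const slowCall = () => new Promise((resolve) => setTimeout(resolve, 1000, 'ok'));
withTimeout(slowCall, 50).catch((err) => console.log(err.message));
```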
I am writing a payroll management web application in Node.js for my organisation. In many cases the application will involve CPU-intensive mathematical calculations, with many users trying to do this simultaneously.
If I plainly write the logic (setting aside the fact that I have already done my best, from an algorithm and data structure point of view, to contain the complexity), it will run synchronously, blocking the event loop and making requests and responses slow.
How do I resolve this scenario? What are the possible options for doing this asynchronously? I should also mention that this calculation can run in the background, and I can later choose to tell the user about its status via a notification. I have searched for solutions all over, and I found some, but only in theory; I haven't tested them by implementing them. They are listed below:
Clustering the node server
Use worker threads
Use an alternate server and do some load balancing.
Use a message queue and couple it with worker threads to do background tasks.
Can someone offer some tried and battle-tested advice for this scenario, along with links to any associated tutorials?
You might want to try web workers; they are easy to use and well documented.
https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers_API/Using_web_workers
I think the question is pretty explicit. JavaScript is single threaded, and Node.js still achieves incredible performance. It might seem obvious that multi-threading would take Node.js performance even further, but that can be wrong in some cases.
For example, I'm currently building a starter project using Next.js. I wonder whether handling each request in a separate thread would be worth it.
Thank you!
As far as I know, in production Node.js is usually deployed as:
an nginx server (used as a security layer and as an HTTPS proxy)
a number of child Node.js processes (amount === number of cores)
That means all cores are used, each request is processed by a single core, and the server processes several requests at once.
=== UPDATE ===
If you want to divide the processing of a single request across several threads, just remember that cross-process communication is expensive in Node.js, so you should only delegate sizeable tasks to other threads/web workers.
If you see the need to split the app into several threads, consider a microservices architecture instead, e.g. using http://senecajs.org/
I am thinking of creating my own simple load test, where I can hit my website with multiple requests (like 100-1000 concurrent users) to see how it performs. I want to try Node.js, but I don't know if it is the wrong technology for the job, since Node.js doesn't use threads.
With the async model that Node.js uses, can I simulate the many user requests, or would that be more appropriate to do in another language like Ruby/.NET/Python?
Node.js ought to be perfect for the task. I do this at work. The one crucial piece that you will have to change is the HTTP socket pool. The following code snippet will disable pooling entirely, letting you starve your Node.js process if you want to.
var http = require('http');
var req = http.request({ agent: false /* ...plus host, path, and your other options... */ });
You can read about this more at the http.Agent documentation.
Your concern about threads is astute, but even if you hit that limit (Node is very good at keeping your resources efficient) the solution is simple: start multiple instances (processes) of your load test. As it is, you may have to use multiple machines entirely to correctly simulate load.
In any case, you will not win automatically by using Ruby or Python for this. Asynchronous programming is ideal for I/O and network-bound tasks, and Node excels at this. Similarly, while Ruby and Python have third-party asynchronous frameworks, they're by definition more obscure than the standard asynchronous framework given in Node.
Node can fire off pretty much as many requests as you want it to (though you may have to change the defaults for http:Agent). You're more likely to be limited by what your OS can do than by anything inherent in node (and of course such limitations will apply in any other language you use).
It's simple to create load tests with nodeload.
I'm writing a server, and decided to split up the work between different processes running node.js, because I heard node.js was single threaded and figured this would parallelize better. The application is going to be a game. I have one process serving html pages, and then other processes dealing with the communication between clients playing the game. The clients will be placed into "rooms" and then use sockets to talk to each other, relayed through the server.

The problem I have is that the html server needs to be aware of how full the different rooms are to place people correctly. The socket servers need to update this information so that an accurate representation of the various rooms is maintained. So, as far as I can see, the html server and the room servers need to share some objects in memory. I am planning to run it on one (multicore) machine.

Does anyone know of an easy way to do this? Any help would be greatly appreciated.
Node currently doesn't support shared memory directly, and that's a reflection of JavaScript's complete lack of semantics or support for threading/shared memory handling.
With Node 0.7, only recently usable even experimentally, the ability to run multiple event loops and JS contexts in a single process has become a reality (using V8's concept of isolates and large changes to libuv to allow multiple event loops per process). In this case it's possible, though still not directly supported or easy, to have some kind of shared memory. To do that you'd need to use a Buffer or ArrayBuffer (both of which represent a chunk of memory outside JavaScript's heap, accessible from it in a limited manner) and then some way to share a pointer to the underlying V8 representation of the foreign object. I know it can be done from a minimal native Node module, but I'm not sure whether it's possible from JS alone yet.
Regardless, the scenario you described is best fulfilled by simply using child_process.fork and sending the (seemingly minimal) amount of data through the communication channel provided (uses serialization).
http://nodejs.org/docs/latest/api/child_processes.html
Edit: it'd be possible from JS alone assuming you used node-ffi to bridge the gap.
You may want to try using a database like Redis for this. You can have a process subscribed to a channel listening for new connections, and publish from the web server every time you need to.
You can also have multiple processes waiting for users, using a list and BRPOP to wait for players.
Sounds like you want to not do that.
Serving and message-passing are both I/O-bound, which Node is very good at handling with a single thread. If you need long-running calculations on those messages, those might be worth doing separately, but even so, you might be surprised at how well you do with a single thread.
If not, look into Workers.
ZeroMQ is also becoming quite popular as an inter-process communication method. Might be worth a look. http://www.zeromq.org/ and https://github.com/JustinTulloss/zeromq.node