The first app people usually build with Socket.IO and Node is a chat app. This chat app basically has one Node server that broadcasts to multiple clients. In the Node code, you would have something like:
// Pseudocode: send the message to every connected client except the sender
for (const client of clients) {
  if (client !== messageSender) {
    client.send(message);
  }
}
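In actual Socket.IO, that loop is usually replaced by the built-in broadcast flag; a minimal sketch, assuming a standard Socket.IO server and a "message" event name:

// Socket.IO sketch: broadcast emits to every connected client except the sender
const io = require('socket.io')(3000);

io.on('connection', (socket) => {
  socket.on('message', (msg) => {
    socket.broadcast.emit('message', msg); // same effect as the loop above
  });
});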
This is great for a low number of users, but I see a problem with this. First, there is a single point of failure: the Node server. Second, the app will slow down as the number of clients grows. What is there to do when we reach this bottleneck? Is there an architecture (horizontal/vertical scaling) that can be used to alleviate this problem?
For that "one day" when your chat app needs multiple, fault-tolerant node servers, and you want to use socket.io to cross communicate between the server and the client, there is a node.js module that fits the bill.
https://github.com/hookio/hook.io
It's basically an event-emitting framework for cross-communication between multiple "things", such as multiple Node servers.
It's relatively complicated to use, compared to most modules, which is understandable since this is a complex problem to solve.
That being said, you'd probably need a few thousand simultaneous users, and plenty of other problems besides, before you begin to run into trouble with this.
Another thing you can do is design your application so that if a connection is lost (which happens all the time anyway: the server goes down, the client has network issues such as a mobile user on a flaky connection, etc.), it can handle that and recover gracefully.
Since Node.js has a single event-loop thread, this single point of failure is written into its DNA. Even reloading a server after code changes requires this thread to be stopped.
There are, however, a lot of tools available to handle such failures gracefully. You could use forever, a simple CLI tool for ensuring that a given script runs continuously. Other options include distribute and up: distribute is load-balancing middleware for Node, and up builds on top of distribute to offer zero-downtime reloads through either a JavaScript API or a command-line interface.
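The general rolling-restart idea behind such tools can be sketched with nothing but the built-in cluster module (this illustrates the pattern, not up's actual API; using SIGUSR2 as the reload signal is just a common convention):

// Sketch: replace workers one at a time so some worker is always serving
const cluster = require('cluster');
const os = require('os');

if (cluster.isMaster) {
  os.cpus().forEach(() => cluster.fork());

  process.on('SIGUSR2', () => {
    const workers = Object.values(cluster.workers);
    const restartNext = (i) => {
      if (i >= workers.length) return;
      workers[i].once('exit', () => {
        // only move on once the replacement is accepting connections
        cluster.fork().once('listening', () => restartNext(i + 1));
      });
      workers[i].disconnect(); // finish in-flight requests, then exit
    };
    restartNext(0);
  });
} else {
  require('http').createServer((req, res) => res.end('ok')).listen(3000);
}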
Further reading: I find you just need to use the Redis store with Socket.IO to maintain connection references between two or more processes/servers. These options have already been discussed extensively here and here.
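A minimal sketch of that wiring, assuming the socket.io-redis adapter and a Redis instance on localhost (option names vary between Socket.IO versions):

// Each Node process runs this same setup; Redis forwards broadcasts
// between processes so every client receives them.
const port = Number(process.env.PORT) || 3000; // each process gets its own port
const io = require('socket.io')(port);
const redisAdapter = require('socket.io-redis');

io.adapter(redisAdapter({ host: 'localhost', port: 6379 }));

io.on('connection', (socket) => {
  socket.on('message', (msg) => {
    io.emit('message', msg); // reaches clients connected to all processes
  });
});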
There's also the option of using socket.io-clusterhub if you don't intend to use the Redis store.
I have several applications working with Node on the back end and React on the front end, and it works great: I make axios GET and POST requests from React to Express and get data back and forth, and in production I use pm2 to keep everything up and running.
My question is: when two users access the same application at the same time, how does Node treat this, as two separate instances or just one?
I am considering using socket.io to notify the front end of changes happening on Node, and I wonder whether those notifications will be emitted from the back end regardless of what any other user might be doing.
Thanks.
As you have probably heard, Node.js is described as a "single-threaded" runtime. This is only partially true. Even though your JavaScript runs on a single thread, Node offloads certain tasks (file system access, DNS lookups, some crypto functions, etc.) to an internal thread pool, which by default can process up to 4 such tasks at the same time.
If you want to know more about this, you might want to look into the Node event loop, which describes the steps Node goes through on each "tick".
So, as you see, Node can often process not just one but several actions per loop cycle. And there is more: to address the performance issues that might occur in big applications, you can run Node in cluster mode. This forks multiple Node processes (typically one per CPU core) and therefore lets you handle high demand efficiently.
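A small experiment makes the thread pool visible (this assumes the default pool size of 4; pbkdf2 is one of the calls Node offloads to it):

// Five CPU-heavy pbkdf2 calls on a 4-thread pool: the first four finish
// around the same time, the fifth waits for a free thread and finishes later.
const crypto = require('crypto');

const start = Date.now();
for (let i = 1; i <= 5; i++) {
  crypto.pbkdf2('password', 'salt', 1000000, 64, 'sha512', () => {
    console.log(`task ${i} done after ${Date.now() - start} ms`);
  });
}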
One note on your socket.io question: as you saw above, under high demand tasks get queued until the event loop can handle them, so sometimes a notification has to wait. Fortunately, we are in a race among big tech companies to create the fastest JS runtime, so this is pretty fast.
This is kind of a multi-tiered question, where my end goal is to establish the best way to set up my server, which will host a website as well as a service (using Socket.io) for an iOS (and eventually an Android) app. Both the app service and the website are going to be written in Node.js, as I need high concurrency and scaling for the app service, and I figured that while I'm at it I may as well do the website in Node too, because it wouldn't be much different in terms of performance than something like Apache (from my understanding).
Also, the website has a lower priority than the app service; the app service should receive significantly higher traffic than the website (though in the long run this may change). Money isn't my greatest priority here, but it is a limiting factor. I feel that having a service with 99.9% uptime (as 100% uptime appears to be virtually impossible in the long run) is more important than saving money at the price of more downtime.
Firstly, I understand that having one Node process per CPU core is the best way to fully utilise a multi-core CPU. I now understand, after researching, that running more than one per core is inefficient because the CPU has to context-switch between the multiple processes. Why, then, whenever I see code posted on how to use the built-in cluster module in Node.js, does the master create a number of workers equal to the number of cores? That would mean 9 processes on an 8-core machine (1 master process and 8 worker processes). Is this because the master process usually just restarts worker processes if they crash or exit, and therefore does so little that it doesn't matter that it shares a CPU core with another Node process?
If that is the case, then I am planning to have the workers provide the app service and have the master process manage the workers, but also host a webpage providing statistical information on the server's state and all other relevant information (number of clients connected, worker restart count, error logs, etc.); see the sketch after this paragraph. Is this a bad idea? Would it be better to have this webpage running on a separate worker and leave the master process to handle just the workers?
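(A rough, purely illustrative sketch of that first idea; the status page, its port, and the restart counter are all hypothetical choices:)

// Master manages workers, counts restarts, and serves a status page
const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isMaster) {
  let restarts = 0;
  os.cpus().forEach(() => cluster.fork());

  cluster.on('exit', () => {
    restarts++;
    cluster.fork(); // replace the crashed worker
  });

  // local-only status page served by the master itself
  http.createServer((req, res) => {
    res.end(JSON.stringify({
      workers: Object.keys(cluster.workers).length,
      restarts: restarts,
    }));
  }).listen(8080, '127.0.0.1');
} else {
  http.createServer((req, res) => res.end('app service')).listen(3000);
}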
So overall I wanted to have the following elements: a service to handle the requests from the app (my main source of traffic); a website (fairly simple, a couple of pages and a registration form); an SQL database to store user information; a webpage (probably locally hosted on the server machine) which only I can access and which shows information about the server (users connected, worker restarts, server logs, other useful information, etc.); and apparently nginx would be a good idea where I'm handling multiple Node processes accepting connections from the app. After doing research I've also found that it would probably be best to host on a VPS initially.

I was thinking at first, since the amount of traffic the app service receives will most likely be fairly low, that I could run all of those elements on one VPS. Or would it be best to have them running on separate VPSs, except for the website and the server-status webpage, which I could run on the same one? I guess this way, if there is a hardware failure and something goes down, not everything does, and I could run 2 instances of the app service on 2 different VPSs so that if one goes down the other is still functioning. Would this just be overkill? I doubt I would need multiple app-service instances to support the traffic load for a while, but it would help reduce the apparent downtime for users.
Maybe this all depends on what I value more and have the time to do: a more complex server setup that costs more and is maybe a little unnecessary but guarantees a consistent and reliable service, or a cheaper and simpler setup that may succumb to downtime due to coding errors and server hardware issues.
Also, it's worth noting I've never had any real experience with production-level servers, so in some ways I've jumped in at the deep end with this. I feel like I've come a long way in the past half a year and am getting a fairly good grasp of what I need to do; I could just do with some advice from someone with experience who has an idea of what roadblocks I may come across along the way, and whether I'm causing myself unnecessary problems with this kind of setup.
Any advice is greatly appreciated, thanks for taking the time to read my question.
I have two applications: a Node.js app running on node-webkit, and a Lua application. I need to pass data between the two applications at regular intervals, say every 5 to 15 seconds.
The Node.js application is the one creating the data, and the Lua application is the one consuming it. The data only flows in one direction.
How should I do the data transfer? I would prefer JSON/XML for the data, but it can actually be in any other format as well. The data moved at a time is not large; it's just some ten parameters at a time.
My initial thought was to just make the Node app act as a server and serve the data via a REST API, with the Lua app reading it with LuaSocket or the like. But is there a better way to do the transfer if both apps reside on the same machine? Currently the Lua app runs on Windows, but that could change.
My background is in web development, so I'm totally lost when it comes to sharing data between applications. I'm also new to Lua. Thanks for any answers.
There are many ways to accomplish such a task. I will describe two of them.
The approach I like most is to use a remote queue such as Apache Kafka, Redis, or RabbitMQ (or even ZooKeeper for small data), or alternatively to store the data in a database. All of these remote storage systems have very good Node.js modules, and all of them handle JSON and any other data type very well.
Unless this is just a mere test app, it is good to build such fault tolerance into your apps. In your case, imagine the consumer Lua app goes down, or the opposite, the Node.js producer app goes down. You don't want a failure in one app to affect the other. In a production environment, it is best to isolate apps and tasks like this. Another advantage of this approach is that one day you may decide to rewrite your consumer in Node.js, Scala, etc., or to have multiple consumers in different languages. This doesn't require your server to stop or change; it doesn't even have to know about any changes to the consumer.
So your producer server always pushes data to a remote data store/queue independently, and the consumer server reads, then deletes, the data from this remote store at its own pace.
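A minimal sketch of the queue variant with Redis as the store (this uses the node redis client's v4-style promise API; the list name "tasks" is arbitrary, and the Lua side would BRPOP from the same list):

// Node producer: push JSON onto a Redis list every 5 seconds.
const { createClient } = require('redis');

async function main() {
  const client = createClient(); // assumes Redis on localhost:6379
  await client.connect();

  setInterval(async () => {
    const payload = JSON.stringify({ params: { a: 1, b: 2 }, ts: Date.now() });
    await client.lPush('tasks', payload); // consumer pops at its own pace
  }, 5000);
}

main().catch(console.error);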
If you used a database, you would read the new records, consume them, and once done, remove them from the database. This approach lets you shut down the consumer and producer apps independently, for any reason such as an upgrade.
Another approach is to establish a direct network connection from the producer server to the consumer server via TCP, with the producer acting as a client that pushes data to the consumer. This can be accomplished with the built-in net module, even when the apps are on different physical machines. But as you can see, this is a less reliable solution: if the consumer goes down, the producer can no longer push new data, in which case you have to decide what to do with it, discard it or store it somewhere. And if you store it somewhere, you end up reimplementing the first approach explained above.
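A sketch of that direct-connection variant with the built-in net module (newline-delimited JSON and the port are assumptions; the Lua side would accept the connection with LuaSocket and read line by line):

// Producer connects to the consumer over TCP and pushes JSON lines.
const net = require('net');

const socket = net.connect({ host: '127.0.0.1', port: 5000 }, () => {
  setInterval(() => {
    const payload = JSON.stringify({ params: { a: 1, b: 2 }, ts: Date.now() });
    socket.write(payload + '\n');
  }, 5000);
});

socket.on('error', (err) => {
  // consumer is down: decide whether to buffer or discard the data
  console.error('connection lost:', err.message);
});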
It is not uncommon to think about distributing the logic of an application between different servers, whether because of scalability, security, or any other concern. In such a scenario it's important to have reliable channels of communication between the separate modules or applications.
A practical case could look like this:
(Server #1) You have a DB table filling up with tasks (in the form of table entries) that need to be processed.
(Server #2) You have an arbitrator that fetches these tasks one by one so as to handle them in some specific fashion.
(Server #3 -- #n) You have multiple worker applications that receive tasks from the arbitrator and return the results back to it.
Now imagine that everything is programmed with Node.js. You want the worker servers to be able to spawn when more resources are needed and be terminated when the processing load is low. When a worker node is created it has to connect back to the arbitrator to signal that it is ready to receive tasks.
What are the available options for connecting the worker nodes to the arbitrator, such that the arbitrator can detect when a new worker node connects to it and data can start to flow between the two? In other words, how do you go about creating reliable, stateful channels of communication between two remote Node.js applications?
As much as this shouldn't turn into a battle of messaging technologies, another option is RabbitMQ. They have quick tutorials for both worker queues and remote procedure calls (RPC).
Although these tutorials are in Python, they are still easy to follow (and I believe a bit of googling will find you Node translations on GitHub).
In your situation, Rabbit will be able to handle dispatching messages to particular workers; however, I think you will have to write your scaling logic yourself.
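A minimal sketch of the worker-queue side using the amqplib module (the queue name "tasks" and the local broker URL are assumptions):

// Arbitrator publishes tasks; any worker that connects picks them up.
const amqp = require('amqplib');

async function main() {
  const conn = await amqp.connect('amqp://localhost');
  const ch = await conn.createChannel();
  await ch.assertQueue('tasks', { durable: true });

  if (process.env.ROLE === 'arbitrator') {
    ch.sendToQueue('tasks', Buffer.from(JSON.stringify({ id: 1 })));
  } else {
    ch.prefetch(1); // at most one unacknowledged task per worker
    ch.consume('tasks', (msg) => {
      console.log('got task:', msg.content.toString());
      ch.ack(msg); // Rabbit redelivers unacked tasks if a worker dies
    });
  }
}

main().catch(console.error);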
zeromq is a good option for that.
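For example, a push/pull pipeline (this sketch assumes the older callback-style zeromq bindings; recent versions of the zeromq package expose an async API instead):

// Arbitrator PUSHes tasks; workers PULL them as they connect.
const zmq = require('zeromq');

if (process.env.ROLE === 'arbitrator') {
  const push = zmq.socket('push');
  push.bindSync('tcp://127.0.0.1:3000');
  setInterval(() => push.send(JSON.stringify({ id: Date.now() })), 1000);
} else {
  const pull = zmq.socket('pull');
  pull.connect('tcp://127.0.0.1:3000'); // workers can come and go freely
  pull.on('message', (msg) => console.log('task:', msg.toString()));
}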
I have a simple Node.js server app built that I'm hoping to test out soon. It's single-threaded and works fine without any child processing whatsoever. My problem is that the server box has multiple cores, and the simplest way I can think of to utilize them is by running multiple instances of the server app. However, this would require them all to be on the same domain name, so some sort of request routing is required. I personally don't have much experience with servers in general and don't know whether this is a task for Node.js to perform or for some other, less complicated program (or a more complicated one). If there is a Node.js mechanism to solve this (for example, one running instance passing incoming requests to the next instance), how would I detect when this needs to happen? Conversely, if I use some other program, how will it detect when it needs to start talking to a new instance?
Node.js includes built-in support for managing a cluster of instances of your application to take advantage of multiple cores via the cluster module.
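A minimal sketch of that (port 3000 is arbitrary): all workers listen on the same port, and the cluster module distributes incoming connections among them, so no external request router is needed for this case.

// One worker per core, all sharing a single port
const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isMaster) {
  os.cpus().forEach(() => cluster.fork());
} else {
  http.createServer((req, res) => {
    res.end(`handled by worker ${cluster.worker.id}\n`);
  }).listen(3000);
}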