Connecting Node.js applications running on different servers

It is not uncommon to distribute the logic of an application across different servers, whether for scalability, security, or some other concern. In such a scenario it's important to have reliable channels of communication between the separate modules or applications.
A practical case could look like this:
(Server #1) You have a DB table filling up with tasks (in the form of table entries) that need to be processed.
(Server #2) You have an arbitrator that fetches these tasks one by one so as to handle them in some specific fashion.
(Server #3 -- #n) You have multiple worker applications that receive tasks from the arbitrator and return the results back to it.
Now imagine that everything is programmed with Node.js. You want the worker servers to be able to spawn when more resources are needed and be terminated when the processing load is low. When a worker node is created it has to connect back to the arbitrator to signal that it is ready to receive tasks.
What are the available options for connecting the worker nodes to the arbitrator, such that the arbitrator can detect when a new worker node connects and data can start to flow between them? Or, in other words, how does one go about creating reliable, stateful channels of communication between two remote Node.js applications?

As much as this shouldn't turn into a battle of messaging technologies, another option is RabbitMQ. They have quick tutorials for both worker queues and remote procedure calls (RPC).
Although these tutorials are in Python, they are still easy to follow (and a bit of googling will find you Node translations on GitHub).
In your situation, Rabbit will be able to handle dispatching messages to particular workers; however, I think you will have to write your scaling logic yourself.
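For reference, a minimal sketch of the worker-queue pattern using the amqplib package (an assumption on my part; any Node AMQP client would work similarly), where the arbitrator publishes tasks and each worker pulls one at a time:

const amqp = require('amqplib');

// Arbitrator side: publish a task onto a durable queue.
async function publishTask(task) {
  const conn = await amqp.connect('amqp://localhost');
  const ch = await conn.createChannel();
  await ch.assertQueue('tasks', { durable: true });
  ch.sendToQueue('tasks', Buffer.from(JSON.stringify(task)), { persistent: true });
}

// Worker side: take one task at a time and acknowledge it when done,
// so RabbitMQ can dispatch the next one.
async function startWorker() {
  const conn = await amqp.connect('amqp://localhost');
  const ch = await conn.createChannel();
  await ch.assertQueue('tasks', { durable: true });
  ch.prefetch(1);
  ch.consume('tasks', (msg) => {
    const task = JSON.parse(msg.content.toString());
    // ...process the task...
    ch.ack(msg);
  });
}

The queue name 'tasks' and the broker URL are placeholders; workers scale out simply by starting more consumer processes against the same queue.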

zeromq is a good option for that.
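As a rough illustration only, assuming the classic callback API of the zeromq npm package (version 5.x; the 6.x API is promise-based), the arbitrator could bind a PUSH socket and each worker could connect a PULL socket back to it. New workers just connect and start receiving tasks; the hostname below is hypothetical.

const zmq = require('zeromq');

// Arbitrator: bind once and push tasks to whichever workers are connected.
const sender = zmq.socket('push');
sender.bindSync('tcp://*:5555');
sender.send(JSON.stringify({ taskId: 1 }));

// Worker (separate process/server): connect back to the arbitrator.
const receiver = zmq.socket('pull');
receiver.connect('tcp://arbitrator-host:5555');
receiver.on('message', (msg) => {
  const task = JSON.parse(msg.toString());
  // ...process the task and report the result back over another socket...
});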

Related

Sharing data between node.js app and lua app

I have two applications, a node.js app running on node-webkit, and a lua application. I need to pass data between the two applications at regular intervals, say every 5 to 15 seconds.
The node.js application is the one creating the data, and the lua application is the one consuming the data. The data only goes to one direction.
How should I do the data transfer? I would prefer JSON/XML for the data, but it can actually be in any other format as well. The amount of data moved at a time is not large: just some ten parameters.
My initial thought was to just make the node app act as a server and serve the data via a REST API, with the lua app simply reading the page with LuaSocket or such. But is there a better way to do the transfer if both of the apps reside on the same machine? Currently the lua app runs on Windows, but that could change.
My background is in web development, so I'm totally lost when it comes to sharing data between applications. I'm also new to lua. Thanks for any answers.
There are many ways to accomplish such a task. I will describe two of them.
The first approach, which I like most, is using a remote queue such as Apache Kafka, Redis, RabbitMQ, or even ZooKeeper for small data; alternatively, you can store the data in a database. All of these remote storage systems have very good Node.js modules, and all of them handle JSON and any other data type very well.
Unless this is a mere test app, it is good to build such fault tolerance into your apps. In your case, imagine the consumer Lua app going down, or the opposite, the Node.js producer app going down. You don't want a failure in one app to affect the other. In a production environment, it is best to isolate apps and tasks like this. Another advantage of this approach is that one day you may decide to rewrite your consumer in Node.js, Scala, etc., or have multiple consumers in different languages. This doesn't require your server to stop or change; it doesn't even have to know about any changes to the consumer.
So, your producer server always pushes data to a remote data store/queue independently, and the consumer server reads and then deletes the data from this remote store at its own pace.
If you used a database, you would read the new records, consume them, and once done, remove them from the database. This approach allows you to shut down the consumer and producer apps independently, for any reason such as an upgrade.
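To make the first approach concrete, here is a rough sketch of the producer side using a Redis list as the queue (assuming the ioredis client; the key name and connection details are placeholders). The Lua consumer would read the same list with any Redis client, e.g. via RPOP or a blocking BRPOP, deleting each entry as it consumes it.

const Redis = require('ioredis');
const redis = new Redis({ host: 'localhost', port: 6379 });

// Producer (Node.js side): push each reading onto a list acting as a queue.
async function pushReading(reading) {
  await redis.lpush('readings', JSON.stringify(reading));
}

pushReading({ temperature: 21.5, humidity: 40 });
// The Lua app would periodically run: RPOP readings  (or BRPOP readings 0)
// and decode the JSON on its side.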
Another approach is to establish a direct network connection from the producer server to the consumer server over TCP. The producer server would be a client pushing data to the consumer server. This can be accomplished with the built-in net module, even if the apps are on different physical machines. But as you can see, this is a less reliable solution, because if the consumer goes down the producer can no longer push new data, in which case you have to decide what to do with it: discard it or store it somewhere. If you store it somewhere, you end up reimplementing the first approach explained above.
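A minimal sketch of that second approach with the built-in net module, using newline-delimited JSON. Both ends are shown in Node purely for illustration; in the question's setup the listening side would be the Lua app, and the port and hostname are placeholders.

const net = require('net');

// Consumer side: accept a connection and parse one JSON object per line.
const server = net.createServer((socket) => {
  let buffer = '';
  socket.on('data', (chunk) => {
    buffer += chunk;
    let newline;
    while ((newline = buffer.indexOf('\n')) !== -1) {
      const record = JSON.parse(buffer.slice(0, newline));
      buffer = buffer.slice(newline + 1);
      // ...consume record...
    }
  });
});
server.listen(7000);

// Producer side: connect and push data as it is created.
const client = net.connect(7000, 'consumer-host', () => {
  client.write(JSON.stringify({ temperature: 21.5 }) + '\n');
});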

Faye clustering multiple nodes NodeJS

I am trying to make a pub/sub infra using faye (nodejs). I wish to know whether horizontal scaling would be possible or not.
One nodejs process will run on a single core, so when people talk about clustering, they usually mean creating multiple processes on the same machine, sharing a port, and sharing data through redis.
Like this:
http://www.davidado.com/2013/12/18/using-node-js-cluster-with-socket-io-for-push-notifications/
Firstly, I don't understand how we make sure that each of the forked processes goes to a different core. If I fork 10 node servers on a machine with 4 cores, is it ensured that they are evenly distributed?
What if I wish to add a new machine, and thus scale out? I have not seen any such support anywhere. I am not sure if it is even possible.
Let's say somehow multiple nodes are being used and there is some load balancer. But one client will connect to only one server process. So when a client C1 publishes on a channel on which a client C2 has subscribed, and C1 is connected to process P1 and C2 is connected to process P2, how will P1 publish the message to C2 when it doesn't have the connection?
This would probably be possible in the case of a single machine, because the cluster module enables all the processes to share the same port and the connections too.
I am fairly new to the web world, as well as nodejs and faye. Please enlighten me if there is something wrong in the question.
You are correct in thinking that the cluster module allows multiple cores to be used on a single machine. The cluster module allows the same application to be spawned multiple times whilst listening to the same port. The distribution amongst the cores is down to the operating system, so if you have 10 processes and 4 cores then the OS will figure out how best to distribute them (as long as they haven't been spawned with a set affinity). By default this shouldn't be a concern for you.
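For illustration, the basic cluster pattern looks roughly like this sketch (not tuned for production; the port is arbitrary):

const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isMaster) {
  // Fork one worker per CPU core; the OS decides which core each runs on.
  os.cpus().forEach(() => cluster.fork());
  cluster.on('exit', (worker) => {
    console.log('worker ' + worker.process.pid + ' died, spawning a replacement');
    cluster.fork();
  });
} else {
  // Every worker listens on the same port; the master distributes connections.
  http.createServer((req, res) => {
    res.end('handled by pid ' + process.pid + '\n');
  }).listen(8000);
}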
Load-balancing can be done through node too but that is separate from clustering. Instead you would have a separate application that would grab the load statistics on each running server and proxy the http request to the most appropriate server (using http-proxy as an example). A very primitive load balancer will send one request to each running server instance incrementally to give an even distribution.
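A very primitive round-robin balancer along those lines might look like this sketch (assuming the http-proxy package; the backend addresses are placeholders, and a real balancer would use the load statistics mentioned above instead of blind rotation):

const http = require('http');
const httpProxy = require('http-proxy');

const backends = ['http://10.0.0.1:8000', 'http://10.0.0.2:8000'];
const proxy = httpProxy.createProxyServer({});
let next = 0;

http.createServer((req, res) => {
  // Send each incoming request to the next backend in turn.
  proxy.web(req, res, { target: backends[next++ % backends.length] });
}).listen(80);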
The final point about sharing messages between all the instances assumes that there is a single point where all the messages are held. In the article you linked to they assume that there is only one server and all the processes share access to the redis instance. As they all access the same redis instance, all processes will be able to receive the same messages. If we're going to start thinking about multiple servers that are in different locations in the world that all have different message stores (i.e. their own redis instances) then we get into the domain of 'replication'. Some data stores are built with this in mind and redis is one of them. You end up with a 'master' set of data and a set of 'slaves' that will periodically update with the master and grab anything they are missing. It is important to note here that messages will not be sent in 'real-time' here unless you have a very intensive replication process.
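In code, that single shared message store typically means each process publishes to and subscribes on the same Redis instance. A sketch assuming the classic node_redis callback API (channel name arbitrary):

const redis = require('redis');

// Each server process keeps one connection for publishing and one for subscribing.
const pub = redis.createClient();
const sub = redis.createClient();

sub.subscribe('messages');
sub.on('message', (channel, message) => {
  // Deliver the message to the clients connected to *this* process.
});

// When a client connected to this process sends something, publish it so
// every other process (and therefore their clients) sees it too.
function broadcast(message) {
  pub.publish('messages', message);
}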
In conclusion, developers go through this chain of scaling for their applications. The first is to make the application multi-process (the cluster module). The second is to have a load balancer that proxies the http request to the appropriate server that is running the multi-process application. The third is to replicate the datastores so that the servers can run independently but keep in sync with each other.

NodeJS + SocketIO: Scaling and preventing single point of failure

The first app that people usually build with SocketIO and Node is a chat app. This chat app basically has one Node server that broadcasts to multiple clients. In the Node code, you would have something like:
// Pseudocode: broadcast the message to every connected client except the sender
for (const client of clients) {
  if (client !== messageSender) {
    client.send(message);
  }
}
This is great for a low number of users, but I see a problem with it. First of all, there is a single point of failure, which is the Node server. Second of all, the app will slow down as the number of clients grows. What is there to do when we reach this bottleneck? Is there an architecture (horizontal/vertical scaling) that can be used to alleviate this problem?
For that "one day" when your chat app needs multiple, fault-tolerant node servers, and you want to use socket.io to cross communicate between the server and the client, there is a node.js module that fits the bill.
https://github.com/hookio/hook.io
It's basically an event emitting framework to cross communicate between multiple "things" -- such as multiple node servers.
It's relatively complicated to use, compared to most modules, which is understandable since this is a complex problem to solve.
That being said, you'd probably have to have a few thousand simultaneous users and lots of other problems before you begin to have problems with this.
Another thing you can do is develop your application in such a way that if a connection is lost (which happens all the time anyway), e.g. the server goes down or the client has network issues (say, a mobile user), your application can handle that and recover gracefully.
Since Node.js has a single event-loop thread, this single point of failure is written into its DNA. Even reloading a server after code changes requires this thread to be stopped.
There are, however, a lot of tools available to handle such failures gracefully. You could use forever, a simple CLI tool for ensuring that a given script runs continuously. Other options include distribute and up. Distribute is a load-balancing middleware for Node. Up builds on top of Distribute to offer zero-downtime reloads using either a JavaScript API or a command line interface.
Further reading: I find you just need to use the Redis store with Socket.io to maintain connection references between two or more processes/servers. These options have already been discussed extensively here and here.
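In more recent terms that means wiring Socket.io to a shared Redis instance. A sketch using the socket.io-redis adapter (the successor to the older RedisStore this answer refers to, so treat the exact API as version-dependent; the port and channel name are placeholders):

const io = require('socket.io')(3000);
const redisAdapter = require('socket.io-redis');

// Every Node process points at the same Redis server, so a broadcast made
// in one process is forwarded to clients connected to the others.
io.adapter(redisAdapter({ host: 'localhost', port: 6379 }));

io.on('connection', (socket) => {
  socket.on('chat', (msg) => {
    io.emit('chat', msg); // reaches clients on every process, not just this one
  });
});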
There's also the option of using socket.io-clusterhub if you don't intend to use the Redis store.

Seeking tutorials and information on load-balancing between threads

I know the term "Load Balancing" can be very broad, but the subject I'm trying to explain is more specific, and I don't know the proper terminology. What I'm building is a set of Server/Client applications. The server needs to be able to handle a massive amount of data transfer, as well as client connections, so I started looking into multi-threading.
There are essentially 3 ways I can see to implement any sort of threading for the server...
One thread handling all requests (defeats the purpose of a thread if 500 clients are logged in)
One thread per user (which is risky: creating one thread for each of the 500 clients)
Pool of threads which divide the work evenly for any number of clients (What I'm seeking)
The third one is what I'd like to know more about. It consists of a setup like this:
Maximum 250 threads running at once
500 clients will not create 500 threads, but share the 250
A Queue of requests will be pending to be passed into a thread
A thread is not tied down to a client, and vice-versa
Server decides which thread to send a request to based on activity (load balance)
I'm not currently seeking any code quite yet, but rather information on how a setup like this works, and preferably a tutorial on accomplishing this in Delphi (XE2). Even the proper word or name for this subject would be sufficient, so that I can do the searching myself.
EDIT
I found it necessary to explain a little about what this will be used for. I will be streaming both commands and images; there will be a double-socket setup with one "Main Command Socket" and another "Add-on Image Streaming Socket". So really one connection is 2 socket connections.
Each connection to the server's main socket creates (or re-uses) an object representing all the data needed for that connection, including threads, images, settings, etc. For every connection to the main socket, a streaming socket is also connected. It's not always streaming images, but the command socket is always ready.
The point is that I already have a threading mechanism in my current setup (one thread per session object) and I'd like to shift that over to a pool-like multithreading environment. The two connections together require higher-level control over these threads, and I can't rely on something like Indy to keep these synchronized; I'd rather know how things work than learn to trust something else to do the work for me.
IOCP server. It's the only high-performance solution. It's essentially asynchronous in user mode ('overlapped I/O' in M$-speak): a pool of threads issues WSARecv, WSASend, and AcceptEx calls and then all wait on an IOCP queue for completion records. When something useful happens, a kernel threadpool performs the actual I/O and then queues up the completion records.
You need at least a buffer class and a socket class (and probably others for high performance: objectPool and pooledObject classes so you can make socket and buffer pools).
500 threads may not be an issue on a server class computer. A blocking TCP thread doesn't do much while it's waiting for the server to respond.
There's nothing stopping you from creating some type of work queue on the server side, served by a limited size pool of threads. A simple thread-safe TList works great as a queue, and you can easily put a message handler on each server thread for notifications.
Still, at some point you may have too much work, or too many threads, for the server to handle. This is usually handled by adding another application server.
To ensure scalability, code for the idea of multiple servers, and you can keep scaling by adding hardware.
There may be some reason to limit the number of actual work threads, such as limiting lock contention on a database, or something similar, however, in general, you distribute work by adding threads, and let the hardware (CPU, redirector, switch, NAS, etc.) schedule the load.
Your implementation is completely tied to the communications components you use. If you use Indy, or anything based on Indy, it is one thread per connection - period! There is no way to change this. Indy will scale to 100's of connections, but not 1000's. Your best hope to use thread pools with your communications components is IOCP, but here your choices are limited by the lack of third-party components. I have done all the investigation before and you can see my question at stackoverflow.com/questions/7150093/scalable-delphi-tcp-server-implementation.
I have a fully working distributed development framework (threading and comms) that has been used in production for over 3 years now across more than a half-dozen separate systems and basically covers everything you have asked so far. The code can be found on the web as well.

Number of threads in a middleware application

I am writing an application server (again, unrelated to a question I already posted here) and I am wondering what strategies to use when creating worker threads that work on the database. Some preliminary details: the server receives XML and sends back XML, all the requests query a database, and each request could take from a few milliseconds to a few seconds.
Say, for example, that your server services a small to medium number of clients, which in turn send a small number of requests per connection. Is it safe to have one worker thread per connection, or should it be per request? Also, should a thread pool be used to limit the resources used by the server, or should a worker be added each time a new connection/request is made?
Should the server limit the number of threads it creates to an upper limit?
Hope I am not too vague ... I can hardly keep my eyes open.
If you don't have extensive experience, writing application servers is a daunting task. It can be eased by using frameworks like ACE, which allow you to build different configurations of your app-serving infrastructure (thread per connection, thread pools, leader/follower) and then load the appropriate configuration with an extensible service framework.
I would recommend reading these books on ACE:
C++ Network Programming: Mastering Complexity Using ACE and Patterns
C++ Network Programming: Systematic Reuse with ACE and Frameworks
They will give you an idea of what the framework can do for you.
The way I write apps like this is to make the number of threads configurable via the command line and/or a configuration file. I then do some load testing with different numbers of threads - there is always an optimal number beyond which performance begins to degrade.
If you follow the model adopted by Java EE app server developers, there's a queue for incoming requests and a pool of worker threads to service them; it's one thread per request. When a worker thread fulfills a request, it goes back into the pool. If incoming requests show up faster than the worker thread pool can service them, the queue allows them to stack up until a worker thread is released. Both the queue size and the thread pool size can be tuned to match your situation.
I'd wonder why anyone would feel the need to write their own server from scratch, especially when the scenario you describe is solved so well by others. If your wish is education, good luck. If you think you're going to improve on what's been done in the past, I'd re-examine that assumption.