I am working with Node.js, and my question is:
Process A runs on computer A, and process B runs on computer B. Now I want to broadcast a message to both of them. How can I do that?
You should look into a message queue.
Redis has pub/sub capability and is a common backend component.
You could also consider ZeroMQ depending on what your requirements are.
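As a concrete illustration, here is a minimal Redis pub/sub sketch using the redis npm package (v4-style API; the channel name "broadcast" and the localhost URL are assumptions for illustration, not anything prescribed above):

// subscriber.js -- run on both computer A and computer B
const { createClient } = require('redis');

(async () => {
  const subscriber = createClient({ url: 'redis://localhost:6379' });
  await subscriber.connect();
  // Every subscriber of the "broadcast" channel receives each published message.
  await subscriber.subscribe('broadcast', (message) => {
    console.log('received:', message);
  });
})();

// publisher.js -- run anywhere that can reach the Redis server
const { createClient } = require('redis');

(async () => {
  const publisher = createClient({ url: 'redis://localhost:6379' });
  await publisher.connect();
  await publisher.publish('broadcast', 'hello, processes A and B');
  await publisher.quit();
})();

Both processes subscribe to the same channel, so a single publish reaches them regardless of which machine they run on.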
You're probably looking for dnode, a very good system for communication between node.js processes.
Related
I am currently planning a complicated networking project on Windows IoT Enterprise. Basically, I will have a C program keeping alive a special network interface. This C program should receive tasks in some way from other programs on the same host, which will generally be written in all sorts of languages (e.g. node.js). I have never done this kind of cooperation between tasks. Do you have any advice on how a node.js server can pass information to an already running C program, and preferably receive a success code or an error message?
It is very important to me that this process is as fast as possible, as this solution will handle several thousand requests per second.
In one of the comments I was pointed towards ZeroMQ, and I am now using it successfully in my application. Thank you for the help!
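For readers who land here with the same question: a minimal Node-side sketch of that request/reply pattern with the zeromq npm package (v6-style API; the tcp endpoint and the JSON message format are assumptions, and the C program would hold the matching REP socket via libzmq):

// dispatch.js -- Node.js side: send one task, wait for the status reply.
const zmq = require('zeromq');

(async () => {
  const sock = new zmq.Request();
  // The long-running C program binds a REP socket on this endpoint.
  sock.connect('tcp://127.0.0.1:5555');

  await sock.send(JSON.stringify({ task: 'configure-interface' }));
  const [reply] = await sock.receive();  // success code or error message
  console.log('C program replied:', reply.toString());
})();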
I'm developing a lightweight framework to act as a coordinator in a robotics competition I compete in.
My idea is to have programs that are agnostic about the whole, just with inputs that may trigger outputs. I then connect those outputs to inputs, and can get different behaviours out of the same modules without hard work.
I'm planning on doing this with Node.js and WebKit, to allow a nice UI for modifying the process. However, each "module" might not really be code wrapped in some JavaScript class-like function; it might be a real thread, running maybe some native C++ code (without Node.js), or even a Python program.
What I'm facing now is finding a fast, and also generic, way to exchange data among processes. I have read about it, but haven't come to any conclusions...
Here are the three methods I found:
Local socket: uses localhost to dispatch a broadcast to a port
Unix socket: maybe more efficient than the above (but it goes through the filesystem?)
Stdin/stdout communication: when a process is launched by Node.js, binding its stdin and stdout can be used to communicate between the programs
So, I have those three ways of doing it; which should I use? I need things to communicate REALLY fast (data might go through 5 different processes, and the whole chain must not exceed 2 ms).
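For the stdin/stdout route specifically, here is a minimal sketch of what the Node.js side could look like (the newline-delimited JSON framing and the worker.py child module are assumptions for illustration):

// parent.js -- spawns a module and exchanges newline-delimited JSON over stdio.
const { spawn } = require('child_process');

const child = spawn('python3', ['worker.py']);  // hypothetical non-Node module

child.stdout.on('data', (chunk) => {
  // Real code should buffer until "\n" before parsing: chunks may split messages.
  console.log('output from module:', chunk.toString());
});

// Send an "input" event to the module.
child.stdin.write(JSON.stringify({ event: 'input', value: 42 }) + '\n');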
It is not uncommon to think about distributing the logic of an application between different servers whether because of scalability, security or any other arbitrary concern. In such a scenario it's important to have reliable channels of communication between the separate modules or applications.
A practical case could look like this:
(Server #1) You have a DB table filling up with tasks (in the form of table entries) that need to be processed.
(Server #2) You have an arbitrator that fetches these tasks one by one so as to handle them in some specific fashion.
(Server #3 -- #n) You have multiple worker applications that receive tasks from the arbitrator and return the results back to it.
Now imagine that everything is programmed with Node.js. You want the worker servers to be able to spawn when more resources are needed and be terminated when the processing load is low. When a worker node is created it has to connect back to the arbitrator to signal that it is ready to receive tasks.
What are the available options for connecting the worker nodes to the arbitrator, such that the arbitrator can detect when a new worker node connects and data can start to flow between them? Or, in other words, how do you go about creating reliable, stateful channels of communication between two remote Node.js applications?
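To make the question concrete, the simplest stateful channel is a raw TCP connection built on Node's net module, roughly like the sketch below (the port, host name, and newline-delimited JSON framing are assumptions):

// arbitrator.js -- notices workers as they connect and keeps the sockets open.
const net = require('net');

const workers = new Set();
net.createServer((socket) => {
  workers.add(socket);  // a new worker just announced itself
  socket.on('close', () => workers.delete(socket));
  socket.write(JSON.stringify({ type: 'task', id: 1 }) + '\n');
}).listen(7000);

// worker.js -- connects back to the arbitrator to signal that it is ready.
const net = require('net');

const socket = net.connect(7000, 'arbitrator.example.com', () => {
  socket.write(JSON.stringify({ type: 'ready' }) + '\n');
});
socket.on('data', (chunk) => {
  // Real code should buffer until "\n"; process the task, then write back results.
  console.log('task received:', chunk.toString());
});

The answers below point at message brokers, which add exactly the reliability and delivery guarantees this bare sketch lacks.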
As much as this shouldn't turn into a battle of messaging technologies, another option is RabbitMQ. They have quick tutorials for both worker queues and remote procedure calls (RPC).
Although these tutorials are in Python, they are still easy to follow (and I believe a bit of googling will turn up Node translations on GitHub).
In your situation, Rabbit will be able to handle dispatching messages to particular workers, however I think you will have to write your scaling logic yourself.
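By way of illustration, a minimal work-queue sketch with the amqplib package might look like this (the queue name and the localhost broker URL are assumptions; the official tutorials cover acknowledgements and durability properly):

// arbitrator.js -- pushes tasks onto a queue for whichever worker is free.
const amqp = require('amqplib');

(async () => {
  const conn = await amqp.connect('amqp://localhost');
  const ch = await conn.createChannel();
  await ch.assertQueue('tasks');
  ch.sendToQueue('tasks', Buffer.from(JSON.stringify({ id: 1 })));
})();

// worker.js -- each queued message is delivered to exactly one worker.
const amqp = require('amqplib');

(async () => {
  const conn = await amqp.connect('amqp://localhost');
  const ch = await conn.createChannel();
  await ch.assertQueue('tasks');
  ch.consume('tasks', (msg) => {
    console.log('processing:', msg.content.toString());
    ch.ack(msg);  // tell Rabbit the task finished, so it is not redelivered
  });
})();

Because workers simply attach to the queue, adding capacity is just starting more worker processes; deciding when to spawn or terminate them is the part you write yourself.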
zeromq is a good option for that.
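Expanding on that: the ZeroMQ pattern matching this setup is PUSH/PULL, which load-balances tasks across however many workers happen to be connected. A minimal sketch with the zeromq npm package (v6-style API; the endpoint is an assumption):

// arbitrator.js -- a PUSH socket fans tasks out across connected workers.
const zmq = require('zeromq');

(async () => {
  const push = new zmq.Push();
  await push.bind('tcp://*:7001');
  await push.send(JSON.stringify({ id: 1 }));
})();

// worker.js -- a PULL socket; new workers can connect (scale up) at any time.
const zmq = require('zeromq');

(async () => {
  const pull = new zmq.Pull();
  pull.connect('tcp://arbitrator.example.com:7001');
  for await (const [msg] of pull) {
    console.log('task:', msg.toString());
  }
})();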
So the first app that people usually build with Socket.IO and Node is a chat app. This chat app basically has one Node server that broadcasts to multiple clients. In the Node code, you would have something like:
// Pseudocode
for (const client of clients) {
  if (client !== messageSender) {
    client.send(message);  // send to every connected client except the sender
  }
}
This is great for a low number of users, but I see a problem with this. First of all, there is a single point of failure: the Node server. Second, the app will slow down as the number of clients grows. What is there to do when we reach this bottleneck? Is there an architecture (horizontal/vertical scaling) that can be used to alleviate this problem?
For that "one day" when your chat app needs multiple, fault-tolerant node servers, and you want to use socket.io to cross communicate between the server and the client, there is a node.js module that fits the bill.
https://github.com/hookio/hook.io
It's basically an event emitting framework to cross communicate between multiple "things" -- such as multiple node servers.
It's relatively complicated to use, compared to most modules, which is understandable since this is a complex problem to solve.
That being said, you'd probably have to have a few thousand simultaneous users and lots of other problems before you begin to have problems with this.
Another thing you can do is develop your application in such a way that if a connection is lost (which happens all the time anyway), e.g. the server goes down or the client has network issues (say, a mobile user), your application can handle that and recover from such issues gracefully.
Since Node.js has a single event-loop thread, this single point of failure is written into its DNA. Even reloading a server after code changes requires this thread to be stopped.
There are, however, a lot of tools available to handle such failures gracefully. You could use forever, a simple CLI tool for ensuring that a given script runs continuously. Other options include distribute and up. Distribute is a load-balancing middleware for Node. Up builds on top of Distribute to offer zero-downtime reloads using either a JavaScript API or a command-line interface.
Further reading: I find you just need to use the Redis store with Socket.IO to maintain connection references between two or more processes/servers. These options have already been discussed extensively here and here.
There's also the option of using socket.io-clusterhub if you don't intend to use the Redis store.
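For the Redis route, the wiring is small. A minimal sketch with the socket.io-redis adapter (option names varied across socket.io releases, so treat this as the general shape rather than the exact current API):

// Run one copy of this per server; Redis relays broadcasts between them.
const io = require('socket.io')(3000);
const redisAdapter = require('socket.io-redis');

io.adapter(redisAdapter({ host: 'localhost', port: 6379 }));

io.on('connection', (socket) => {
  socket.on('chat', (message) => {
    // With the adapter in place, this reaches clients connected to any server,
    // not just the clients connected to this particular process.
    socket.broadcast.emit('chat', message);
  });
});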
My understanding is that Node is an 'event-driven', as opposed to sequentially driven, server application. I've come to understand this to mean that with event-driven software the user is in command: they can create an event at any time, and the server is in a state such that it can respond. With sequential software (like a DOS prompt), the application tells the user when it's 'ok' to respond, and may at any given time be unavailable (due to some other process).
Further, my understanding is that applications like Node and EventMachine use a reactor of sorts: they wait for an 'event' to occur, and using a callback they delegate the task to some other worker. OK... so then, what about Rails & Passenger?
Rails might use a server like NGINX with Passenger to spawn new processes when 'events' are received by the system. Is this not conceptually the same idea? If it is, is it just the processing overhead that really separates the two, in that Passenger would potentially need to spawn a new Rails instance while Node is already waiting to handle the request?
Node.js is an event-driven, non-blocking runtime. The key is the non-blocking part. Node doesn't spawn other processes. It runs in one thread (this is for starters... you can actually spawn more now through some modules, I think, but that's another talk).
Anyway, this is different from other typical platforms, where you receive a request and the thread is locked until it has an answer. If you assign it to another thread, that thread is still locked...
In Node you never lock. You receive a request, and the thread continues to receive requests. When a request has been processed, the callback is called.
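A tiny sketch of that behaviour (the setTimeout stands in for any asynchronous work, e.g. a database query):

const http = require('http');

http.createServer((req, res) => {
  // The "work" is only scheduled here; the single thread immediately returns
  // to the event loop and keeps accepting other requests in the meantime.
  setTimeout(() => res.end('done'), 100);
}).listen(3000);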
Hope I made myself understood, and that I used the right terms ;)
Anyway, if you want, this video is nice: http://www.youtube.com/watch?v=jo_B4LTHi3I
The non-blocking/evented I/O that jribeiro described is one part of the answer. Ruby applications tend to be written using blocking I/O, with processes and threads for concurrency.
However, non-blocking and evented I/O are not inherent to Node. You can achieve the same thing in Ruby by using EventMachine, and an in-process evented server like Thin. If you program directly against EventMachine and Thin, then it is architecturally almost the same as Node. That being said, the Ruby ecosystem does not have as many event-friendly libraries and documentation as Node does, so it takes a bit of skill to achieve the same thing in Ruby.
Conversely, the way Phusion Passenger manages processes, i.e. spawning multiple processes, load balancing requests between them, and supervising them, is not unique to Ruby. In fact, Phusion Passenger recently introduced Node.js support; it was open sourced today.