I want to use a communication mechanism between a server and a Linux client, for messaging and discovery. My only requirement is that the client should be as lightweight as possible. Searching the internet, I came across XMPP and MQTT, but I am not sure which of them is the most lightweight. Can anybody please guide me on which is the most lightweight of all? Please also let me know if any other such mechanism exists.
This isn't an easy question, because it's not clear which aspects of "lightweightness" you are looking for. Are you looking for a small implementation (in file size), for minimum CPU usage, or for minimum network requirements?
MQTT and XMPP can both be pretty slim on the client side. Out of the box, without any extensions, MQTT is usually (much) more lightweight on the wire: it's a binary protocol, while XMPP (without any extensions) is XML based. MQTT focuses on efficient pub/sub messaging; if you need something fancy on top of that, you should choose a sophisticated broker (click here for an overview). XMPP offers a bit more out of the box. If you don't need things like friendship requests at the protocol level, MQTT is a solid choice.
Again, both protocols have their use cases (which IMHO don't intersect too much). A pretty good overview of MQTT, XMPP, CoAP and HTTP can be found here on SlideShare.
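To give a feel for how little client code MQTT needs, here is a minimal sketch in Node.js (just as an illustration; your Linux client could be in anything). It assumes the npm mqtt package, and the broker URL and topic layout are made up:

const mqtt = require('mqtt');

// Connect to a (placeholder) broker.
const client = mqtt.connect('mqtt://broker.example.com');

client.on('connect', () => {
  // Subscribe to a hypothetical discovery topic and announce ourselves.
  client.subscribe('devices/+/announce');
  client.publish('devices/client-42/announce', JSON.stringify({ status: 'online' }));
});

client.on('message', (topic, payload) => {
  console.log(`received on ${topic}: ${payload.toString()}`);
});

That is essentially the whole client; the broker does the routing.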
Related
I am developing a web application that has a desktop client written in Java. I am using WebSockets to communicate between the NodeJS server and the web client.
I am trying to decide whether to use a WebSocket or a normal TCP socket to communicate between the NodeJS server and the desktop client.
As I understand it, it would be easier to use a WebSocket, but it would also be a little heavier weight.
How does one make this decision?
It depends on your exact use case:
WebSockets are established by first making an HTTP connection and then upgrading it to the WebSocket protocol. Thus the overhead needed to exchange the first message is considerably higher than with plain sockets. But if you just keep the connection open and exchange all messages through a single established socket, then this does not matter much.
There is a small performance overhead because of the masking, but this is mainly simple computation (XOR), so it should not matter much.
It takes a little more bandwidth because of the framing, but this should not really matter much either.
On the positive side, WebSockets work much better (though not always perfectly) with existing firewalls and proxies, so they integrate more easily into existing infrastructure. You can also use them more easily with TLS, because this is just the WebSocket upgrade inside HTTPS instead of HTTP.
Thus, if you must integrate with existing infrastructure containing firewalls and proxies, then WebSockets might be the better choice. If your application is performance heavy and needs every tiny bit of bandwidth and processor time and the lowest (initial) latency, then plain sockets are better.
Apart from that, if latency is really a problem (as with real-time audio), you might be better off not using any TCP-based protocol at all, whether plain TCP sockets or WebSockets. In that case it might be better to use UDP and deal with the potential packet loss, reordering and duplication inside your application.
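To make the trade-off concrete, here is a rough side-by-side sketch in Node.js. The WebSocket side assumes the npm ws package; hosts, ports and the message protocol are placeholders:

const WebSocket = require('ws');
const net = require('net');

// WebSocket: starts as an HTTP request, then upgrades; every message is framed.
const ws = new WebSocket('ws://example.com:8080');
ws.on('open', () => ws.send('hello over websocket'));
ws.on('message', (data) => console.log('ws got:', data.toString()));

// Plain TCP: no handshake beyond TCP itself and no framing -- you get a raw
// byte stream and must delimit messages yourself (here: newline-delimited).
const sock = net.connect(9000, 'example.com', () => {
  sock.write('hello over tcp\n');
});
sock.on('data', (chunk) => console.log('tcp got:', chunk.toString()));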
Is there a proper WebSocket library for whatever technology you implement the desktop client in? The WebSocket protocol is not trivial, and it's a rather new technology that is not universally supported yet. If you have to implement WebSockets from scratch on top of pure TCP/IP sockets, you can plan on spending a few days before the basic protocol implementation works and you can start building your own protocol on top of it (been there, done that, threw it away the moment a library was available that worked better than my own implementation).
But if you can find a WebSocket implementation for your desktop client, then you could save some work and complexity on the server side by having both the website and the desktop client communicate with the node.js backend in the same way.
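For example, a single Node.js WebSocket endpoint could serve both the browser page and the Java desktop client (the Java side would use whatever Java WebSocket library you pick). A sketch, assuming the npm ws package and a made-up JSON message shape:

const WebSocket = require('ws');

const wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', (socket) => {
  socket.on('message', (data) => {
    // Same JSON protocol regardless of which kind of client connected.
    const msg = JSON.parse(data.toString());
    console.log('received', msg.type);
    socket.send(JSON.stringify({ type: 'ack', of: msg.type }));
  });
});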
I am looking to implement some sort of chat server. And I want it to scale. This seems like a big question, so I guess I expect the answers to be direction pointers, sort of exploratory.
The end-user clients are web or phone clients. I think some sort of WebSocket implementation, such as Socket.IO, is nice.
On the server side I wish to use Node.js. I want the architecture to be scalable so that the number of users is not limited (well, within reason; a big hit is not expected, and if it happens, the chance of getting smarter, more experienced people to work on it is reasonable, instead of just me coding). The number of users per chat room is hopefully not limited, or maybe capped at some large number. That means I need to scale horizontally across several servers written in Node.
Suppose some load balancer (hopefully, in the future, not a single point of failure, but I don't know how I would achieve that, or maybe I'll just move to AWS) dispatches Socket.IO connections from the end clients to the chat servers. Users connected to different servers may be in the same room, so messages need to be relayed to the other servers.
How would I feasibly implement something like this? Hopefully not too complex.
Questions:
(1) If all servers need to handle all messages, since users can be logged on via any of the servers, does this scale?
(2) Do I need some sort of message queue for the servers to talk among themselves? Is pub/sub from RabbitMQ usable for this? Or, if ZeroMQ, how would I scale with pub/sub? The ZeroMQ guide has explanations for scaling REQ/REP style applications to more than one server, but not pub/sub.
(3) Or should I start with XMPP?
I am hoping to make it work as easy as possible.
There's a rather good explanation at the Socket.io site. Have a look at
http://socket.io/docs/using-multiple-nodes/
It suggests using Nginx as HTTP load balancer, Node.js clustering (with sticky sessions) and Redis as the message backend.
I think your goals should be achievable with little to no coding involved, only using the given modules and configuration mechanisms.
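Roughly what that guide boils down to on the application side (package names differ between Socket.IO versions; this sketch assumes the older socket.io-redis adapter and a Redis instance on localhost):

const io = require('socket.io')(3000);
const redisAdapter = require('socket.io-redis');

// Every Node.js instance points at the same Redis, which relays events between them.
io.adapter(redisAdapter({ host: 'localhost', port: 6379 }));

io.on('connection', (socket) => {
  socket.on('chat message', (msg) => {
    // Broadcast reaches sockets connected to *any* of the instances.
    io.emit('chat message', msg);
  });
});

The Nginx sticky-session part is plain load-balancer configuration and lives outside your application code.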
I'm in the process of implementing a streaming protocol in JavaScript. The protocol is defined in terms of byte streams, not messages. I'd like to be able to talk to browsers using this protocol.
I've used Socket.io in the past for easy cross-browser full-duplex networking. However, in this case, I need BSD-style sockets. Ideally, I could code to the Node.js streams API and have the same (or very similar) code work in the browser.
Is there something like Socket.io for byte streams? I.e. well tested, cross-browser, multi-transport, with heartbeating, etc.
So far, http://binaryjs.com/ is the closest to what I need. Unfortunately, the documentation suggests it is somewhat immature. I'd be very happy to find a more stable library with wider browser support.
Socket.IO uses lots of tech behind the scenes in order to be very accessible and reliable. Many users will end up on the long-polling fallback, which is plain HTTP.
While WebSockets do support a binary message type, long polling and the other fallback transports do not work the same way, so Socket.IO will not support binary as long as it is not available on all transports.
Also, WebSockets and Socket.IO are purely message-based communication protocols. With WebSockets, there is framing around each message, which adds overhead for streaming.
What you need is stream-based communication, not message-based. As far as I'm aware this is a long topic, and still not well settled in the web world.
You could, however, look into WebRTC as a future possibility for streaming data; it might meet your needs.
Some other options are plugins or extensions for browsers (Flash, Unity, bespoke stuff, and so on) in order to get real streaming features.
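If you do decide to hand-roll something on top of WebSockets, the core idea is just chunking a Node readable stream into binary messages, which is roughly what stream-over-WebSocket libraries do internally. A very rough sketch (npm ws package; the file name, chunk size and end-of-stream marker are arbitrary, and there is no backpressure handling):

const fs = require('fs');
const WebSocket = require('ws');

const wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', (socket) => {
  // Read the source in 16 KB chunks and ship each chunk as one binary frame.
  const source = fs.createReadStream('./big-file.bin', { highWaterMark: 16 * 1024 });

  source.on('data', (chunk) => socket.send(chunk));
  // Ad-hoc end-of-stream marker so the receiver knows the stream is done.
  source.on('end', () => socket.send(JSON.stringify({ done: true })));
});

A real implementation would also need flow control (pausing the stream when the socket's send buffer fills up), which is exactly the kind of thing a dedicated library handles for you.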
I have built a little system that uses dnode, shoe and browserify on the client, and NodeJS and dnode/shoe on the server end. I'm wondering if it is a good idea to use dnode (RPC) as the sole protocol for a real-time web application.
Let's look at the benefits of DNode or any other RPC interface. I like being able to call functions remotely (RPC). It definitely beats Ajax because you get a consistent interface for communicating from the client to the server and from the server to the client. I'm also betting that you get a small measure of performance over Ajax because of the HTTP overhead involved with Ajax.
However, using RPC, you have to deal with load balancing and with the client connections on the server. But that goes for any WebSocket implementation. With other WebSocket implementations, though, you get a more traditional event-based system, where the client listens to events from the server and responds to them. I tried replicating this sort of interface using EventEmitters, but it's awful, and I keep getting warnings about too many handlers. Ugh!
I'm looking to achieve a lightweight, clean, interface that I can use to develop my application with. One that feels robust and is able to scale to many clients. It needs to feel solid.
I'm not really sure what my question is in writing this post. I'm tasked with updating this codebase I wrote so that connections aren't lost and it's more robust overall. I guess I'm just desperate for advice or consulting on my application. Is anyone willing to discuss this topic (RPC and real-time web applications) with me face to face?
Thanks for reading.
I have been investigating some of the same topics, as it seemed to me that some of the RPC libs were very cool, but not altogether practical for large-scale apps. I actually started with NowJS, realized it was a dead project, moved to DNode/Shoe/Browserify, and finally moved on to SocketStream in an attempt to offload some of the dirty work to a project with a unified goal. I really didn't want to rewrite what others had already done on this subject, and SocketStream makes that easy.
To get back to your question: as you can see on their page, SocketStream uses sticky sessions. This is a big assumption, but one that probably can't be worked around at the moment without further developments. The reason I mention it is that they talk about some of the things they are working on as far as scaling goes. Might be worth a read, or reaching out to the developer to see if you could talk things over with him. Good luck!
What is a good way for two Node.js servers to talk to each other? For example:
MsgPack?
JSON-RPC?
Socket.io (is it possible? How?)
EDIT:
I am talking about 2 node processes, each on a different physical machine;
I don't understand how redis can help me with this...
I'm not really clear on whether you are looking for ways to make two node servers on two physical machines "talk to each other", or two node.js server processes on one machine.
(You could edit your question to make it clearer).
You could look at:
protocol buffers for node
MsgPack-RPC for node
Websocket.MQ
dnode -- this uses socket.io as the transport layer
IPCNode
AMQP with node-amqp and something like RabbitMQ
Or you could go with an in-memory data store like redis
Note: some of these may need some updating
I hope this helps
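If you want to avoid extra dependencies entirely, plain TCP with newline-delimited JSON is often enough between two Node processes. A bare-bones sketch (hostname and port are placeholders):

const net = require('net');

// --- machine A: listen for messages ---
net.createServer((conn) => {
  conn.setEncoding('utf8');
  let buffer = '';
  conn.on('data', (chunk) => {
    buffer += chunk;
    let idx;
    // Each line is one JSON message.
    while ((idx = buffer.indexOf('\n')) !== -1) {
      const msg = JSON.parse(buffer.slice(0, idx));
      buffer = buffer.slice(idx + 1);
      console.log('got', msg);
    }
  });
}).listen(7000);

// --- machine B: connect and send ---
const client = net.connect(7000, 'machine-a.example.com', () => {
  client.write(JSON.stringify({ hello: 'world' }) + '\n');
});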
I would go for redis. The pub/sub semantics are pretty sweet. The node_redis client library is very fast because it can use the lightning-fast C extension library named hiredis. I would just use JSON as my encoding; that will probably be more than fast enough.
You could also use DNode for your communication if you like. I believe it also has socket.io capabilities; you should have a look at the source code to find out.
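A rough sketch of the redis pub/sub approach, assuming the older callback-style node_redis API and a made-up channel name and message shape:

const redis = require('redis');

// Separate clients: a connection in subscriber mode can't issue other commands.
const pub = redis.createClient();
const sub = redis.createClient();

sub.subscribe('chat');
sub.on('message', (channel, message) => {
  console.log(`on ${channel}:`, JSON.parse(message));
});

pub.publish('chat', JSON.stringify({ user: 'alice', text: 'hi' }));

Both processes just point at the same redis instance, so it also works across physical machines.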
It is not really clear from your question what you mean by a Node server talking to another server. You can use anything from sending UDP packets, making TCP connections or HTTP requests, to any of the higher-level mechanisms that others have already pointed out.
For an interesting scenario of Node processes communicating, you may take a look at the 2010 JSConf.eu talk by Mikeal Rogers. He explains how to use CouchDB to do that. Very interesting talk.