Is it possible to do bi-directional (P2P) messaging in Apache Avro over a single connection?
E.g., if I defined a protocol in Avro with a single method, PING, is it possible for the PING request to be made by either endpoint without establishing a second connection?
I have a REST server to handle communication between my database server and Android/iOS devices; the REST server is also able to send push messages via Firebase. My second server is a UDP server that receives and sends messages to an IoT device. Both servers are written in Node.js and run on different EC2 instances.
When my UDP server receives a message from the IoT device, let's say some GPS data, is there a good way to call some methods on my REST server from the UDP server? Or to send the data to it? Are there any ways the two servers can communicate with each other?
You could implement a separate API on your REST server that would be called from your UDP server (see the sketch after the list below).
Interprocess communication is a wide topic and there are plenty of ways to do it; it all depends on your needs:
via HTTP
via TCP/IP or UDP
via a database (or even a file)
using named sockets (on Unix/Linux)
using a pub/sub library
using a message queue library
by piping standard input/output
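For the first option, here is a minimal sketch of what such an internal API could look like, assuming Express on the REST server and the built-in http module on the UDP server; the hostname and endpoint path are placeholders:

    // --- On the REST server: a private endpoint the UDP server can call ---
    const express = require('express'); // Express assumed for brevity
    const app = express();
    app.use(express.json());

    app.post('/internal/gps', (req, res) => {
      // ...store req.body in the database, trigger a Firebase push, etc.
      res.sendStatus(204);
    });
    app.listen(8080);

    // --- On the UDP server: forward a parsed GPS message to the REST server ---
    const http = require('http');

    function forwardGps(data) {
      const body = JSON.stringify(data);
      const req = http.request({
        host: 'rest-server.internal', // placeholder private hostname
        port: 8080,
        path: '/internal/gps',
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          'Content-Length': Buffer.byteLength(body),
        },
      });
      req.end(body);
    }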
I am using Socket.IO with a MEAN stack and it's been excellent for low-latency, bidirectional communication, but what would be the major drawback of using it for relatively static data as well as dynamic data?
My assumption is that it would be more apt for sending more dynamic content. That being said, once a socket connection is established, how much does the amount of communication matter? Is there a point where it would be more appropriate to use HTTP instead, even when a connection is constantly established throughout the user's direct interaction with the application?
Thanks!
WebSockets are a bidirectional data exchange within an HTTP connection. So the question is not whether to use HTTP or WebSockets, because there are no WebSockets without HTTP. WebSockets are often confused with plain (BSD) sockets, but WebSockets are actually a socket-like layer inside an HTTP connection, which is inside a TCP connection, which uses "real" sockets. Or, for anybody familiar with the OSI layers: it is layer 4 (transport) encapsulated inside layer 7 (application). The main reason for doing it this strange way instead of using layer 4 directly is that plain sockets to ports outside of HTTP, SMTP and a few other protocols are often no longer possible because of all the port-blocking firewalls.
So the question should rather be whether simple HTTP is enough, or whether you need WebSockets (inside HTTP).
With simple HTTP the client sends a request and the server sends the response back. The format is well defined, and browsers and servers transparently support compression, caching and other optimizations. But this simple request-response pattern is limited, because there is no way to push data from the server to the client, or to get a more (BSD-)socket-like behavior where both client and server can send any data at any time. There are various more or less good workarounds for this, like long polling.
WebSockets give you bidirectional communication, which makes it possible for the server to push data to the client, or to send data in both directions at any time. And once the WebSocket connection is established by upgrading an existing HTTP connection, the per-message overhead is very small, much smaller than with a full new HTTP request. While this sounds good, you lose all the advantages of simple request-response HTTP, like caching at the client or in proxies. And because client and server need resources to keep the underlying TCP connection open, WebSockets need more resources, which can be relevant for a busy server. Also, WebSockets might give you more trouble with middleboxes (like proxies or firewalls) than simple HTTP does.
In summary: if you don't need the advantages of WebSockets stay with simple request-response HTTP.
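To make the push capability concrete, here is a minimal sketch using the ws package (the package choice, port and one-second timer are illustrative assumptions, not part of the question):

    // Server-initiated push over an upgraded HTTP connection -- something
    // plain request-response HTTP can only approximate with long polling.
    const WebSocket = require('ws'); // assumes the 'ws' npm package

    const wss = new WebSocket.Server({ port: 8081 });

    wss.on('connection', (socket) => {
      socket.on('message', (msg) => console.log('client says:', msg.toString()));
      // The server sends whenever it wants; the client never has to ask.
      const timer = setInterval(() => socket.send(Date.now().toString()), 1000);
      socket.on('close', () => clearInterval(timer));
    });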
Given a standard Node.js HTTP library, or an existing REST client library, what would be the most feasible way to allow such a library to perform those HTTP requests over the top of my own protocol?
To put this another way: I aim to provide a module which looks like an HTTP client. It accepts HTTP request headers and returns HTTP responses. What options should I consider for adapting an existing REST library to work with my 'pseudo' HTTP client module, as opposed to the standard Node library HTTP client?
Further background information
I wish to create a server application (based on Node.js) which makes HTTP REST requests to a remote embedded device. However, due to NAT, it is not possible for the application server to make client TCP connections directly to the remote device. Therefore, to get around NAT, I will devise my own proprietary protocol which involves the remote device initiating a persistent connection to the application server. Then, once that persistent connection is established, the Node.js application shall be able to make HTTP requests back over that persistent connection to the networked device.
My objective is therefore to create a Node.js module which acts as a 'bridge' layer between incoming socket connections from the networked devices, and the main application which makes REST requests. The aim is that the application would make REST requests as if it were making HTTP client requests to a server, when in fact the HTTP requests and responses are being conveyed on top of the proprietary protocol.
An option I'm presently considering is for my 'bridge' module to implement an interface that mimics http.request(options[, callback]) and somehow force a REST client library to use this interface instead of the Node HTTP client. Presumably, at minimum, I'd have to lightly modify whichever REST client library I used to achieve this.
As explained above, I'm essentially trying to create my own form of NAT traversal using an intermediary server. The intermediary server would provide the front-end UI to users, and make back-end data requests to the embedded networked devices. Connections between embedded devices and application server would be persistent, and initiated from the embedded devices, to avoid the usual NAT headaches (i.e. the requirement to configure port forwarding).
Though I mentioned earlier that I'd achieve the device-to-server connection using my own protocol over a raw socket connection, the mechanism I'm actually experimenting with right now is plain HTTP together with long polling. The embedded device initiates an HTTP connection to the application server, and delayed responses are used to convey data back to the device when the server has something to send. I would then 'tunnel' HTTP requests going in the reverse direction over the top of this.
Therefore, in simple terms, my 'bridge' layer is something that accepts HTTP connections inwards from both sides (outside device connections, and inside web application REST requests). By using long-polling it would effectively convey requests and responses between the connected clients.
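To illustrate the long-polling leg of that bridge, here is a rough sketch of how the server side could park a device's poll and answer it later; the endpoint path, the in-memory map and sendToDevice() are hypothetical names for illustration:

    // Application server: devices long-poll /poll/<deviceId>; the response is
    // held open until the bridge has a request to push down to that device.
    const http = require('http');

    const waitingDevices = new Map(); // deviceId -> parked HTTP response

    http.createServer((req, res) => {
      if (req.url.startsWith('/poll/')) {
        const deviceId = req.url.slice('/poll/'.length);
        waitingDevices.set(deviceId, res); // hold the response open
      } else {
        res.statusCode = 404;
        res.end();
      }
    }).listen(8080);

    // Called by the bridge when a REST request must reach a device: the
    // tunnelled request is written into the device's parked response.
    function sendToDevice(deviceId, payload) {
      const res = waitingDevices.get(deviceId);
      if (!res) return false; // device is not currently polling
      waitingDevices.delete(deviceId);
      res.writeHead(200, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify(payload));
      return true;
    }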
Instead of replacing the HTTP layer, create a man-in-the-middle: an HTTP server in Node that is the target for all of the REST requests. It transfers each request onto the proprietary protocol and handles the response by translating it back into an HTTP response.
This way you don't have to hack the REST client code, and you can even swap it out for another library if needed.
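A minimal sketch of that bridge, where sendOverDeviceTunnel() is a hypothetical stand-in for whatever the proprietary protocol layer exposes:

    // Local HTTP server that any unmodified REST client can target; it relays
    // each request over the device's persistent connection.
    const http = require('http');

    // Hypothetical stand-in for the proprietary protocol layer: write the
    // request onto the persistent device connection and invoke cb with the
    // device's reply when it arrives.
    function sendOverDeviceTunnel(request, cb) {
      cb({ status: 502, headers: {}, body: 'no device connected' }); // stub
    }

    http.createServer((req, res) => {
      let body = '';
      req.on('data', (chunk) => { body += chunk; });
      req.on('end', () => {
        const tunnelled = { method: req.method, path: req.url, headers: req.headers, body };
        sendOverDeviceTunnel(tunnelled, (reply) => {
          res.writeHead(reply.status, reply.headers);
          res.end(reply.body);
        });
      });
    }).listen(3000); // REST clients simply point at http://localhost:3000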
There will be no human being in the loop, and both endpoints are autonomous Node.js applications operating as independent services.
Endpoint A is responsible for contacting Endpoint B via a secure WebSocket, and for maintaining that connection 24/7/365.
Both endpoints will initiate messages independently (without human intervention), and both endpoints will have an API (RESTful or otherwise) to receive and process messages. You might say that each endpoint is both a client of, and a server to, the other endpoint.
I am considering frameworks like Sails.js and LoopBack (implemented on both endpoints), as well as simply passing JSON messages over ws, but remain unclear what the most idiomatic approach would be.
WebSockets carry a fair amount of overhead aimed at connecting to browsers and the like, since they have to remain compatible with HTTP. If you're just connecting a pair of servers, a simple TCP connection will suffice. You can use the net module for this.
Now, once you have that connection, how do you initiate communication? You could go to the trouble of making your own protocol, but I don't recommend it. I found that a simple RPC was easiest. You can use the rpc-stream package over any duplex stream (including your TCP socket).
For my own application, I actually installed socket.io-client and let my servers use it for RPC. Although if I were to do it again, I would use rpc-stream to skip all the overhead required to set up a WebSocket connection.
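A minimal sketch of rpc-stream over a raw TCP socket from the net module; the port, hostname and the ping method are placeholders:

    // server.js -- Endpoint B exposes methods over a plain TCP socket.
    const net = require('net');
    const rpc = require('rpc-stream');

    net.createServer((socket) => {
      const handler = rpc({
        ping: (msg, cb) => cb(null, 'pong: ' + msg), // placeholder method
      });
      socket.pipe(handler).pipe(socket); // wire the RPC over the duplex stream
    }).listen(9000);

And on the other endpoint:

    // client.js -- Endpoint A connects and calls the remote method.
    const net = require('net');
    const rpc = require('rpc-stream');

    const client = rpc();
    const socket = net.connect(9000, 'endpoint-b.example.com'); // placeholder host
    socket.pipe(client).pipe(socket);

    client.wrap(['ping']).ping('hello', (err, reply) => {
      if (err) throw err;
      console.log(reply); // "pong: hello"
    });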
I am using RabbitMQ (cluster) and connecting to it using a Node.js client (node-amqp - https://github.com/postwait/node-amqp).
The RabbitMQ docs state that handling a failover scenario (a cluster node failure) should be handled by the client, meaning the client should detect the failure and connect to another node in the cluster.
What is the simplest way to support this failover behavior? Does the node-amqp client support it?
Any example or solution will be appreciated.
Thanks.
node-amqp has support for multiple server hosts in the connection options, so you can pass host: as an array of hosts. (Unfortunately only the host option accepts an array, so other parameters like port and authentication have to match across your RabbitMQ servers.)
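A minimal sketch of such a connection; the hostnames and credentials are placeholders, and as noted above the port and login must be identical on every node:

    // node-amqp will try the next host in the array when a connection fails.
    const amqp = require('amqp');

    const connection = amqp.createConnection({
      host: ['rabbit1.example.com', 'rabbit2.example.com'], // placeholder hosts
      port: 5672,
      login: 'guest',
      password: 'guest',
    });

    connection.on('ready', () => {
      console.log('connected to a RabbitMQ node');
    });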