WebSocket and/or Request-Response - node.js

I'm creating a chat application, and one detail is that "acknowledgements" are crucial. I'll get to what that means. I'm trying to figure out what the best exchange protocol would be.
Scenario:
Alice sends Bob a message. Bob is offline, so the message is stored on the server. Bob connects to the server through a WebSocket connection. The server sends him the messages that were sent to him while he was away. This is where the problem arises. The WS API available for my app's ecosystem (Node.js, Nest.js specifically) has no pattern for waiting until a message has actually been delivered; the mechanism seems to be fire & forget. What if the payload is quite large and the connection drops while the message is being sent?
Now, I know socket.io has support for acknowledgements. But from what I've read, socket.io adds some overhead and is therefore less performant than optimal. Whether that performance is something I actually need is another question, but I'm just trying to figure out how I can guarantee that a message has arrived on the other end, in both directions: client-to-server and server-to-client. How can I await that? I know that one approach is to attach a unique ID to the socket event and have the other side send you a confirmation that it received it. This is how socket.io does it, if I'm not mistaken.
But my question there is: how can I guarantee that the acknowledgement message itself was successfully delivered? It seems I'd then need an "ack" for my "ack", and so on; I'd always need one more acknowledgement, so I don't see how that ever terminates.
What I thought of as an option is to use two REST endpoints to send and receive (or download) messages. You send when you send, but you only download when you receive a ping that there are messages waiting for you. This could be done through a WebSocket connection where the server notifies the client about a new message, and the client then calls the receive endpoint. The ping could also go through a more managed solution like FCM (a rough sketch of this flow follows after the points below). The pros of that approach are twofold:
First, I have the REST interface to use, which is a lot more practical
Second, I have the Request-Response pattern to use, so I have a theoretical guarantee that things have arrived if I get a response
Now the problem with this approach is that there's a lot of overhead from opening a new HTTP connection every time I want to send or receive messages, if I'm not mistaken:
I have to wait for the initial request to reach the server before I even start waiting for the server to respond with messages. In the pure WebSockets case, I would theoretically only wait for the response-equivalent part.
This wastes bandwidth as well.
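Roughly what I have in mind, as a sketch of that hybrid flow (Express-style for brevity, even though I'd actually use Nest.js; the endpoint names and the storeMessage / notifyRecipientOverWsOrFcm / loadPendingMessagesFor helpers are made up):

const express = require('express');
const app = express();
app.use(express.json());

// Sending: the HTTP response itself is the sender's delivery guarantee.
app.post('/messages', (req, res) => {
  storeMessage(req.body);                    // hypothetical: persist first
  notifyRecipientOverWsOrFcm(req.body.to);   // hypothetical: just a "you have mail" ping
  res.status(201).json({ ok: true });
});

// Receiving: after the ping, the client downloads whatever is waiting for it.
app.get('/messages/pending', (req, res) => {
  res.json(loadPendingMessagesFor(req.query.user)); // hypothetical helper
});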
So one more question: where can I find out which clients will actually re-use an existing HTTP connection (the way a WebSocket connection is re-used) if one is available, instead of creating a new one? Do all clients do that? Is it only the browser? What about apps? Is it handled at the OS level?
So the final question is: how do I solve this problem of "acknowledgements" without wasting time and bandwidth? Are any of my conclusions/questions wrong or uninformed? Am I missing something?
Notes:
The server is Node.js and the client is Flutter.
I know about the WAMP subprotocol, but it doesn't have very reliable implementations for my ecosystem.

I'm not sure what your exact requirements or performance needs are, but I did a project that also needed reliable communication between client and server over WebSocket. The simplest approach I could think of was to build a request-response mechanism on top of WebSocket, and then build your application data on top of that.
Here's a high-level overview of how I implemented it:
Implement request-response messages using a transaction ID to identify which response belongs to which request.
Clients keep a map of pending transactions; when you send a request, you wait for the server to send a response with the same transaction ID, or until a timeout expires.
When the client wants to send a message to the server, it constructs the request as follows:
{
  "event": "sendMessage",
  "type": "request",
  "transaction": "<uuid/unique-value>",
  "data": "<your-application-data>"
}
The server parses the message, checks that it's a request with the event name sendMessage, and calls the related function.
The server then sends a response message back to the client:
{
  "event": "sendMessage",
  "type": "response",
  "transaction": "<uuid/unique-value>", // same unique value as in the request
  "data": "<your-application-data-result>"
}
Because the client has a mapping of which transaction belongs to which request, it can match each response to its originating request; once matched, the transaction is completed.
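To make that concrete, here's a minimal client-side sketch of the pattern in Node.js (assuming a browser-like WebSocket object with onmessage/send; the makeRequester name and the default timeout are my own, not from the answer above):

const { randomUUID } = require('crypto');

function makeRequester(socket, timeoutMs = 10000) {
  const pending = new Map(); // transaction id -> { resolve, reject, timer }

  socket.onmessage = (e) => {
    const msg = JSON.parse(e.data);
    if (msg.type !== 'response') return;
    const entry = pending.get(msg.transaction);
    if (!entry) return;                 // unknown or already timed-out transaction
    clearTimeout(entry.timer);
    pending.delete(msg.transaction);
    entry.resolve(msg.data);
  };

  return function request(event, data) {
    return new Promise((resolve, reject) => {
      const transaction = randomUUID();
      const timer = setTimeout(() => {
        pending.delete(transaction);
        reject(new Error(`${event} timed out`));
      }, timeoutMs);
      pending.set(transaction, { resolve, reject, timer });
      socket.send(JSON.stringify({ event, type: 'request', transaction, data }));
    });
  };
}

// Usage: const request = makeRequester(socket);
// await request('sendMessage', { to: 'bob', text: 'hi' });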

Related

rabbitmq - Problem recovering queue and resume socket messages

I am having serious problems making message delivery fail-proof in a chat system.
With several Node.js servers and live communication to the clients via WebSocket, I use Rabbit to call back the correct consumer on a specific node.
I declare my queues as {durable: true, prefetch:1, expires: 2*3600*1000, autoDelete: true}
consumerOption is {noAck: false, exclusive: false}
Once I receive a message from the server, I callback the server, get the message, and use message.ack(false)
Sometimes a message appears with a pending ACK in Rabbit and, as I would expect, the consumers stop being called back.
Here is my overall strategy:
1- When the socket disconnects, I recover the queue using queue.recover() during the reconnection/connection (more frequent).
2- When I send a message to the server and don't receive it back, I send a message to the server to recover the queue.
3- I use the socket callback function to send the ack confirmation. On the server, I use message.ack(false). The server keeps a hashmap {[ackCode: string]: RabbitMessage}, and I send the ackCode back to the server so it can retrieve the correct message and ack it.
4- If the client doesn't receive any message for 2 minutes, I ask the server to recover the queue.
Step 4 should not exist, but even with this step, sometimes I send a recover-queue request to the server, the server executes the command, but nothing happens and the chat freezes.
These events are very difficult to debug. I am using a TypeScript library that hasn't had a commit in three years, and this could be one of the causes.
Regarding the strategy, is it correct? Any idea on what I could be facing?
What I've learned, and why I think I couldn't use Rabbit to solve the specific problem mentioned in the original post:
The domain: A "chat" where message order is very important (some messages are chains) and we must be sure the message will be delivered if/when the client is online.
The problem: We have several Node.js servers, and sockets are spread among them. Sockets drop all the time, and it is common for a client that was connected to one server to reconnect to another. We don't use cookies, and session affinity by IP won't handle the issue.
Limitations: That being said, I can't activate a consumer that is currently active on another server, so if a client's queue is tied to server 1 I can't activate it on server 2. And all the messages that need to be sent are tied to this specific queue.
Another limitation is that I don't have an easy way to consume queues, re-queue messages, know in advance how many unacked messages are in the queue, aggregate them, and bulk-send them via the socket.
The solution: I am no longer using {noAck: false}; I am controlling the ack in a Redis queue instead. Thus, I am using Rabbit as pub-sub, to call back the correct consumers so they send the message over the socket. When Rabbit wakes me up, the first thing I do is put the message at the end of a Redis queue. When I send a message via the socket, I always start sending messages from the beginning of the queue, regardless of the message that just woke me up. I send the message and wait for the callback event; if it is not ok, I re-queue the messages.
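Roughly, that flow could look like this (a sketch only, using amqplib and ioredis; sendAndWaitForClientAck stands in for the socket callback logic and is not the real code):

const amqp = require('amqplib');
const Redis = require('ioredis');
const redis = new Redis();

async function start(queueName, socket) {
  const conn = await amqp.connect('amqp://localhost');
  const ch = await conn.createChannel();

  // Rabbit is just the wake-up call; noAck: true because Redis now tracks delivery.
  await ch.consume(queueName, async (msg) => {
    await redis.rpush(`deliver:${queueName}`, msg.content.toString());
    drain(queueName, socket);
  }, { noAck: true });
}

async function drain(queueName, socket) {
  const key = `deliver:${queueName}`;
  let next;
  // Always send from the head of the queue, not the message that woke us up.
  while ((next = await redis.lindex(key, 0)) !== null) {
    const ok = await sendAndWaitForClientAck(socket, next); // hypothetical helper
    if (!ok) return;        // leave it queued and retry later
    await redis.lpop(key);  // remove only after the client confirmed it
  }
}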
After decoupling the pub-sub from the queue/ack control, I can now easily move my Rabbit pub/sub from one server to another (declaring it using socket.id and no longer with the client queue), with no concern about losing any message. I am also now capable of much more advanced operations on my queue.
As my use case doesn't allow me to use the full power of exchanges/bindings (I have complex routing rules), I am evaluating the possibility of changing from Rabbit to Redis pub/sub, but in that case I would still keep the pub/sub separate from the queue.
After more than a month trying to make Rabbit work in this scenario, I think I was using a good technology for the wrong use case. It is much simpler now.

Is it possible to have server to server communication with websockets?

I'm trying to have two servers communicate with each other. I'm pretty new to WebSockets, so it's kind of confusing. Also, just to put it out there, I'm not trying to do this: websocket communication between servers.
My goal here is basically to use a socket to read data from another server (if this is possible). I'll try to explain more below.
We'll assume there is a website called https://www.test.com (going to this website returns an object)
With a normal HTTP request, you would just do:
$.get('https://www.test.com').done(function (r) {
  console.log(r);
});
And this would return r, which is an object that looks something like this: {test: '1'}.
Now, from what I understand about WebSockets, you cannot "return" data from them, because you don't actually request data; you just send data through the socket.
Since I know what test.com returns, and I know all of the headers that I'm going to need, is it possible to just open a socket with test.com and wait for that data to change without requesting it?
I understand how client-server communication works with socket.io/WebSockets; I'm just not sure if it's possible to do server-server communication.
If anyone has any links to documentation or anything trying to help explain, it would be much appreciated; I just want to learn how this works (or if it's even possible).
Yes, you can do that (assuming I understood your needs correctly). You can establish a websocket connection between two servers and then either side can just send data to the other. That will trigger an event at the other server, and it will receive the sent data as part of that event. You can do this in either direction, from serverA to serverB or vice versa, or both.
In node.js, everything is event driven. So, you would establish the webSocket connection and then just set up an event handler to be triggered when data arrives. The other server can then just send new data whenever it has updated data to send. This is referred to as the "push" model. So, rather than serverA asking serverB if it has any new data, you establish the webSocket connection and serverB just sends new data to serverA whenever that new data is available. Done correctly, this is both more efficient and more timely (as there is no polling interval and no cycles wasted asking for data when there is nothing new).
The identical model can be used between servers or client to server. The only difference with the client/server model is that the webSocket must be initially established client to server. With the server to server model, either server can initiate the connection.
You can think of a webSocket connection like establishing a phone call. Once the phone call is established, either side can just say something and the other end hears what they're saying. The webSocket connection is similar. Once it's established, either side can just send some data to the other end and the other end will receive it. It's an open pipeline ready to have data sent either way. In node.js, when data arrives on that pipeline, it triggers an event, so the listener will get that event and see the data that was sent.
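For example, a minimal sketch with the ws package (the port and payload shape are arbitrary), where serverB pushes updates and serverA just listens:

// serverB.js -- accepts the connection and pushes data whenever it changes
const WebSocket = require('ws');
const wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', (socket) => {
  socket.send(JSON.stringify({ test: '1' }));   // push now...
  // ...and call socket.send() again whenever the data changes
});

// serverA.js -- initiates the connection and reacts to whatever is pushed
const WebSocket = require('ws');
const ws = new WebSocket('ws://localhost:8080');

ws.on('message', (data) => {
  console.log('update from serverB:', JSON.parse(data.toString()));
});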

Why can't I use res.json() twice in one post request?

I've got a chatbot app where I want to send one message, e.g. res.json("Hello"), from Express, then another message later, e.g. res.json("How are you doing"), but I want to run some code between the two.
My code seems to have some problems with this, because when I delete the first res.json() then the second one works fine and doesn't cause any problems.
Looking in my Heroku logs, I get a lot of gobbledygook in the response from the server, with an IncomingMessage = {} containing readableState and Server objects, when I include both of these res.json() calls.
Any help would be much appreciated.
HTTP is request/response. The client sends a request, and the server sends ONE response. Your first res.json() is your ONE response. You can't send another response to that same request. If it's just a matter of collecting all the data first, you can rethink your code to collect all the data before sending the one response.
But, what you appear to be looking for is "server push" where the server can send data to the client continually whenever it wants to. The usual solution for that is a webSocket connection (or socket.io which is built on top of webSocket and adds more features).
In the webSocket/socket.io architecture, the client makes a connection to the server and the connection is kept open indefinitely. Then either side of the connection can send messages to the other end. This is most useful when the server wants to "push" data to the client at any time. In this case, the client establishes the connection, then the server can send data to the client over that connection at any time. The client registers a listener for incoming messages and will be notified anytime the server sends it some data.
Both webSocket and socket.io are fully supported in modern browsers and in node.js. I would personally recommend using socket.io because some of the features it adds (a messaging layer, auto-reconnect, etc...) are very useful.
To use a continuously connected socket like this, you will have to make sure your hosting infrastructure is properly configured to allow it.
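As a rough socket.io sketch of that idea (the event names and the doSomeProcessing step are placeholders): the bot sends the first reply over the socket right away and pushes the second one whenever it's ready.

const { Server } = require('socket.io');
const io = new Server(httpServer); // assumes an existing Node http server

io.on('connection', (socket) => {
  socket.on('userMessage', async (text) => {
    socket.emit('botMessage', 'Hello');              // first message, right away
    await doSomeProcessing(text);                    // hypothetical work in between
    socket.emit('botMessage', 'How are you doing');  // second message, later
  });
});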
res.json() always sends the response to the client immediately (calling it again will cause an error). If you need to gradually build up a response, you can progressively decorate a plain old JavaScript object, for example by appending items to an array. When you are done, call res.json() with the constructed response.
But you should post your code so we can see what's happening.
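A tiny sketch of that single-response approach (the route and the doSomeProcessing step are placeholders; app is an existing Express app):

app.post('/bot', async (req, res) => {
  const replies = [];
  replies.push('Hello');
  await doSomeProcessing(req.body);   // hypothetical work between the two messages
  replies.push('How are you doing');
  res.json({ replies });              // exactly one response per request
});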

how to make sure the http response was delivered?

To respond to an HTTP request, we can just use return "content" in the handler method.
But for some mission-critical use cases, I would like to make sure the HTTP 200 OK response was actually delivered. Any ideas?
The HTTP protocol doesn't work that way. If you need an acknowledgement then you need the client to send the acknowledgement to you.
Or you could look at implementing a bidirectional socket (a sample library is socket.io) where the client can send the ACK. If it is mission-critical, then don't rely on plain HTTP; use WebSockets.
You can also use AJAX callbacks to gather acknowledgements. One way of building such a solution would be a UUID generated for every request and returned as part of a header:
$ curl -v http://domain/url
....
response:
X-ACK-Token: 89080-3e432423-234234-23-42323
and then the client makes another call:
$ curl http://domain/ack/89080-3e432423-234234-23-42323
So the server would know that the given response has been acknowledged by the client. But you cannot enforce an automatic ACK; it is still up to the client to send it, and if they don't, you have no way of knowing.
PS: The UUID here is not an actual UUID; it's just a random number shared as an example.
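On the server side that could be sketched like this in Express (the in-memory Map is just for illustration; in practice you'd persist the pending tokens somewhere durable):

const express = require('express');
const { randomUUID } = require('crypto');
const app = express();

const unacked = new Map(); // token -> info about the response still awaiting an ACK

app.get('/url', (req, res) => {
  const token = randomUUID();
  unacked.set(token, { sentAt: Date.now() });
  res.set('X-ACK-Token', token);
  res.json({ content: '...' });
});

// The client calls this once it has safely received/processed the response.
app.get('/ack/:token', (req, res) => {
  const known = unacked.delete(req.params.token);
  res.json({ acknowledged: known });
});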
Take a look at Microsoft's asynchronous server socket.
An asynchronous server socket requires a method to begin accepting connection requests from the network, a callback method to handle the connection requests and begin receiving data from the network, and a callback method to end receiving the data (this is where your client could respond with the success or failure of the HTTP request that was made).
Example
It is not possible with HTTP alone. If for some reason you can't use sockets because your implementation requires HTTP (like an API), you must agree on a timeout strategy with your client.
It depends on how many cases you want to handle, but for example you could define something like this:
The client generates an internal identifier and sends the HTTP request including that "ClientID" (like a timestamp or a random number), either in the headers or as a body parameter.
The server responds 200 OK (or an error; it doesn't matter).
The client waits for the server's answer for 60 seconds (you define your maximum timeout).
If it receives the response, it handles it and finishes.
If it does NOT receive the answer, it retries after the timeout, including the same "ClientID" generated in step 1.
The server detects that the "ClientID" was already received.
Either it returns 409 Conflict, informing the client that it "already exists", and the client should know how to handle it.
Or it just returns 200 OK, and the client never knows that the request was received the first time.
Again, this depends a lot on your business/technical requirements, because you could even get two or more consecutive timeout-handling loops.
Hope you get an idea.
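A rough Express sketch of the server side of that scheme (the header name, route, and in-memory store are mine, purely for illustration):

const express = require('express');
const app = express();
app.use(express.json());

const seen = new Map(); // ClientID -> result produced the first time

app.post('/messages', (req, res) => {
  const clientId = req.get('X-Client-Id');
  if (seen.has(clientId)) {
    // Option A: tell the client it already exists...
    return res.status(409).json(seen.get(clientId));
    // Option B (alternative): replay the original 200 so the client never notices.
  }
  const result = { ok: true, id: clientId };
  seen.set(clientId, result);
  res.status(200).json(result);
});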
As #tarun-lalwani already wrote, the HTTP protocol is not designed for that. What you can do is let the app create a file, and have your program check the existence and timestamp of that remote file after the 200 response. This has the implication that every 200 response requires another request to check the file.

send Session Description from node server to client

Do I need to use a websocket to send JSON data to my client? (it's a tiny session description)
Currently my client-side code sends a session description via XHR to my Node.js server. After receipt, my node server needs to send this down to the other client in the 'room'.
I can achieve this using socket.io, but is it possible to do anything a bit faster/more secure, like XHR for example?
If you just want to receive the offer from the other side and nothing else, I would suggest you try HTML5 Server-Sent Events.
But this may bring problems due to differing browser support, so alternatively I would use a simple long-polling request. Since you only want to get the SDP offer, the implementation is pretty simple.
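If you go the Server-Sent Events route, a minimal Express sketch could look like this (the route and the waitForOffer helper are placeholders):

const express = require('express');
const app = express();

app.get('/events/:room', (req, res) => {
  res.set({
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    Connection: 'keep-alive',
  });
  res.flushHeaders();
  waitForOffer(req.params.room).then((sdp) => {   // hypothetical helper
    res.write(`event: offer\ndata: ${JSON.stringify(sdp)}\n\n`);
  });
});

// Browser side:
// const es = new EventSource('/events/room1');
// es.addEventListener('offer', (e) => handleOffer(JSON.parse(e.data)));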
No, you don't need to use the WebSocket API to send JSON data from client to client via a server, but unless you use Google's proprietary App Engine Channel APIs, the WebSocket API is probably your best choice.
Also, please keep in mind that you're not only sending session descriptions, but also candidate info (multiple times) as well as other arbitrary data that you might need to start/close sessions, etc.
As far as I know, the WebSocket API is the fastest solution (faster than XHR) for signalling because all the overhead involved with multiple HTTP requests is non-existent after the initial handshake.
If you want to code things yourself, I'd start by reading the latest WebSocket draft and learning how to write the WebSocket server-side script yourself; otherwise you will pretty much have to rely on a WebSocket library like Socket.IO or a proprietary solution like Google's App Engine Channel APIs.
How about using the 303 HTTP status code?
The first client sends the session description to resource X; the server acknowledges receipt and responds with a 303 status code that points to a newly created resource Y, which accumulates the other clients' session descriptions.
The first client then polls resource Y until it changes.
The second client sends its session description to resource A; the server acknowledges receipt and updates resource Y. The first client notices the update on its next poll and now has the second client's session information.
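A rough Express sketch of that 303 flow, with X, Y and A as in the description above (the in-memory storage and route names are just for illustration):

const express = require('express');
const { randomUUID } = require('crypto');
const app = express();
app.use(express.json());

const sessions = new Map(); // id of resource Y -> accumulated session descriptions

app.post('/x', (req, res) => {
  const id = randomUUID();
  sessions.set(id, [req.body]);       // first client's session description
  res.redirect(303, `/y/${id}`);      // "see other": where to poll for updates
});

app.post('/a', (req, res) => {
  sessions.get(req.body.yId).push(req.body.sdp); // second client updates Y
  res.sendStatus(204);
});

app.get('/y/:id', (req, res) => {
  res.json(sessions.get(req.params.id)); // first client polls this until it grows
});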
