I have read about TChannel which is a networking framing protocol used for general RPC.
https://github.com/uber/tchannel/blob/master/docs/protocol.md
But there are some concepts I don't understand.
"Tchannel is a bi-directional request/response protocol. Each connection between peers is considered equivalent, regardless of which side initiated it. It's possible, perhaps even desirable, to have multiple connections between the same pair of peers. Message ids are scoped to a connection. It is not valid to send a request down one connection and a response down another."
What is bi-direction protocol?
Start with the "request/response" part of the statement. It means that for every successful request sent by host A, there will always be a response sent back by host B.
This does not necessarily imply synchronous behaviour (e.g. A blocking while waiting for B's reply). Indeed, TChannel works asynchronously: many requests can be sent in sequence, and the requester can receive the responses out of order. This way, slow requests do not block faster ones.
How does this work? Very simply: A specifies a message id when sending a request. When B responds, it uses the same message id in the response. This lets A easily correlate each response with its corresponding request.
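Here is a rough sketch of that correlation logic (illustrative only; this is not TChannel's actual wire format or API):

type Pending = { resolve: (body: string) => void };

class Correlator {
  private nextId = 1;
  private pending = new Map<number, Pending>();

  // A assigns a fresh message id to each outgoing request.
  send(body: string, write: (frame: { id: number; body: string }) => void): Promise<string> {
    const id = this.nextId++;
    write({ id, body });
    return new Promise((resolve) => this.pending.set(id, { resolve }));
  }

  // Responses may arrive in any order; the id pairs each one with its request.
  onResponse(frame: { id: number; body: string }): void {
    const entry = this.pending.get(frame.id);
    if (entry) {
      this.pending.delete(frame.id);
      entry.resolve(frame.body);
    }
  }
}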
Now look at the "bi-directional" part of the statement. It means that if host B wants to send a request to host A, it can re-use the same connection that was originally created by A. Even though there is request/response logic, A and B are considered peers, and each connection between peers is considered equivalent, regardless of which host initiated it.
Related
I'm creating a chat application, and one detail is that "acknowledgements" are crucial. I'll get to what that means. I'm trying to figure out what the best exchange protocol would be.
Scenario:
Alice sends Bob a message. Bob is offline, so the message is stored on the server. Bob connects to the server through a WebSocket connection. The server sends him the messages that were sent to him while he was away. This is where the problem arises. The WS API that's available for my app's ecosystem (Node.js, Nest.js specifically) has no pattern where it can wait for this message to be sent. The mechanism there seems to just be fire & forget. What if the payload is quite large and the connection drops while the message is being sent?
Now, I know socket.io has support for acknowledgements. But from what I've read, socket.io has some overhead and therefore less-than-optimal performance. Whether I actually need that performance is another question, but I'm just trying to figure out how I can guarantee that a message has arrived on the other end, in both the client-to-server and server-to-client directions. How can I await it? I know that one approach is to attach a unique ID to the socket event and have the other side send you a confirmation that it received it. This is how socket.io does it, if I'm not mistaken.
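If I'm not mistaken, socket.io's built-in acknowledgements look roughly like this (event name and payloads are made up; server and client sketched together for brevity):

import { Server } from "socket.io";        // server side
import { io } from "socket.io-client";     // client side

// --- server ---
const ioServer = new Server(3000);
ioServer.on("connection", (socket) => {
  socket.on("chat:message", (payload, callback) => {
    // persist the message, then explicitly acknowledge receipt
    callback({ ok: true });
  });
});

// --- client ---
const client = io("http://localhost:3000");
client.emit("chat:message", { text: "hi" }, (ack: { ok: boolean }) => {
  // fires only once the server calls callback(); in v4.4+ you can use
  // client.timeout(5000).emit(...) to get an error instead of waiting forever
});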
But my question then is: how can I guarantee that the acknowledgement message itself was successfully sent? I'd need an "ack" for my "ack", and so on; I'll always need one more acknowledgement, so I don't see how that can work.
What I thought of as an option is to use two REST endpoints to send and receive (or download) messages. You send when you send, but you receive when you get a ping that there are messages for you to download. This ping could be done through a WebSocket connection where the server notifies the client about a new message, after which the client calls the receive endpoint. It could also be done through a more managed solution like FCM. The pros of that approach are twofold:
First, I have the REST interface to use, which is a lot more practical.
Second, I have the request/response pattern to use, so I have a theoretical guarantee that things have arrived if I get a response.
Now the problem with this approach is that there's a lot of overhead from opening a new HTTP connection every time I want to send or receive messages, if I'm not mistaken:
I have to wait for the initial request to reach the server before I even start waiting for the server to respond with messages. In the pure WebSockets case, I would theoretically only wait for the response-equivalent part (?)
This wastes bandwidth as well.
So one more question: where can I find out which clients will actually re-use an existing HTTP connection (like a WebSocket connection) if one is available, rather than creating a new one? Do all clients do that? Is it only the browser? What about apps? Is it handled at the OS level?
So the final question is: how do I solve this problem of "acknowledgements" without wasting time and bandwidth? Are any of my conclusions/questions wrong or uninformed? Am I missing something?
Notes:
server is Node.js and client is Flutter
I know about the WAMP subprotocol, but it doesn't have very reliable implementations for my ecosystem
I'm not sure what your exact requirements or performance needs are, but I did a project that also needed reliable communication between client and server over WebSocket. The simplest approach I could think of was to build a request/response mechanism on top of WebSocket, and then build your application data on top of that.
Here's a high-level overview of how I implemented it:
Implement request/response messages using a transaction id to identify which response belongs to which request.
The client keeps a map of pending transactions: when you send a request, you wait for the server to send back a response with the same transaction id, or you wait until a timeout expires.
A client that wants to send a message to the server constructs the request as follows:
{
  "event": "sendMessage",
  "type": "request",
  "transaction": "<uuid/unique-value>",
  "data": "<your-application-data>"
}
The server parses the message, checks that it is a request with the event name sendMessage, and calls the related function.
The server then sends a response message back to the client:
{
  "event": "sendMessage",
  "type": "response",
  "transaction": "<uuid/unique-value>", // same unique value as in the request
  "data": "<your-application-data-result>"
}
Because the client has a map of which transaction belongs to which request, it can match this response to the request that produced it; once matched, the transaction is completed. A sketch of this client-side bookkeeping follows:
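A minimal sketch, assuming the envelope format from the examples above (the helper names are mine):

import { randomUUID } from "crypto";

type Envelope = {
  event: string;
  type: "request" | "response";
  transaction: string;
  data: unknown;
};

const pending = new Map<
  string,
  { resolve: (data: unknown) => void; timer: NodeJS.Timeout }
>();

// Send a request; the promise settles on the matching response or a timeout.
function request(
  ws: { send(s: string): void },
  event: string,
  data: unknown,
  timeoutMs = 10_000
): Promise<unknown> {
  const transaction = randomUUID();
  const msg: Envelope = { event, type: "request", transaction, data };
  return new Promise((resolve, reject) => {
    const timer = setTimeout(() => {
      pending.delete(transaction);
      reject(new Error(`timed out waiting for response to ${event}`));
    }, timeoutMs);
    pending.set(transaction, { resolve, timer });
    ws.send(JSON.stringify(msg));
  });
}

// Feed every incoming WebSocket message through this.
function onMessage(raw: string): void {
  const msg = JSON.parse(raw) as Envelope;
  if (msg.type !== "response") return; // server-initiated requests handled elsewhere
  const entry = pending.get(msg.transaction);
  if (entry) {
    clearTimeout(entry.timer);
    pending.delete(msg.transaction);
    entry.resolve(msg.data);
  }
}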
When I connect to the socket server from the client side (React), a repeated request is sent by the socket client every few seconds. Generally, the requests are GET requests, and most of the time they sit in a pending state. Sometimes the result of these requests is just 2.
What do you think is causing these repeated requests after connecting or doing anything with the socket?
UPDATE
This problem occurs when I use a namespace. I have tried all the solutions I could find, but the problem was not solved.
This is expected behavior when the option used for transport is polling (long-polling).
What happens is that, by default, the transports parameter is ["polling", "websocket"] (client and server), where the order of the elements matters. So the first connection attempt is made via polling (which is faster to start compared to websocket), and then (or in parallel, I don't know the working details) a connection attempt is made via websocket (this takes a little longer to establish but is faster for subsequent communication).
If the websocket connection is established successfully, communication will be carried out that way. But if an error occurs, or the connection takes too long to establish, or the websocket transport option is not present in the instance's parameters, then communication continues via polling, and those are the various requests you see remaining pending. It is normal for them to remain pending: that way they receive an update and can inform the requester immediately, without the need for several quick requests polling the application's status.
Check the instance parameters you set for this connection to find out whether transport via websocket is enabled. Be careful when running the socket server behind a reverse proxy, as the reverse proxy needs to be properly configured to accept websocket connections, otherwise it won't work.
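For example, on the client you can pin or inspect the transport like this (socket.io v4; the URL is a placeholder):

import { io } from "socket.io-client";

// Default transports are ["polling", "websocket"]; listing only "websocket"
// skips the long-polling fallback entirely (the connection simply fails if a
// websocket cannot be established, e.g. through a misconfigured proxy).
const socket = io("https://example.test", { transports: ["websocket"] });

socket.on("connect", () => {
  // engine.io exposes the transport actually in use
  console.log("transport:", socket.io.engine.transport.name);
});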
You can check the websocket requests in the browser's developer tools, in the Network tab, by enabling the WS filter.
Here are some additional links for you to read more about:
https://socket.io/docs/v4/how-it-works/
https://socket.io/docs/v4/using-multiple-nodes/
https://socket.io/docs/v4/reverse-proxy/
https://ably.com/blog/websockets-vs-long-polling
I am working on NodeJS. My question is: if nodejs receives many requests, it processes them one after the other in a queue. But if it receives n requests, say 4 requests reaching nodejs at the same time without any gap in time, which one will nodejs pick first to serve? What are the criteria, and what is the reason for selecting one request over the others arriving at the same time?
Since all four requests arrive on the same physical internet connection, one of the requests' packets will get there before the others. As the packets converge on the last router before your server, one of them will get processed by the router slightly before the others, and that packet will arrive at your server first. That packet will then reach the TCP stack in the OS first, which will notify node.js about it first. Nodejs will start to process that first request. Since the main thread in nodejs is single threaded, if the request handler doesn't call something asynchronous, then it will send a response for the first request before it even starts processing the second request.
If the first request has non-blocking, asynchronous portions to its request handling code, then as soon as it makes an asynchronous call and returns control back to the nodejs event loop, then the 2nd request will get to start processing.
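Here's a minimal sketch of that interleaving with Node's built-in http module (illustrative only):

import http from "http";

const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

const server = http.createServer(async (req, res) => {
  if (req.url === "/slow") {
    await sleep(1000); // control returns to the event loop here
    res.end("slow done");
  } else {
    // a /fast request arriving during the await above finishes first
    res.end("fast done");
  }
});

server.listen(3000);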
But, if it receives n requests for example say 4 requests reached nodejs at same time without any gap in time, then which one will nodejs pick first to serve?
This is not possible. As the packets from each of the requests converge on the last router before your server, they will eventually get sequenced one after the other on the ethernet connection connected to your server. The ethernet connection doesn't send 4 requests in parallel. It sends packets one after the other.
So, your server will see one of the incoming packets before the others. Also, keep in mind an incoming http request is not just a single packet. It consists of establishing a TCP connection (with the back and forth that that entails) and then the client sends the actual http request over the TCP connection that has been established. If you're using https, there is even more involved in establishing the connection. So, the whole notion of four incoming connections arriving at exactly the same moment is not possible. Even if it were (imagine you had four network cards with four physical connections to the internet), the underlying operating system is going to end up servicing one of the incoming network cards before the others. Whether it's a hardware interrupt at the lowest level or a polling loop, one of the network cards is going to be found to have incoming data before the others.
What is the criteria and reason to select any request from many requests at same time?
It doesn't work that way. The OS doesn't suddenly realize it has four requests that arrived at exactly the same moment and then have to implement some algorithm to choose which request to serve first. Instead, some low-level hardware/software element (probably in an upstream router) will have forced the incoming packets into an order (either based on minute timing or just based on how its software works - like checking hardware portA, then portB, then portC, for example) and one will physically arrive before the others on your server. This is not something your server gets to decide.
To respond to an HTTP request, we can just return "content" in the handler method. But for some mission-critical use cases, I would like to make sure the HTTP 200 OK response was actually delivered. Any ideas?
The HTTP protocol doesn't work that way. If you need an acknowledgement then you need the client to send the acknowledgement to you.
Or you could look at implementing a bi-directional socket (one sample library is socket.io) where the client can send the ACK. If it is mission critical, then don't rely on plain http; use websockets.
You can also use AJAX callbacks to gather acknowledgements. One way of creating such a solution would be a UUID generated for every request and returned as part of a header:
$ curl -v http://domain/url
....
response:
X-ACK-Token: 89080-3e432423-234234-23-42323
and then client make a call again
$ curl http://domain/ack/89080-3e432423-234234-23-42323
So the server would know that the given response has been acknowledged by the client. But you cannot enforce an automatic ACK; it is still on the client to send it, and if they don't, you have no way of knowing.
PS: The UUID above is not an actual UUID, just a random value shared for the example.
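A hedged sketch of what that could look like on the server (Express; the route names and in-memory store are made up for illustration):

import express from "express";
import { randomUUID } from "crypto";

const app = express();
const unacked = new Map<string, { sentAt: number }>(); // token -> metadata

app.get("/url", (_req, res) => {
  const token = randomUUID();
  unacked.set(token, { sentAt: Date.now() });
  res.set("X-ACK-Token", token);
  res.send("content");
});

// The client calls this once it has fully received the response above.
app.get("/ack/:token", (req, res) => {
  const known = unacked.delete(req.params.token);
  res.sendStatus(known ? 204 : 404);
});

app.listen(3000);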
Take a look at Microsoft's asynchronous server socket.
An asynchronous server socket requires a method to begin accepting connection requests from the network, a callback method to handle the connection requests and begin receiving data from the network, and a callback method to end receiving the data (this is where your client could respond with the success or failure of the HTTP request that was made).
Example
It is not possible with HTTP. If for some reason you can't use sockets because your implementation requires HTTP (like an API), you must agree on a timeout strategy with your client.
It depends on how many cases you want to handle, but for example you could state something like this:
1. The client generates an internal identifier and sends the HTTP request including that "ClientID" (like a timestamp or a random number), either in the headers or as a body parameter.
2. The server responds 200 OK (or an error, it does not matter).
3. The client waits 60 seconds for the server's answer (you define your maximum timeout).
- If it receives the response, it handles it and finishes.
- If it does NOT receive the answer, it retries after the timeout, including the same "ClientID" generated in step 1.
4. The server detects that the "ClientID" was already received.
- Either it returns 409 Conflict, informing that it "already exists", and the client should know how to handle that.
- Or it just returns 200 OK, and the client never knows that the request already got through the first time.
Again, this depends a lot on your business / technical requirements, because you could even get two or more consecutive timeout loops to handle.
Hope you get the idea. A sketch of the deduplication step follows:
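A rough sketch of the ClientID deduplication, assuming Express and an in-memory store (the header name and route are illustrative):

import express from "express";

const app = express();
app.use(express.json());

const seen = new Map<string, unknown>(); // ClientID -> result of the first attempt

app.post("/messages", (req, res) => {
  const clientId = req.header("X-Client-ID");
  if (!clientId) {
    res.status(400).send("missing X-Client-ID");
    return;
  }
  if (seen.has(clientId)) {
    // Retry of a request we already processed: replay the original result
    // (alternatively, respond 409 Conflict and let the client handle it).
    res.status(200).json(seen.get(clientId));
    return;
  }
  const result = { stored: true }; // ...actually process the message here
  seen.set(clientId, result);
  res.status(200).json(result);
});

app.listen(3000);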
As #tarun-lalwani already wrote, the HTTP protocol is not designed for that. What you can do is have the app create a file, and after the 200 response your program checks the existence and the timestamp of the remote file. This has the implication that every 200 response requires another request to check the file.
I am creating a long-polling chat application in Node.js without using Socket.io, and scaling it using clusters.
I have to find a way to store all the long-polled HTTP request and response objects in such a way that they are available across all node clusters (so that when a message is received for a long-polled request, I can get that request and respond to it).
I have tried using redis; however, when I stringify the HTTP request and response objects, I get a "Cannot Stringify Cyclic Structure" error.
Maybe I am approaching it the wrong way. In that case, how do we generally implement long-polling across different clusters?
What you're asking seems to be a bit confused.
In a long-polling situation, a client makes an http request that is routed to a specific HTTP server. If no data to satisfy that request is immediately available, the request is then kept alive for some extended period of time and either it will eventually timeout and the client will then issue another long polling request or some data will become available and a response will be returned to the request.
As such, you do not make this work in clusters by trying to centrally save request and response objects. Those belong to a specific TCP connection between a specific server and a specific client. You can't save them and use them elsewhere and it also isn't something that helps any of this work with clustering either.
What I would think the clustering problem you have here is that when some data does become available for a specific client, you need to know which server that client has a long polling request that is currently live so you can instruct that specific server to return the data from that request.
The usual way that you do this is to have some sort of userID that represents each client. When any client connects with a long polling request, that connection is distributed by the cluster to one of your servers. The server that gets the request then writes to a central database (often redis) that userA is now connected to server12. Then, when some data becomes available for userA, any agent can look up that user in the redis store and see that the user is currently connected to server12, and can instruct server12 to send the data to userA using userA's current long polling connection. A sketch of that mapping follows:
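A minimal sketch with node-redis v4 (the key naming, TTL, and server id are my own choices for illustration):

import { createClient } from "redis";

const redis = createClient({ url: "redis://localhost:6379" });
// call `await redis.connect();` once at startup

const SERVER_ID = "server12"; // however you identify this instance

// When a long-polling request from a user lands on this server:
async function registerLongPoll(userId: string): Promise<void> {
  // TTL slightly above the long-poll timeout so stale entries expire on their own
  await redis.set(`longpoll:${userId}`, SERVER_ID, { EX: 120 });
}

// When data becomes available for a user, find the server holding their request:
async function findServerFor(userId: string): Promise<string | null> {
  return redis.get(`longpoll:${userId}`);
}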
This is just one strategy for dealing with clustering - there are many others such as sticky load balancing, algorithmic distribution, broadcast distribution, etc... You can see an answer that describes some of the various schemes here.
If you are sure you want to store all the request and responses, have a look at this question.
Serializing Cyclic objects
You can also try cycle.js.
However, I think you would only be interested in serializing a few elements from the request/response. An easier (and probably better) approach would be to just copy the required key/value pairs from the request/response object into a separate object and store that.
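For example (the field selection here is just illustrative): plain data survives JSON.stringify, while sockets and parser state do not.

import type { IncomingMessage } from "http";

// Copy only the plain, serializable parts of the request.
function snapshotRequest(req: IncomingMessage) {
  return {
    method: req.method,
    url: req.url,
    headers: req.headers,
    receivedAt: Date.now(),
  };
}

// JSON.stringify(snapshotRequest(req)) is safe to store in redis, unlike
// JSON.stringify(req), which throws on the cyclic socket references.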