Calculating the Ping of a WebSocket Connection? - node.js

small question. How can I calculate the ping of a WebSocket connection?
The server is set up using Node.js and node-websocket-server, if that matters at all.

There are a few ways. The one offered by Raynos is wrong, because client time and server time are different and you cannot compare them.
The solution of sending a timestamp is good, but it has one issue: if the server logic makes decisions and calculations based on ping, then sending a timestamp carries the risk that the client software or a MITM will modify it, feeding the server different results.
A much better way is to send the client a packet with a unique ID that is not an incrementing number but randomized, and have the server expect a "PONG" message with that ID from the client.
The size of the ID should be fixed; I recommend 32 bits (int).
That way the server sends a "PING" with a unique ID and stores the timestamp of the moment the message was sent, then waits until it receives a "PONG" response with the same ID from the client, and calculates the round-trip latency from the stored timestamp and the new one taken at the moment the PONG arrives.
Don't forget to implement a timeout, so that a lost PING/PONG packet does not stall the latency check.
WebSockets also have a special packet opcode called PING, but the example in the post above does not use this feature. Read the official document that describes this opcode; it might be helpful if you are implementing your own WebSocket protocol on the server side: https://www.rfc-editor.org/rfc/rfc6455#page-37
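A minimal sketch of this random-ID ping over the ws package (the JSON message shape and the 5-second timeout are assumptions, not part of the original answer):

```js
const crypto = require('crypto');

const pending = new Map(); // id -> { sentAt, timer }

// Send a PING with a random 32-bit ID and remember when it left.
function sendPing(ws) {
  const id = crypto.randomBytes(4).readUInt32BE(0); // randomized, not incrementing
  const timer = setTimeout(() => pending.delete(id), 5000); // give up on lost packets
  pending.set(id, { sentAt: Date.now(), timer });
  ws.send(JSON.stringify({ type: 'PING', id }));
}

// Match the PONG against the stored timestamp to get the round trip.
function onMessage(raw) {
  const msg = JSON.parse(raw);
  if (msg.type === 'PONG' && pending.has(msg.id)) {
    const { sentAt, timer } = pending.get(msg.id);
    clearTimeout(timer);
    pending.delete(msg.id);
    console.log('round-trip latency:', Date.now() - sentAt, 'ms');
  }
}
```

The client only has to echo a PONG with the same id back whenever it sees a PING.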

To calculate the latency you really should complete the round trip. You should have a ping message that carries a timestamp. When one side or the other receives a ping, it should change it to a pong (or gnip or whatever) but keep the original timestamp and send it back to the sender. Then the original sender can compare the timestamp to the current time to see what the round-trip latency is. If you need the one-way latency, divide by 2. The reason you need to do it this way is that, without some very sophisticated time-skew algorithms, the time on one host vs. another is not comparable at small time deltas like this.
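A minimal sketch of that echo-back scheme (message shape assumed); both sides can share the same handler:

```js
// A 'ping' is turned into a 'pong' that keeps the original timestamp;
// a returned 'pong' is measured against the current time.
function handleMessage(ws, raw) {
  const msg = JSON.parse(raw);
  if (msg.type === 'ping') {
    ws.send(JSON.stringify({ type: 'pong', ts: msg.ts })); // keep the original timestamp
  } else if (msg.type === 'pong') {
    const roundTrip = Date.now() - msg.ts;
    console.log('round trip:', roundTrip, 'ms; one-way approx', roundTrip / 2, 'ms');
  }
}

// The side that wants the measurement starts the exchange:
// ws.send(JSON.stringify({ type: 'ping', ts: Date.now() }));
```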

Websockets have a ping type message which the server can respond to with a pong type message. See this for more info about websockets.

You can send a request over the web socket with Date.now() as data and compare it to Date.now() on the server.
This gives you the time difference between sending the packet and receiving it plus any handling time on either end.
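A sketch of that one-way measurement (generic handler names assumed); note the caveat raised above that it is only meaningful if both clocks agree:

```js
// Client side: include the current time in the message.
ws.send(JSON.stringify({ type: 'timestamp', sentAt: Date.now() }));

// Server side: compare the client's clock against our own on receipt.
connection.on('message', (raw) => {
  const msg = JSON.parse(raw);
  if (msg.type === 'timestamp') {
    const oneWay = Date.now() - msg.sentAt; // includes any clock skew between the two hosts
    console.log('one-way latency (approx):', oneWay, 'ms');
  }
});
```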

Related

How to measure Websocket backpressure or network buffer from client

I am using the ws Node.js package to create a simple WebSocket client connection to a server that is sending hundreds of messages per second. Even with a simple onMessage handler that just console.logs incoming messages, the client cannot keep up. My understanding is that this is referred to as backpressure: incoming messages may start piling up in a network buffer on the client side, or the server may throttle the connection or disconnect altogether.
How can I monitor backpressure, or the network buffer, from the client side? I've found several articles speaking about this issue from the perspective of the server, but I have no control over the server and need to know just how slow my client is.
So you don't have control over the server and want to know how slow your client is (it seems you have already read about backpressure). Then I can only think of using a stress tool like Artillery.
Check this blog; it might help you set up a benchmarking scenario:
https://ma.ttias.be/benchmarking-websocket-server-performance-with-artillery/
Add timing metrics to your onMessage function to track how long it takes to process each message. You can also use RUM instrumentation from the APM providers: New Relic or AppDynamics as paid options, or the free tier of Google Analytics timing.
If you can, include a unique identifier in each message so you can correlate it between the client and the server.
Then, for a given window, you can correlate how long a message took to send from the server with how long it spent being processed by the client.
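A rough sketch of per-message timing with the ws client (the endpoint and message fields are assumptions):

```js
const { performance } = require('perf_hooks');
const WebSocket = require('ws');

const ws = new WebSocket('ws://example.com/feed'); // assumed endpoint

ws.on('message', (raw) => {
  const start = performance.now();
  const msg = JSON.parse(raw);

  // ... real message handling goes here ...

  const processingMs = performance.now() - start;
  // If the server stamps each message with an id (and ideally a sentAt time),
  // log it so client and server records can be correlated later.
  console.log(msg.id, 'processed in', processingMs.toFixed(2), 'ms');
});
```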
You can't get at the network socket buffer associated with your WebSocket traffic, since you're inside the browser sandbox. I checked the WebSocket APIs and there are no properties that expose receive buffer information.
If you don't have control over the server, you are limited. But you could try some client-side tricks to simulate throttling.
This heavily assumes you don't mind skipping messages.
One approach would be to enable the socket, start receiving events, and keep your own bounded in-memory queue/array. Once the queue is full, turn off the socket. Process enough of the queue, then enable the socket again; see the sketch below.
Disabling and re-enabling the socket has a high cost, and you lose events while it is off, but at least your client will not crash.
Once your client is no longer crashing, you can add some counters on timestamps and the queue size to determine the threshold at which the client starts to fall over.
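A rough sketch of that close-and-reconnect throttling with the ws package (URL, queue limits, and drain rate are all assumptions, and messages sent while the socket is closed are lost, as noted above):

```js
const WebSocket = require('ws');

const MAX_QUEUE = 1000; // stop receiving above this
const RESUME_AT = 100;  // reconnect once drained down to this
const queue = [];
let ws = null;

function handle(raw) {
  // real per-message processing goes here
}

function connect() {
  ws = new WebSocket('ws://example.com/feed'); // assumed endpoint
  ws.on('message', (raw) => {
    queue.push(raw);
    if (queue.length >= MAX_QUEUE) ws.close(); // shed load; events are skipped while closed
  });
}

function drain() {
  // Process a slice per tick so the event loop stays responsive.
  for (let i = 0; i < 50 && queue.length > 0; i++) handle(queue.shift());
  if (queue.length <= RESUME_AT && (ws === null || ws.readyState === WebSocket.CLOSED)) {
    connect(); // turn the socket back on once we've caught up
  }
  setTimeout(drain, 10);
}

connect();
drain();
```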

What is the right way to measure time between bot's message and user's answer in the bot framework?

I have a quiz bot where the person needs to answer within 10 seconds. I am using the Bot Framework, where I record a timestamp when the bot sends the message and another timestamp when the user's answer is received inside the dialog. However, I feel this approach is flawed, as it doesn't take network latency into account. If I am not mistaken, the timestamp taken while sending the message is the server's timestamp, and the timestamp taken while receiving the answer is also the time at which the server received it.
So, if I am not mistaken, the formula is: total time difference = server's receive timestamp - server's send timestamp = send delay + user delay + receive delay.
What is the right way for me to enforce a 10-second constraint on the user?
I would recommend keeping your approach; otherwise your quiz will be easily hackable.
Let me explain. If you somehow send a timestamp from the client side, the user will be able to edit that timestamp easily (even using the inspector tools in modern browsers) and send you a fake one, so he will be able to win easily.
You can also combine both approaches: send a timestamp from the client side and compare it with the timestamp at which the message was received on the server side. If the difference is small enough (suggesting it hasn't been tampered with), use the client-side timestamp; otherwise use the server-side timestamp and punish the user :)
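A minimal sketch of that combined check (the 2-second tolerance and the function shape are assumptions, not from the answer):

```js
const TOLERANCE_MS = 2000; // acceptable gap between client-reported time and server receive time (assumed)

function answeredWithinLimit(sentAtServer, clientAnsweredAt, receivedAtServer, limitMs = 10000) {
  // If the client-reported time roughly matches our own receive time, trust it
  // (it excludes the return-trip network delay); otherwise assume tampering or
  // clock skew and fall back to the server-side timestamps.
  const trustClient = Math.abs(receivedAtServer - clientAnsweredAt) <= TOLERANCE_MS;
  const answeredAt = trustClient ? clientAnsweredAt : receivedAtServer;
  return answeredAt - sentAtServer <= limitMs;
}
```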
If you do not want the send delay counted against the user, you could instead use the time at which the message is delivered to the user, and then calculate the difference between the delivery timestamp and the user's response.
Kindly follow the link below:
https://developers.facebook.com/docs/messenger-platform/webhook-reference/message-delivered

Socket.IO confirmed delivery

Before I dive into the code, can someone tell me if there is any documentation available for confirmed delivery in Socket.IO?
Here's what I've been able to glean so far:
A callback can be provided to be invoked when and if a message is acknowledged
There is a special mode "volatile" that does not guarantee delivery
There is a default mode that is not "volatile"
This leaves me with some questions:
If a message is not volatile, how is it handled? Will it be buffered indefinitely?
Is there any way to be notified if a message can't be delivered within a reasonable amount of time?
Is there any way to unbuffer a message if I want to give up?
I'm at a bit of a loss as to how Socket.IO can be used in a time sensitive application without falling back to volatile mode and using an external ACK layer that can provide failure events and some level of configurability. Or am I missing something?
TL;DR You can't have reliable confirmed delivery unless you're willing to wait until the universe dies.
The delivery confirmation you seek is related to the theoretical Two Generals Problem, which is also discussed in this SO answer.
TCP manages the reliability problem by guaranteeing delivery after infinite retries. We live in a finite universe, so the word "guarantee" is theoretically dubious :-)
Theory aside, consider this: engine.io, the underpinnings of socket.io 1.x, uses the following transports:
WebSocket
FlashSocket
XHR polling
JSONP polling
Each of those transports is based upon TCP, and TCP is reliable. So as long as connections stay connected and transports don't change, each individual socket.io message or event should be reliable. However, two things can happen on the fly:
engine.io can change transports
socket.io can reconnect in case the underlying transport disconnects
So what happens when a client or your server squirts off a few messages while the plumbing is being fiddled with like that? It doesn't say in either the engine.io protocol or the socket.io protocol (at versions 3 and 4, respectively, as of this writing).
As you suggest in your comments, there is some acknowledgement logic in the implementation. But even simple digital communication has nontrivial behavior, so I do not trust an unsupervised socket.io connection for reliable delivery in mission- or safety-critical operations. That won't change until reliable delivery is part of their protocol and their methods have been independently and formally verified.
You're welcome to adopt my policies:
Number my messages
Ask for a resend when in doubt
Do not mutate my state - client or server - unless I know I'm ready
In Short:
Guaranteed message delivery acknowledgement is proven impossible, but TCP guarantees delivery and order given "infinite" retries. I'm less confident about socket.io messages, but they're really powerful and easy to use so I just use them with care.
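For what it's worth, the acknowledgement logic mentioned above is exposed to application code through Socket.IO's ack callbacks; a rough sketch of numbered messages with a resend-when-in-doubt policy (the event name and 5-second retry interval are assumptions):

```js
let seq = 0;
const unacked = new Map(); // seq -> retry timer

function sendReliable(socket, payload) {
  const id = ++seq; // number my messages
  const attempt = () => {
    socket.emit('app-message', { id, payload }, () => {
      // The receiver invoked the ack callback: stop resending.
      clearTimeout(unacked.get(id));
      unacked.delete(id);
    });
    unacked.set(id, setTimeout(attempt, 5000)); // when in doubt, send it again
  };
  attempt();
}
```

The receiving side has to invoke the ack function it gets as the last argument of its handler, and should deduplicate by id, since a resend can cross paths with a late ack.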
I ensured delivery using a few different strategies:
I send data over the socket, including a nonce in each message to prevent duplicate-message errors.
The other party sends a confirmation of the received message, or I resend after x seconds.
The client also makes a REST call every 30 seconds to request all new messages sent by the server, to catch anything dropped in transport.
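A rough sketch of that periodic REST catch-up on the client (the endpoint, the nonce field, and fetch-based polling are assumptions):

```js
let lastSeenNonce = null; // assumed: the server can return everything newer than this

function handleMessage(msg) {
  // shared handler used by both the socket path and the catch-up path
}

// Every 30 seconds, ask over plain HTTP for anything the socket may have dropped.
setInterval(async () => {
  const res = await fetch(`https://example.com/api/messages?since=${lastSeenNonce ?? ''}`); // assumed endpoint
  const missed = await res.json();
  for (const msg of missed) {
    handleMessage(msg);
    lastSeenNonce = msg.nonce;
  }
}, 30000);
```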

a UDP socket based rateless file transmission

I'm new to socket programming and I need to implement a UDP based rateless file transmission system to verify a scheme in my research. Here is what I need to do:
I want a server S to send a file to a group of peers A, B, C, etc. The file is divided into a number of packets. At the beginning, peers send a Request message to the server to initialize transmission. Whenever S receives a request from a client, it ratelessly transmits encoded packets (how to encode is part of my design; the encoding itself has erasure-correction capability, which is why I can transmit ratelessly via UDP) to that client. The client keeps collecting packets and trying to decode them. When it finally decodes all packets and reconstructs the file successfully, it sends back a Stop message to the server, and S stops transmitting to this client.
Peers request the file asynchronously (they may request the file at different time). And the server will have to be able to concurrently serve multiple peers. The encoded packets for different clients are different (they are all encoded from the same set source packets, though).
Here is what I'm thinking about for the implementation. I don't have much experience with Unix network programming though, so I'm wondering if you can help me assess it and see whether it is feasible or efficient.
I'm gonna implement the server as a concurrent UDP server with two socket ports(similar to TFTP according to the UNP book). One is to receive controlling messages, as in my context it is for the Request and Stop messages. The server will maintain a flag (=1 initially) for each request. When it receives a Stop message from the client, the flag will be set to 0.
When the server receives a request, it will fork() a new process that uses the second socket and port to send encoded packets to the client. The server keeps sending packets to the client as long as the flag is 1. When it turns to 0, the sending ends.
The client program is easy to do. Just send a Request, recvfrom() the server, progressively decode the file and send a Stop message in the end.
Is this design workable? The main concerns I have are: (1), is that efficient by forking multiple processes? Or should I use threads? (2), If I have to use multiple processes, how can the flag bit be known by the child process? Thanks for your comments.
Using UDP for file transfer is not the best idea. There is no way for the server or client to know whether a packet has been lost, so you would only find out during reconstruction, assuming you have some mechanism (like a counter) to detect lost packets. It would then be hard to request just the packets that got lost, and in the end you would have code that does what TCP sockets already do. So I suggest starting with TCP.
A typical server design involves a listener thread that spawns a worker thread whenever there is a new client request. That new thread handles communication with that particular client and then ends. You should keep a limit on the number of clients (threads) that are served simultaneously. Do not spawn a new process for each client: that is inefficient and not needed, as it gets you nothing you can't achieve with threads.
Thread programming requires carefulness, so do not cut corners. Otherwise you will have a hard time finding and diagnosing problems.
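If you prototype this in Node.js rather than C, the per-client concurrency comes from the event loop instead of threads; a rough sketch of the listener-plus-per-client structure with the net module (port and client limit are assumptions):

```js
const net = require('net');

const MAX_CLIENTS = 64; // bound on simultaneously served clients (assumed)
let active = 0;

const server = net.createServer((socket) => {
  if (active >= MAX_CLIENTS) {
    socket.end(); // refuse clients beyond the limit
    return;
  }
  active++;
  // Each connection is handled independently here; no fork() or threads needed.
  socket.on('data', (chunk) => {
    // parse Request / Stop messages and stream file blocks back on this socket
  });
  socket.on('close', () => { active--; });
});

server.listen(9000); // assumed port
```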
File transfer with UDP will be fun :(
Your struct/class for each message should contain a sequence number and a checksum. This should enable each client to detect, and ask for the retransmission of, any missing blocks at the end of the transfer.
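A rough sketch of such a block layout in Node.js (the 4-byte fields and the truncated-hash checksum are illustrative choices, not from the answer):

```js
const crypto = require('crypto');

// Layout: [ 4-byte sequence number | 4-byte checksum | payload ]
function encodeBlock(seq, payload) {
  const header = Buffer.alloc(8);
  header.writeUInt32BE(seq, 0);
  // Truncated SHA-1 stands in for a proper CRC here.
  header.writeUInt32BE(crypto.createHash('sha1').update(payload).digest().readUInt32BE(0), 4);
  return Buffer.concat([header, payload]);
}

function decodeBlock(datagram) {
  const seq = datagram.readUInt32BE(0);
  const checksum = datagram.readUInt32BE(4);
  const payload = datagram.subarray(8);
  const ok = crypto.createHash('sha1').update(payload).digest().readUInt32BE(0) === checksum;
  return { seq, payload, ok }; // a gap in seq or ok === false means "ask for a retransmit"
}
```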
Where UDP might be a huge winner is on a local LAN. You could UDP-broadcast the entire file to all clients at once and then, at the end, ask each client in turn which blocks it has missing and send just those. I wish Kaspersky etc. would use such a scheme for updating all my local boxes.
I have used such a broadcast scheme on a CANBUS network where there are dozens of microControllers that need new images downloaded. Software upgrades take minutes instead of hours.

Broadcasting Messages at High Frequency. Using HTTP POST or something else?

We're looking at speccing out a system which broadcasts small amounts of frequently changing data (using JSON or XML or something) to multiple recipients at a reasonably high frequency (our updates will be 1000s per second).
We were initially thinking of using HTTP POST to broadcast the data to each endpoint, maybe once every few seconds (the clients will vary, as they're other people's webapps), but we're now wondering if there's a better way to hold up to the load/frequency we're hoping for. I imagine we'd need to version/timestamp the messages in some way at the very least.
We're using RabbitMQ for preparing all the things ready for sending and to choose what needs to go where (from a Django app, if that matters), but we can't get all of the endpoints to use a MQ.
The HTTP POST thing just doesn't seem quite right. What else should we be looking in to? Is this where things like node or socket.io or some of the new real time frameworks fit in? We're happy to find the right expertise to help with this, just need steering the correct direction.
Thanks!
You don't want to do thousands of POSTs per second to multiple clients. You're going to introduce the HTTP overhead on your end pushing it out, and for all you know, you might end up flooding the server on the other end with POSTs that just swamp it.
Option 1: For clients that can't or won't read a queue, POSTs could work, but to avoid killing the server and all the HTTP overhead, could you bundle updates? Once every minute or two, take all the aggregated data and POST it to the client. This way, you don't have 60+ POST requests going to one client every minute or two for time and eternity. It'll help save on bandwidth as well, since you send the header info once with a larger body instead of sending full headers with every little piece of data.
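A rough sketch of that bundling from a Node.js sender (endpoint, interval, and payload shape are assumptions):

```js
const pending = []; // updates accumulated since the last POST

function queueUpdate(update) {
  pending.push({ ...update, ts: Date.now() }); // timestamp each update for the receiver
}

// One bundled POST per client every 60 seconds instead of thousands of tiny ones.
setInterval(async () => {
  if (pending.length === 0) return;
  const batch = pending.splice(0, pending.length);
  await fetch('https://client.example.com/updates', { // assumed client endpoint
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ count: batch.length, updates: batch }),
  });
}, 60000);
```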
Option 2: Have you thought about using a good 'ole socket connection? Either you open a socket to the client, or vice versa, and push the data over that? That avoids the overhead of HTTP and lets the client read at the rate data arrives. If the client no longer wants to receive data, they can just close the connection. It's on the arcane side, but it'd avoid completely killing the target server.
If you can get clients to read a MQ, set up a group just for them and make your life easier so you only have to deal with those that can't or won't read the queue instead of trying for a one size fits all solution.
