Does the Node.js WebSockets (ws) module implement backpressure?

I'm implementing a WebSockets server on Node.js using the ws module. The server should send an update to all clients once per minute. I have this implemented already, but I have some concerns about how it behaves when client connections stall.
I'm concerned about what happens when a connection to the client becomes inactive, for example due to the network connection breaking in a way that doesn't send a TCP RST or FIN.
I'm somewhat surprised that the send() method is not called with the await keyword in an async method. Does the send() method just queue all data to be sent? What if the socket buffers become full -- can send() block in a way that starves clients other than the blocked one?
If send() never blocks, what happens if the data is queued and queued and queued...? Can it use an ever-increasing unbounded amount of memory?
Ideally, I would like to omit sending a once-per-minute update if the last update hasn't been fully sent. Can I achieve this with the ws module?

I'm concerned about what happens when a connection to the client becomes inactive, for example due to the network connection breaking in a way that doesn't send a TCP RST or FIN.
If the connection is lost in this way (perhaps by the client system being switched off or physically disconnected) then TCP at the server will detect the broken connection because it will receive no acknowledgements of sent data. It can take a couple of minutes for TCP to give up, but in this case that doesn't sound like a big problem.
The worst-case scenario is when the client system remains connected but the client process ceases to read data from the connection. In that case sent data will accumulate at the client until the client's socket receive buffer fills, and then sent data will accumulate at the server -- first in the in-kernel socket send buffer, and then in server process memory.
I'm somewhat surprised that the send() method is not called with the await keyword in an async method.
ws predates async/await and promises by years. I imagine that the API will eventually be retrofitted, but it hasn't happened yet.
Does the send() method just queue all data to be sent? What if the socket buffers become full -- can send() block in a way that starves clients other than the blocked one?
WebSocket.send ends up calling the built-in Net module's Socket.write. (See the sendFrame function at the bottom of https://github.com/websockets/ws/blob/master/lib/sender.js for that call, and see https://nodejs.org/docs/latest-v8.x/api/net.html#net_class_net_socket for documentation of the Socket class.)
Socket.write will buffer data in the user process if the kernel cannot immediately accept the data. Data is buffered separately per Socket, so typically this buffering will not affect transmission on other Sockets connected to other clients. However, there's no bound on the amount of data one Socket will buffer. In the extreme case one Socket's buffered data could consume all of the server process's memory, and the resulting server crash would interfere with data delivery to all clients.
There are several ways to avoid this problem. Two easy methods that spring to mind are:
provide a completion callback argument to the send call. That callback will be passed on to the Socket.write call, which will fire the callback when all of that write's data has been written into the kernel. If your server refrains from sending more data to this client until the callback fires, the amount of data buffered in user space for that connection will be limited to something close to the size of the most recent send. (It won't be precisely that size because the buffered data will include WebSocket framing, plus SSL framing and padding if your connection is encrypted, on top of the original data passed to send.) Or
examine the bufferSize property of the connection's Socket before preparing to send data on that connection. bufferSize indicates the amount of data that is currently buffered in user space for that Socket. If it's non-zero, skip the send for that client. (Both approaches are sketched below.)
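As a concrete illustration of both approaches, here is a minimal sketch using the ws API; the port, payload, and once-per-minute interval are made up for the example and are not part of the original question's code:

const WebSocket = require('ws');

const wss = new WebSocket.Server({ port: 8080 });

setInterval(() => {
  const update = JSON.stringify({ ts: Date.now() });

  for (const client of wss.clients) {
    if (client.readyState !== WebSocket.OPEN) continue;

    // Approach 2: skip this client if the previous update has not yet been
    // handed to the kernel (bufferedAmount covers ws's internal queue plus
    // the underlying Socket's bufferSize).
    if (client.bufferedAmount > 0) continue;

    // Approach 1: the completion callback fires once this send's data has
    // been written to the kernel (or an error occurred).
    client.send(update, (err) => {
      if (err) client.terminate();
    });
  }
}, 60 * 1000);

With either check in place, a stalled client can hold at most roughly one update's worth of data in user-space memory instead of growing without bound.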

Related

TCP sockets connection becomes unreliable with SO_REUSEADDR

I have an app where a single client talks to a single server. Normally, the client does a single connect, and then calls send repeatedly, and there's no problem.
However, I need to do a version where the client sets up a connection for each individual send (a bit like HTTP with and without keep-alive). In this version, the client calls socket, connect, send once, and then close.
The problem with this is that I very quickly run out of ephemeral client ports, and the connect fails. To get around this I call setsockopt with SO_REUSEADDR, and then bind to port 0, before calling connect (see here, for example).
This works, except that the TCP connection is no longer reliable. I get occasional incorrect data, presumably because there's still data around when the TCP connection is closed.
Is there any way to make this reliable (and fast)? shutdown before close doesn't help. Maybe I can get select to tell me if the socket is ready for output, but that seems like overkill.
Do you have to use TCP? If so, you will probably have to maintain an open connection and route your messages over that one connection.
There is SCTP, which may be a good fit for your use case - a reliable datagram protocol:
Like TCP, SCTP provides reliable, connection oriented data delivery with congestion control. Unlike TCP, SCTP also provides message boundary preservation, ordered and unordered message delivery, multi-streaming and multi-homing. Detection of data corruption, loss of data and duplication of data is achieved by using checksums and sequence numbers. A selective retransmission mechanism is applied to correct loss or corruption of data.
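If you do have to stay with TCP, the usual pattern is one long-lived connection with simple length-prefixed message framing. A rough Node.js sketch of the idea (the host, port, and 4-byte prefix are illustrative assumptions, not details from the question):

const net = require('net');

const conn = net.connect(3000, 'example.com');

function sendMessage(payload) {
  const body = Buffer.from(payload);
  const header = Buffer.alloc(4);
  header.writeUInt32BE(body.length, 0);
  // Every message reuses the same connection, so no ephemeral port is
  // consumed per message and TCP's ordering/reliability is preserved.
  conn.write(Buffer.concat([header, body]));
}

sendMessage('first message');
sendMessage('second message');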

How does one handle slow client connections in node.js?

Node.js is single threaded. If a slow client is making a request, I imagine it could block the thread until it completes. Does that make sense?
How does one handle slow connections? Does it make sense to just terminate a connection if it takes too long? How does one determine this? How does one measure how long the request is taking, and terminate it if it is taking too long? I'm not referring to the duration it takes to send back a response. I'm just referring to the time it takes for node to receive all the data required to process a request. Is this a legitimate scenario?
I imagine there must be some way to do this, otherwise it would be really easy to DOS attack a node.js server...
EDIT: In a post request, the data comes in, in chunks. So what if it just comes in slowly? I'm not sure how to simulate this. But if this is a problem in node, it could equally be a problem in PHP etc, because you would just need to spawn many connections, all of which send data very slowly, to attack a server.
It doesn't matter if client request data comes in slowly. I/O in node is asynchronous and non-blocking. So if a chunk of data isn't available on a socket for a long time, node can do other things in the meantime, such as receive chunks of data from other sockets.
You can set an inactivity timeout that fires when no data is seen on the socket for the desired length of time. For HTTP, you can set a global request timeout via server.setTimeout() that can automatically close the underlying socket or if you pass in a callback, you can handle the timeout however you want. For TCP, there is socket.setTimeout().
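A hedged sketch of both hooks; the 30-second and 10-second values below are arbitrary examples, not recommendations:

const http = require('http');
const net = require('net');

// HTTP: destroy sockets that show no activity for 30 seconds.
const httpServer = http.createServer((req, res) => res.end('ok'));
httpServer.setTimeout(30 * 1000, (socket) => {
  socket.destroy(); // or log it, send a 408 response, etc.
});
httpServer.listen(8080);

// Raw TCP: give each socket a 10-second inactivity timeout.
const tcpServer = net.createServer((socket) => {
  socket.setTimeout(10 * 1000);
  socket.on('timeout', () => socket.destroy());
});
tcpServer.listen(9000);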

How does an asynchronous socket server work?

I should state that I'm not asking about specific implementation details (yet), but just a general overview of what's going on. I understand the basic concept behind a socket, and need clarification on the process as a whole. My (probably very wrong) understanding is currently this:
A socket is constantly listening for clients that want to connect (in its own thread). When a connection occurs, an event is raised that spawns another thread to perform the connection process. During the connection process the client is assigned its own socket in which to communicate with the server. The server then waits for data from the client and when data arrives an event is raised which spawns a thread to read the data from a stream into a buffer.
My questions are:
How off is my understanding?
Does each client socket require its own thread to listen for data on?
How is data routed to the correct client socket? Is this something taken care of by the guts of TCP/UDP/kernel?
In this threaded environment, what kind of data is typically being shared, and what are the points of contention?
Any clarifications and additional explanation would be greatly appreciated.
EDIT:
Regarding the question about what data is typically shared and points of contention, I realize this is more of an implementation detail than it is a question regarding general process of accepting connections and sending/receiving data. I had looked at a couple implementations (SuperSocket and Kayak) and noticed some synchronization for things like session cache and reusable buffer pools. Feel free to ignore this question. I've appreciated all your feedback.
One thread per connection is bad design (not scalable, overly complex) but unfortunately way too common.
A socket server works more or less like this:
A listening socket is set up to accept connections, and added to a socket set
The socket set is checked for events
If the listening socket has pending connections, new sockets are created by accepting the connections, and then added to the socket set
If a connected socket has events, the relevant IO functions are called
The socket set is checked for events again
This happens in one thread; you can easily handle thousands of connected sockets in a single thread, and there are few valid reasons for making this more complex by introducing threads.
while running
  select on socketset
  for each socket with events
    if socket is listener
      accept new connected socket
      add new socket to socketset
    else if socket is connection
      if event is readable
        read data
        process data
      else if event is writable
        write queued data
      else if event is closed connection
        remove socket from socketset
      end
    end
  done
done
The IP stack takes care of all the details of which packets go to which "socket" in which order. Seen from the application's point of view, a socket represents a reliable ordered byte stream (TCP) or an unreliable unordered sequence of packets (UDP).
EDIT: In response to updated question.
I don't know either of the libraries you mention, but on the concepts you mention:
A session cache typically keeps data associated with a client, and can reuse this data for multiple connections. This makes sense when your application logic requires state information, but it's a layer higher than the actual networking end. In the above sample, the session cache would be used by the "process data" part.
Buffer pools are also an easy and often effective optimization for a high-traffic server. The concept is very easy to implement: instead of allocating/deallocating space for storing the data you read/write, you fetch a preallocated buffer from a pool, use it, then return it to the pool. This avoids the (sometimes relatively expensive) backend allocation/deallocation mechanisms. This is not directly related to networking; you can just as well use buffer pools for e.g. something that reads chunks of files and processes them.
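As an illustration only (this is not code from SuperSocket or Kayak), a buffer pool can be as small as the following Node.js sketch; the pool limit and buffer size are arbitrary:

const POOL_LIMIT = 64;       // maximum number of buffers kept for reuse
const BUF_SIZE = 64 * 1024;  // size of each pooled buffer

const pool = [];

function acquire() {
  // Reuse a pooled buffer if one is available, otherwise allocate a new one.
  return pool.pop() || Buffer.allocUnsafe(BUF_SIZE);
}

function release(buf) {
  // Cap the pool so a traffic burst doesn't pin memory forever.
  if (pool.length < POOL_LIMIT) pool.push(buf);
}

// Usage: const buf = acquire(); /* read into buf and process it */ release(buf);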
How off is my understanding?
Pretty far.
Does each client socket require its own thread to listen for data on?
No.
How is data routed to the correct client socket? Is this something taken care of by the guts of TCP/UDP/kernel?
TCP/IP is a number of layers of protocol. There's no single "kernel" to it. It's a collection of pieces, each with a separate API to the other pieces.
The IP address is handled in one place.
The port # is handled in another place.
The IP addresses are matched up with MAC addresses to identify a particular host. The port # is what ties a TCP (or UDP) socket to a particular piece of application software.
In this threaded environment, what kind of data is typically being shared, and what are the points of contention?
What threaded environment?
Data sharing? What?
Contention? The physical channel is the number one point of contention. (Ethernet, for example depends on collision-detection.) After that, well, every part of the computer system is a scarce resource shared by multiple applications and is a point of contention.

Reusing a port number in UDP

In ASIO, is it possible to create another socket that has the same source port as an existing socket?
My UDP server application is calling receive_from using port 3000. It passes the packet off to a worker thread which will send the response (currently using a dynamic source port).
The socket in the other thread is created like this:
udp::socket sock2(io_service, udp::endpoint(udp::v4(), 0));
And responds to the original request using the sender_endpoint saved with the original packet.
What I'd like to be able to do is respond to the client using the same source port as the server is listening on. But I can't see how that can be done. I get an exception if I try that saying address in use. Is it possible to do what I'm asking? The reason I want that is if I use dynamic ports, it means the clients need to add special firewall rules in windows to allow the reply packets to be read. I've found that if the source port is the same in the reply, windows firewall will allow it to pass back in.
The exception tells you how it is: you can't create two live sockets with the same source port. I don't know ASIO, but you should be able to create the socket before spinning off the thread, keep a reference to the socket and the thread for later use, and once the data-sending thread is idle, join back to it and send any other data.
EDIT: with a little bit of effort, you can also make a socket for which you don't have to wait until the entire data from one thread has been sent: have a worker thread owning the socket listen on a queue for chunks of data (ideally exactly the size of the payload you intend to send) and send arbitrary chunks of payload to this queue, from multiple threads.
You should be able to use the SO_REUSEADDR socket option to bind multiple sockets to the same address. But having said that, you don't want to do this, because it's not specified which socket will receive incoming data on that port (you would have to check all sockets for incoming data).
The better option is just to use the same socket to send replies - this can safely be done from multiple threads without any additional synchronisation (as you are using UDP).
Send the reply on the same socket that you received the client's request on, instead of creating a new one.
But make sure you don't send on the same socket from both threads simultaneously.
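The same idea expressed in Node.js rather than ASIO, as a rough sketch (port 3000 mirrors the question; the ack payload is made up): because the reply goes out through the socket the request arrived on, its source port is automatically the server's listening port.

const dgram = require('dgram');

const server = dgram.createSocket('udp4');

server.on('message', (msg, rinfo) => {
  // rinfo carries the client's address and port; sending the reply through
  // `server` gives it source port 3000 without creating a second socket.
  server.send(Buffer.from('ack: ' + msg), rinfo.port, rinfo.address);
});

server.bind(3000);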

How does buffering work with sockets on Linux?

How does buffering work with sockets on Linux?
i.e. if the server does not read the socket and the client keeps sending data.
So what will happen? How big is the socket's buffer? And will the client know so that it will stop sending?
For a UDP socket the client will never know -- the server side will just start dropping packets after the receive buffer is filled.
TCP, on the other hand, implements flow control. The server's kernel will gradually reduce the window, so the client will be able to send less and less data. At some point the window will go down to zero. At that point the client fills up its send buffer, and send(2) blocks (or fails with EAGAIN/EWOULDBLOCK if the socket is non-blocking).
TCP sockets use buffering in the protocol stack. The stack itself implements flow control so that if the server's buffer is full, it will stop the client stack from sending more data. Your code will see this as a blocked call to send(). The buffer size can vary widely from a few kB to several MB.
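The same behaviour seen from Node.js, as a hedged sketch (host, port, and chunk size are placeholders): socket.write() starts returning false once the user-space and kernel buffers are full, and the 'drain' event fires when it is safe to write again.

const net = require('net');

const socket = net.connect(3000, 'example.com');
const chunk = Buffer.alloc(64 * 1024);

function pump() {
  // Keep writing until backpressure is signalled by a false return value.
  while (socket.write(chunk)) {}
  // Buffers are full; resume once the peer has acknowledged enough data
  // for the kernel to accept more.
  socket.once('drain', pump);
}

socket.on('connect', pump);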
I'm assuming that you're using send() and recv() for client and server communication.
So, send() will return the number of bytes that have been sent out. This doesn't necessarily equal the number of bytes you wanted to send out, so it's up to you to realise this and send the rest.
Now, recv() returns the number of bytes read into the buffer. So if recv() returns 0, the other side has closed the connection.
