I'm developing a server application and have the following problem with sending data back to a client that suddenly terminates the connection.
When I call send() on a blocking socket with a write timeout set via setsockopt(SO_SNDTIMEO) and the client disconnects during sending (i.e. a few bytes are sent, then the client properly terminates the TCP connection, as can be seen in Wireshark), send() still blocks until the send timeout elapses. The following call to send() returns an error, as expected.
I would expect the TCP termination (FIN/ACK) to cause the blocking send() to return immediately, not after the timeout.
Has anyone ever seen such behaviour? Is it normal?
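For context, a minimal sketch of the setup being described, assuming a connected blocking socket and a 5-second timeout (both illustrative):

    #include <stdio.h>
    #include <sys/socket.h>
    #include <sys/time.h>

    /* Set a 5-second write timeout on an already-connected blocking socket. */
    static int set_send_timeout(int sock)
    {
        struct timeval tv = { .tv_sec = 5, .tv_usec = 0 };
        return setsockopt(sock, SOL_SOCKET, SO_SNDTIMEO, &tv, sizeof(tv));
    }

    static void send_reply(int sock, const char *buf, size_t len)
    {
        /* Even if the peer has already sent its FIN, this send() can block
           for the full 5 seconds; only the next send() fails immediately. */
        if (send(sock, buf, len, 0) < 0)
            perror("send");
    }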
No, the FIN sent by the client does not unblock the send() at the server. When the client calls close(), a FIN is sent to the server and the connection is closed in the direction from client to server. The direction from server to client is still open until the server calls close() and a FIN is sent to the client.
When the client sends FIN and the server sends ACK back, the connection on the server side is in the CLOSE_WAIT state and the connection on the client side is in the FIN_WAIT_2 state. The server may still send data and the client may still receive data until the server closes the connection.
A connection closed by the peer cannot be detected through send(). Only recv() detects it.
If your code must react immediately when the client closes the connection, it must call poll() or select() and use non-blocking send() and recv() calls.
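As an illustration, a minimal sketch of such a check, assuming a connected socket (the function name and error handling are mine, not from the question):

    #include <poll.h>
    #include <sys/socket.h>

    /* Returns 1 if the peer has closed the connection, 0 otherwise.
       A FIN from the peer makes the socket readable; recv() then returns 0. */
    static int peer_closed(int sock)
    {
        struct pollfd pfd = { .fd = sock, .events = POLLIN };
        char c;

        if (poll(&pfd, 1, 0) > 0 && (pfd.revents & (POLLIN | POLLHUP))) {
            /* MSG_PEEK leaves any real data in the buffer for the reader. */
            if (recv(sock, &c, 1, MSG_PEEK | MSG_DONTWAIT) == 0)
                return 1;   /* orderly shutdown: the peer sent FIN */
        }
        return 0;
    }

Calling this before each send() lets the server bail out promptly instead of blocking until the write timeout expires.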
I used SSE for push notifications, but I can't get an error or a close event on the server when the client's Wi-Fi/mobile data drops unexpectedly, without the connection being closed the right way.
The server keeps seeing the client as online for about 15 minutes before getting a connection-closed message.
I used a regular implementation of SSE in Node.js and Express.
Is there any way to check through response.write() whether the message was delivered to the user or not?
To get a socket error you need the server to attempt to write data; there is no other way to detect that the connection has dropped except to try using it.
As described in chapter 5 of Data Push Apps with HTML5 SSE, what I do is: a) have the server send out a keep-alive message every e.g. 30 seconds (I normally have the message just be the current datestamp, but it could be anything); b) have the client disconnect and reconnect if it hasn't received any messages for 45 seconds.
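The answer above is about Node, but the underlying rule is plain TCP, so here is a hedged C sketch of the same heartbeat idea (interval, message format and names are illustrative):

    #include <stdio.h>
    #include <sys/socket.h>
    #include <time.h>
    #include <unistd.h>

    /* Write a timestamp heartbeat every 30 s; a dead connection eventually
       surfaces as a send() error (e.g. EPIPE or ECONNRESET) on one of these
       writes. MSG_NOSIGNAL avoids being killed by SIGPIPE on Linux. */
    static void heartbeat_loop(int sock)
    {
        char msg[64];

        for (;;) {
            int len = snprintf(msg, sizeof(msg), "data: %ld\n\n", (long)time(NULL));
            if (send(sock, msg, (size_t)len, MSG_NOSIGNAL) < 0) {
                perror("client gone");
                break;
            }
            sleep(30);
        }
    }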
I have two browser socket.io clients, Client A and Client B, and they are connected to Room One.
The following behavior is happening on my computer (Node.js 5 and socket.io 1.4.5 installed):
Suppose both clients are connected to Room One. When Client A is disconnected (loses its connection) and reconnects after the timeout, and Client B does not emit a message to Room One in the meantime, then the original socket of Client A is closed with reason "ping timeout". That's good for me; I need this behavior.
On the other side:
When Client A is disconnected (loses its connection) and reconnects within the timeout window, and Client B emits a message to Room One during that disconnected time, then the original socket of Client A is closed immediately after the reconnect with reason "transport close", not "ping timeout".
It looks like there is some behavior that singles out sockets that were disconnected within the timeout window whenever some other online socket emits a message to the same room. When the emitted message cannot be delivered to the disconnected socket (on the client side), that socket is apparently marked for immediate closing once the client reconnects within the timeout window.
I noticed a further issue. When a mobile client loses its connection and somebody emits to the room the mobile client was connected to, the mobile socket is disconnected immediately, without a timeout, with reason "transport close" on Linux.
The situation above is different on macOS: the socket is disconnected either when the timeout elapses, or immediately if the mobile client has reconnected.
Can anybody help explain why these behaviors differ on the same version of Node (6.2) and the same version of socket.io (1.4.6)?
I have a Node.js server that does asynchronous processing after receiving data from the client through a WebSocket. When it's finished processing, it sends data back to the client.
I've run into a case where the connection is closed abnormally while the server is still processing data. There's no way to interrupt the processing, and if the server tries sending through a closed socket, it crashes.
Worst case, I wrap the send() calls in a check that confirms the socket is open, but this feels dirty. Does the fact that it crashes indicate a bug in the Node module I'm using (ws)? Am I misunderstanding the WebSocket spec somehow?
So on Linux, shutdown() can take the parameter SHUT_RD, SHUT_WR or SHUT_RDWR to shut down only part of the communication channel. But in terms of the TCP segments sent to the peer, how does it work?
In the TCP state machine, closing works as a 4-way handshake:
(1)              (2)
 FIN ---------->
 <---------- ACK
 <---------- FIN
 ACK ---------->
So what messages does it send when I do a shutdown(sock, SHUT_RD) or a shutdown(sock, SHUT_WR)?
shutdown(sd, SHUT_WR) sends a FIN, which the peer responds to with an ACK. Any further attempt to write to the socket will incur an error. However, the peer can still continue to send data.
shutdown(sd, SHUT_RD) sends nothing on the network: it just conditions the local API to return EOS for any subsequent reads on the socket. The behaviour when receiving data on a socket that has been shut down for read is system-dependent: Unix will ACK it and throw it away; Linux will ACK it and buffer it, which will eventually stall the sender; Windows will issue an RST, which the sender sees as 'connection reset by peer'.
The FIN packets don't have to be symmetric: each end sends its FIN when its local writer has closed the socket.
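To make the SHUT_WR case concrete, here is a small sketch (a hypothetical client that half-closes after writing its request; error handling trimmed):

    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Send a request, half-close the write side (this is what emits the FIN),
       then keep reading the peer's response until it sends its own FIN. */
    static void request_and_drain(int sock, const char *req, size_t len)
    {
        char buf[4096];
        ssize_t n;

        send(sock, req, len, 0);
        shutdown(sock, SHUT_WR);      /* FIN goes out; reads still work */

        while ((n = recv(sock, buf, sizeof(buf), 0)) > 0)
            fwrite(buf, 1, (size_t)n, stdout);

        close(sock);                  /* recv() == 0 means the peer's FIN arrived */
    }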
shutdown() of the write end sends a FIN, and close() closes the socket entirely, so anything sent to the closed socket afterwards is answered with a RST packet.
I met with an interesting problem where the sender did:
send packet A
send packet B
shutdown of the write end (FIN)
shutdown of the read end
close the socket
but the receiver received the packets out of order:
packet A
the packet containing the FIN
packet B
The ACK from the receiver then caused a connection reset.
I have a minimum of 3 TCP clients, each with its own thread. I'm sending out messages and waiting for the answers, but sometimes I have to wait to receive the responses from all clients, depending on what kind of message the server sent out. I've already managed to send messages to the clients and receive the responses, but so far I haven't managed the part where I wait for the other clients' responses.
As you didn't mention your environment/language, I assume C#/.NET 4.
You need a mechanism for each client to signal the arrival of a response. This is usually done with AutoResetEvents: each client sends its response back to the server. The server can determine from the response (or any other property, e.g. the connection) which client has sent it. Then it sets the appropriate AutoResetEvent.
The thread that originally initiated sending the message can afterwards wait for all AutoResetEvents to be set.
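Since the rest of this thread works with the C socket API, here is the same signal-and-wait pattern sketched in C, with POSIX semaphores standing in for the AutoResetEvents (all names are illustrative):

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define NCLIENTS 3

    static sem_t responded[NCLIENTS];   /* one "event" per client */

    /* Each client thread posts its semaphore once its response has arrived. */
    static void *client_thread(void *arg)
    {
        int id = (int)(long)arg;
        /* ... receive the response from client `id` here ... */
        sem_post(&responded[id]);
        return NULL;
    }

    int main(void)
    {
        pthread_t t[NCLIENTS];

        for (long i = 0; i < NCLIENTS; i++) {
            sem_init(&responded[i], 0, 0);
            pthread_create(&t[i], NULL, client_thread, (void *)i);
        }

        /* The sending thread blocks until every client has signalled. */
        for (int i = 0; i < NCLIENTS; i++)
            sem_wait(&responded[i]);
        printf("all %d clients have responded\n", NCLIENTS);

        for (int i = 0; i < NCLIENTS; i++)
            pthread_join(t[i], NULL);
        return 0;
    }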