Winsock 2: thread safety for simultaneous sends (TCP, multithreading)

Is it possible to have multiple threads sending on the same socket? Will there be interleaving of the streams, or will the socket block on the first thread (assuming TCP)? The majority of opinions I've found seem to warn against doing this for obvious fears of interleaving, but I've also found a few comments stating the opposite. Are interleaving fears a carryover from Winsock 1, and are they well founded for Winsock 2? Is there a way to set up a Winsock 2 socket that would allow for a lack of local synchronization?
Two of the contrary opinions are below... who's right?
comment 1
"Winsock 2 implementations should be completely thread safe. Simultaneous reads / writes on different threads should succeed, or fail with WSAEINPROGRESS, depending on the setting of the overlapped flag when the socket is created. Anyway by default, overlapped sockets are created; so you don't have to worry about it. Make sure you don't use NT SP6, if ur on SP6a, you should be ok !"
source
comment 2
"The same DLL doesn't get accessed by multiple processes as of the introduction of Windows 95. Each process gets its own copy of the writable data segment for the DLL. The "all processes share" model was the old Win16 model, which is luckily quite dead and buried by now ;-)"
source
Looking forward to your comments!
Jim
~edit1~
To clarify what I mean by interleaving: thread 1 sends the message "Hello" and thread 2 sends the message "world!". The recipient receives "Hwoel lorld!". This assumes both messages were NOT sent in a while loop. Is this possible?

I'd really advise against doing this in any case. The send functions might send less than you tell them to for various very legitimate reasons, and if another thread can enter and try to also send something, you're just messing up your data.
Now, you can certainly write to a socket from several threads, but you no longer have any control over what gets on the wire unless you have proper locking at the application level.
consider sending some data:
WSABUF wsaBuf = { buflen, buf };  /* WSABUF is { length in bytes, pointer } */
DWORD sent = 0;
WSASend(sock, &wsaBuf, 1, &sent, 0, NULL, NULL);
The sent parameter will hold the number of bytes actually sent - similar to the return value of the send() function. To send all the data in buf, you will have to loop, doing a WSASend until all the data actually gets sent.
If, say, the first WSASend sends all but the last 4 bytes, another thread might go and send something while you loop back and try to send the last 4 bytes.
With proper locking to ensure that can't happen, it should be no problem sending from several threads - I wouldn't do it anyway, just for the pure hell it will be to debug when something does go wrong.
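To make that concrete, here's a minimal sketch of the locking approach in plain Winsock C. The g_sendLock critical section and the send_all_locked helper are my own illustrative names, not anything from Winsock itself:

#include <winsock2.h>
#include <windows.h>

/* Hypothetical lock guarding this socket's send path; initialize it once
   with InitializeCriticalSection() before any thread sends. */
CRITICAL_SECTION g_sendLock;

/* Send the whole buffer under the lock, so a partial send from one
   thread can't interleave with another thread's data. */
int send_all_locked(SOCKET sock, const char *buf, int len)
{
    int total = 0;
    EnterCriticalSection(&g_sendLock);
    while (total < len) {
        int n = send(sock, buf + total, len - total, 0);
        if (n == SOCKET_ERROR) {
            LeaveCriticalSection(&g_sendLock);
            return SOCKET_ERROR;  /* caller can check WSAGetLastError() */
        }
        total += n;
    }
    LeaveCriticalSection(&g_sendLock);
    return total;
}

Because the lock is held for the entire loop, each thread's message reaches the wire as one contiguous run of bytes.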

is it possible to have multiple threads sending on the same socket?
Yes - although, depending on the implementation, this can be more or less visible. First, I'll clarify where I am coming from:
C# / .Net 3.5
System.Net.Sockets.Socket
The overall visibility (i.e. required management) of threading and the headaches incurred will depend directly on how the socket is implemented (synchronously or asynchronously). If you go the synchronous route, then you have a lot of work manually managing connecting, sending, and receiving over multiple threads. I highly recommend avoiding that implementation. The effort to perform the synchronous methods correctly and efficiently in a threaded model is simply not worth it compared to the effort of implementing the asynchronous methods.
I have implemented an asynchronous TCP server in less time than it took me to implement the threaded synchronous version. Async is much easier to debug, and if you are intent on TCP (my favorite choice) then you really have few worries about lost messages, missing data, or whatever.
will there be interleaving of the streams or will the socket block on the first thread (assuming tcp)?
I had to research interleaved streams (from wiki) to ensure that I was accurate in my understanding of what you are asking. To further understand interleaving and mixed messages, refer to these links on wiki:
Real Time Messaging Protocol
Transmission Control Protocol
Specifically, the power of TCP is best described in the following section:
Due to network congestion, traffic load balancing, or other unpredictable network behavior, IP packets can be lost, duplicated, or delivered out of order. TCP detects these problems, requests retransmission of lost packets, rearranges out-of-order packets, and even helps minimize network congestion to reduce the occurrence of the other problems. Once the TCP receiver has finally reassembled a perfect copy of the data originally transmitted, it passes that datagram to the application program. Thus, TCP abstracts the application's communication from the underlying networking details.
What this means is that packets which arrive interleaved or out of order will be re-ordered back into the messages as the sender sent them. It is expected that threading is or would be involved in developing a performance-driven TCP client/server mechanism, whether through async or sync methods.
In order to keep a socket from blocking, you can set its Blocking property to false.
I hope this gives you some good information to work with. Heck, I even learned a little bit...

Related

Is python3's BufferedProtocol an abstraction over TCP? Or is it so low-level that I have to implement the TCP things too?

I am referring to this: https://docs.python.org/3/library/asyncio-protocol.html#asyncio.BufferedProtocol
I haven't seen the answer to this question documented anywhere, and I want to know the answer before writing any code.
It seems to imply that it is a modification of asyncio.Protocol (for TCP), but seeing as TCP is not mentioned for BufferedProtocol, it's got me concerned that I'd have to contend with out-of-order packets etc.
Many thanks!
BufferedProtocol isn't a protocol based on TCP; it's an interface (base class) for custom implementations of asyncio protocols, specifically ones that try to minimize the amount of copying. The docstring provides more details:
The idea of BufferedProtocol is that it allows to manually allocate and control the receive buffer. Event loops can then use the buffer provided by the protocol to avoid unnecessary data copies. This can result in noticeable performance improvement for protocols that receive big amounts of data. Sophisticated protocols can allocate the buffer only once at creation time.
Currently none of the protocols shipped with asyncio derive from BufferedProtocol, so the use case for this is user code that needs to achieve high throughput - see the BPO issue and the linked mailing list post for details.
seeing as TCP is not mentioned for BufferedProtocol, it's got me concerned that I'd have to contend with out-of-order packets etc.
Unless you are writing custom low-level asyncio code, you shouldn't care about BufferedProtocol at all. Regular asyncio TCP code calls functions such as open_connection or start_server, both of which provide a streaming abstraction on top of TCP sockets in the usual way (using a buffer, handling errors, etc.).
I can confirm - BufferedProtocol is for TCP only, not for files or anything else. And it gives you a handle on a zero-copy buffer to work with. That's basically all I wanted to know.

TCP close() vs shutdown() on Linux

I know there are already a lot of similar questions on Stack Overflow, but nothing seems convincing. Basically I am trying to understand under what circumstances I need to use one over the other, or use both.
I would also like to understand whether close() and shutdown() with SHUT_RDWR are the same.
Closing TCP connections has gathered so much confusion that we can rightfully say either this aspect of TCP has been poorly designed, or its documentation is lacking.
Short answer
To do it the proper way, you should use all 3: shutdown(SHUT_WR), shutdown(SHUT_RD), and close(), in this order. No, shutdown(SHUT_RDWR) and close() are not the same. Read their documentation carefully, along with questions on SO and articles about them; you need to read more than one for an overview.
Longer answer
The first thing to clarify is what you aim for when closing a connection. Presumably you use TCP for a higher-level protocol (request-response, steady stream of data, etc.). Once you decide to "close" (terminate) the connection, you have presumably sent and received everything you needed to (otherwise you would not be terminating) - so what more do you want? I'll try to outline what you may want at the time of termination:
1. to know that all data sent in either direction reached the peer
2. to be informed if there are any errors (in transmitting data that was still being sent when you decided to terminate, as well as after that, and in the termination itself - which also requires data to be sent/received)
3. optionally, some applications want to be non-blocking up to and including the termination
Unfortunately TCP doesn't make these features easily available; the user needs to understand what's under the hood and how the system calls interact with it. A key sentence is in the recv manpage:
When a stream socket peer has performed an orderly shutdown, the return value will be 0 (the traditional "end-of-file" return).
What the manpage means here is that an orderly shutdown is done by one end (A) choosing to call shutdown(SHUT_WR), which causes a FIN packet to be sent to the peer (B), and this packet takes the form of a 0 return code from recv inside B. (Note: the FIN packet, being an implementation aspect, is not mentioned by the manpage.) The "EOF", as the manpage calls it, means there will be no more transmission from A to B, but application B can, and should, continue to send what it was in the process of sending, and even send some more, potentially (A is still receiving). When that sending is done (shortly), B should itself call shutdown(SHUT_WR) to close the other half of the duplex. Now app A receives EOF and all transmission has ceased. The two apps are OK to call shutdown(SHUT_RD) to close their sockets for reading and then close() to free the system resources associated with the socket (TODO: I haven't found clear documentation that says the 2 calls to shutdown(SHUT_RD) are sending the ACKs in the termination sequence FIN --> ACK, FIN --> ACK, but this seems logical).
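As an illustration, here's a minimal sketch of that sequence from the initiator's (A's) side; the function name is mine and error handling is trimmed:

#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

/* Orderly termination, initiator's side, as described above. */
void terminate_connection(int sock)
{
    char buf[4096];
    ssize_t n;

    shutdown(sock, SHUT_WR);  /* sends FIN: "no more data from me" */

    /* Drain whatever the peer still has to send; recv() returning 0
       (the "EOF") means the peer has sent its own FIN. */
    while ((n = recv(sock, buf, sizeof buf, 0)) > 0)
        ;  /* process or discard the remaining data */

    shutdown(sock, SHUT_RD);  /* done reading as well */
    close(sock);              /* free the system resources */
}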
Onwards to our aims: for (1) and (2), the application must basically somehow wait for the shutdown sequence to happen and observe its outcome. Notice how, if we follow the small protocol above, it is clear to both apps that the termination initiator (A) has sent everything to B. This is because B received EOF (and EOF is received only after everything else). A also received EOF, which is issued in reply to its own EOF, so A knows B received everything (there is a caveat here - the termination protocol must have a convention for who initiates the termination, so that both peers don't do so at once). However, the reverse is not true. After B calls shutdown(SHUT_WR), there is nothing coming back at the app level to tell B that A received all the data sent, plus the FIN (the A->B transmission had ceased!). Correct me if I'm wrong, but I believe at this stage B is in the LAST_ACK state, and when the final ACK arrives (step #4 of the 4-way termination handshake), the close concludes, but the application is not informed unless it had set SO_LINGER with a long-enough timeout. SO_LINGER "ON" instructs the shutdown call to block (be performed in the foreground), hence the shutdown call itself will do the waiting.
In conclusion, what I recommend is to configure SO_LINGER ON with a long timeout, which causes the call to block and hence return any errors. What is not entirely clear is whether it is shutdown(SHUT_WR) or shutdown(SHUT_RD) which blocks in expectation of the LAST_ACK, but that is of less importance as we need to call both.
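A sketch of that configuration (the helper name is mine; on Linux, l_linger is in seconds):

#include <sys/socket.h>

/* Enable SO_LINGER with a long timeout so the termination path blocks
   until the handshake completes (or times out) and can report errors. */
int enable_linger(int sock, int seconds)
{
    struct linger lg = { .l_onoff = 1, .l_linger = seconds };
    return setsockopt(sock, SOL_SOCKET, SO_LINGER, &lg, sizeof lg);
}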
Blocking on shutdown is problematic for requirement #3 above, where e.g. you have a single-threaded design that serves all connections. Using SO_LINGER may block all connections on the termination of one of them. I see 3 routes to address the problem:
1. shutdown with LINGER, from a different thread. This will of course complicate the design.
2. linger in the background and either:
2A. "Promote" FIN and FIN2 to app-level messages which you can read and hence wait for. This basically moves the problem that TCP was meant to solve one level higher, which I consider hack-ish, also because the ensuing shutdown calls may still end in limbo.
2B. Try to find a lower-level facility such as the SIOCOUTQ ioctl described here, which queries the number of unACKed bytes in the network stack. The caveats are many: this is Linux-specific, we are not sure if it applies to FIN ACKs (to know whether closing is fully done), and you'd need to poll it periodically, which is complicated. Overall I'm leaning towards option 1.
I tried to write a comprehensive summary of the issue, corrections/additions welcome.
TCP sockets are bidirectional - you send and receive over the one socket. close() stops communication in both directions. shutdown() provides another parameter that allows you to specify which direction you might want to stop using.
Another difference (between close() and shutdown(SHUT_RDWR)) is that close() will keep the socket open if another process is using it, while shutdown() shuts down the socket irrespective of other processes.
shutdown() is often used by clients to provide framing - to indicate the end of their request, e.g. an echo service might buffer up what it receives until the client shutdown()s their send side, which tells the server that the client has finished, and the server then replies; the client can receive the reply because it has only shutdown() writing, not reading, through its socket.
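A minimal sketch of that idiom from the client's side (my own illustration; a real client would also loop on partial sends):

#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

/* Send one request, half-close the write side to mark its end, then
   read the reply until the server closes its end. */
void echo_round_trip(int sock, const char *req)
{
    send(sock, req, strlen(req), 0);
    shutdown(sock, SHUT_WR);  /* tells the server: request complete */

    char buf[4096];
    ssize_t n;
    while ((n = recv(sock, buf, sizeof buf, 0)) > 0)
        ;  /* consume the reply; 0 means the server is done */

    close(sock);
}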
close() will close both the sending and receiving ends of the socket. If you want to close only the sending part of the socket, and not the receiving part (or vice versa), you can use shutdown().
close()------->closes both the sending and receiving ends.
shutdown()------->closes only the sending or the receiving end, depending on the argument:
SHUT_RD (shut down the reading end (receiving end))
SHUT_WR (shut down the writing end (sending end))
SHUT_RDWR (shut down both)

What is the side effect of setting tcp_max_tw_buckets to a very small value?

I know it is quite normal to set tcp_max_tw_buckets to a relatively small number such as 30000 or 50000, to avoid the situation where a host has a lot of connections in the TIME_WAIT state and applications fail to open new ones. It is mentioned quite a lot, for example in questions like this: How to reduce number of sockets in TIME_WAIT?
I know TIME_WAIT is a state that exists to keep old TCP packets from being accepted out of order, and that it may be better to use some other approach to cope with it. I also know that if you set it to a small number, things may go wrong.
I feel I'm stuck in a situation where I have to set tcp_max_tw_buckets to a small number, but I don't know the specific scenarios I should avoid.
So my question is: what is the side effect of setting tcp_max_tw_buckets to a very small value, and can I set up a specific scenario in a lab environment where a small tcp_max_tw_buckets will cause trouble?
As you can see in this Kernel source, that option prevents graceful termination of the socket. In terms of the socket state, you have reduced the TIME_WAIT duration for this connection to zero.
So what happens next? First off, you'll see the error message on your server. The rest is then a race condition for subsequent connections from your clients. Section 2 of RFC 1337 covers what you may see. In short, some connections may show the following symptoms:
Corruption of your data stream (because the socket accepts an old transmission).
Infinite ACK loops (due to an old duplicate ACK being picked up).
Dropped connections (due to old data turning up in the SYN-SENT state).
However, proving this may be hard. As noted in the same RFC:
The three hazards H1, H2, and H3 have been demonstrated on a stock Sun OS 4.1.1 TCP running in a simulated environment that massively duplicates segments. This environment is far more hazardous than most real TCP's must cope with, and the conditions were carefully tuned to create the necessary conditions for the failures.
The real answer to your question is that the correct way to avoid TIME_WAIT states is to be the end that receives the first close.
In the case of a server, that means that after you've sent the response you should loop waiting for another request on the same socket, with a read timeout of course, so that it is normally the client end which will close first. That way the TIME_WAIT state occurs at the client, where it is fairly harmless, as clients don't have lots of outbound connections.
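A sketch of that server-side pattern (my own illustration; handle_request is a hypothetical application function):

#include <sys/socket.h>
#include <sys/time.h>
#include <sys/types.h>
#include <unistd.h>

void handle_request(const char *req, size_t len, int sock);  /* hypothetical */

/* Keep serving requests on the same connection with a read timeout, so
   the client normally closes first and TIME_WAIT lands on the client. */
void serve_connection(int sock)
{
    struct timeval tv = { .tv_sec = 30, .tv_usec = 0 };  /* idle timeout */
    setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof tv);

    char buf[4096];
    ssize_t n;
    while ((n = recv(sock, buf, sizeof buf, 0)) > 0)
        handle_request(buf, (size_t)n, sock);

    /* n == 0: the client closed first (the goal); n < 0: timeout/error. */
    close(sock);
}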

TCP Message framing + recv() [linux]: Good conventions?

I am trying to create a P2P application on Linux, which I want to run as efficiently as possible.
The issue I have is with managing packets. As we know, there may be more than one packet in the recv() buffer at any time, so there is a need for some kind of message-framing system to make sure that multiple packets are not treated as one big packet.
So at the moment my packet structure is:
(u16int Packet Length):(Packet Data)
Which requires two calls to recv(); one to get the packet size, and one to get the packet.
There are two main problems with this:
1. A malicious peer could send a packet with a size header of something large, but not send any more data. The application will hang on the second recv(), waiting for data that will never come.
2. Assuming that calling recv() has a noticeable performance penalty (I actually have no idea, correct me if I am wrong), calling recv() twice will slow the program down.
What is the best way to structure packets and the receiving system for both the best efficiency and stability? How do other applications do it? What do you recommend?
Thank you in advance.
I think your "framing" of messages within a TCP stream is right on.
You could consider putting a "magic cookie" in front of each frame (e.g. write the 32-bit int 0xdeadbeef at the top of each frame header, in addition to the packet length) so that it becomes obvious that you are reading a frame header on the first of each pair of recv() calls. If the magic integer isn't present at the start of the message, you have gotten out of sync and need to tear the connection down.
Multiple recv() calls will not likely be a performance hit. As a matter of fact, because TCP messages can get segmented, coalesced, and stalled in unpredictable ways, you'll likely need to call recv() in a loop until you get all the data you expected. This includes your two-byte header as well as the larger read for the payload bytes. It's entirely possible you call recv with a 2-byte buffer to read the "size" of the message, but only get 1 byte back (call recv again, and you'll get the subsequent bytes). What I tell the developers on my team: code your network parsers as if it were possible that recv only delivered 1 byte at a time.
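To make that concrete, here's a minimal sketch of such a loop (my own helper, not from the answer):

#include <sys/socket.h>
#include <sys/types.h>

/* Loop until exactly len bytes have arrived; recv() may deliver as
   little as 1 byte at a time, even for a 2-byte length header. */
int recv_all(int sock, char *buf, size_t len)
{
    size_t got = 0;
    while (got < len) {
        ssize_t n = recv(sock, buf + got, len - got, 0);
        if (n <= 0)
            return -1;  /* error, or the peer closed the connection */
        got += (size_t)n;
    }
    return 0;
}

You'd call it once for the 2-byte size field (then convert with ntohs()) and once more for the payload.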
You can use non-blocking sockets and the "select" call to avoid hanging. If the data doesn't arrive within a reasonable amount of time (or more data arrives than expected - such that syncing on the next message becomes impossible), you just tear the connection down.
I'm working on a P2P project of my own. Would love to trade notes. Follow up with me offline if you like.
I disagree with the others: TCP is a reliable protocol, so a magic packet header is useless unless you fear that your client code isn't stable or that unsolicited clients will connect to your port number.
Create a buffer for each client, and use non-blocking sockets with select/poll/epoll/kqueue. If there is data available from a client, read as much as you can; it doesn't matter if you read more than one "packet". Then check whether you've read enough that the size field is available; if so, check whether you've read the whole packet (or more). If so, process the packet. Then, if there's more data, repeat this procedure. If a partial packet is left over, you can move it to the start of your buffer, or use a circular buffer so you don't have to do those memmoves.
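Here's a rough sketch of that buffering scheme, assuming the question's 2-byte length prefix in network byte order; handle_packet is a hypothetical application callback:

#include <arpa/inet.h>
#include <stdint.h>
#include <string.h>

#define BUFCAP 65536

struct client {
    char   buf[BUFCAP];
    size_t used;  /* bytes currently buffered for this client */
};

void handle_packet(const char *data, uint16_t len);  /* hypothetical */

/* Call after each recv() that appended data at c->buf + c->used: peel
   off every complete (length):(data) packet, keep any partial tail. */
void parse_packets(struct client *c)
{
    size_t off = 0;
    while (c->used - off >= 2) {            /* size field available? */
        uint16_t len;
        memcpy(&len, c->buf + off, 2);
        len = ntohs(len);
        if (c->used - off < 2u + len)       /* whole packet available? */
            break;
        handle_packet(c->buf + off + 2, len);
        off += 2u + len;
    }
    memmove(c->buf, c->buf + off, c->used - off);  /* shift partial tail */
    c->used -= off;
}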
Client timeout can be handled in your select/... loop.
That's what I would use if you're doing something complex with the received packet data. If all you do is write the results to a file (in bigger chunks), then sendfile/splice yields better performance. Just read the packet length (could be multiple reads), then use multiple calls to sendfile until you've read the whole packet (keep track of how much is left to read).
You can use non-blocking calls to recv() (by setting SOCK_NONBLOCK on the socket) and wait for the socket to become ready for reading using select() (with a timeout) in a loop.
Then, if a file descriptor stays in the "waiting for data" state for too long, you can just close the socket.
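A sketch of that wait (the helper name is mine):

#include <sys/select.h>

/* Wait up to `seconds` for the socket to become readable. Returns > 0 if
   readable, 0 on timeout (a candidate for closing), -1 on error. */
int wait_readable(int sock, int seconds)
{
    fd_set rfds;
    struct timeval tv = { .tv_sec = seconds, .tv_usec = 0 };

    FD_ZERO(&rfds);
    FD_SET(sock, &rfds);
    return select(sock + 1, &rfds, NULL, NULL, &tv);
}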
TCP is a stream-oriented protocol - it doesn't actually have any concept of packets. So, in addition to receiving multiple application-layer packets in one recv() call, you might also receive only part of an application-layer packet, with the remainder coming in a future recv() call.
This implies that robust receiver behaviour is obtained by receiving as much data as possible at each recv() call, then buffering that data in an application-layer buffer until you have at least one full application-layer packet. This also avoids your two-calls-to-recv() problem.
To always receive as much data as possible at each recv(), without blocking, you should use non-blocking sockets and call recv() until it returns -1 with errno set to EWOULDBLOCK.
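As a sketch of that drain loop (my own helper; assumes the socket is already non-blocking):

#include <errno.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Read until recv() would block, appending into the application buffer.
   Returns -1 on error, 0 if the peer closed, else bytes buffered so far. */
ssize_t drain_socket(int sock, char *buf, size_t cap, size_t *used)
{
    for (;;) {
        ssize_t n = recv(sock, buf + *used, cap - *used, 0);
        if (n > 0) {
            *used += (size_t)n;
            if (*used == cap)
                return (ssize_t)*used;  /* buffer full; caller must parse */
            continue;
        }
        if (n == 0)
            return 0;                   /* peer performed orderly shutdown */
        if (errno == EWOULDBLOCK || errno == EAGAIN)
            return (ssize_t)*used;      /* drained everything for now */
        return -1;                      /* real error */
    }
}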
As others said, a leading magic number (OT: see man file) is a good (99.999999%) solution for identifying datagram boundaries, and a timeout (using non-blocking recv()) is good for detecting missing/late packets.
If you expect attackers, you should put a CRC in your packets. If a professional attacker really wants to, he/she will figure out sooner or later how your CRC works, but it's still harder than creating a packet without a CRC. (Also, if safety is critical, you will find SSL libraries/examples/code on the net.)

epoll and set multiple interests at once

Interestingly, I cannot find any discussion on this other than some old slides from 2004.
IMHO, the current scheme of epoll() usage is begging for something like an epoll_ctlv() call. Although this call does not make sense for typical HTTP web servers, it does make sense in a game server where we are sending the same data to multiple clients at once. This does not seem hard to implement given the fact that epoll_ctl() is already there.
Do we have any reason for not having this functionality? Maybe there's no optimization window there?
You would typically only use epoll_ctl() to add and remove sockets from the epoll set as clients connect and disconnect, which doesn't happen very often.
Sending the same data to multiple sockets would instead require a version of send() (or write()) that takes a vector of file descriptors. The reason this hasn't been implemented is probably just that no one with sufficient interest in it has done so yet (of course, there are lots of subtle issues - what if each destination file descriptor can only successfully write a different number of bytes?).
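For comparison, what a game server does today is just loop over the descriptors, which also shows the per-descriptor partial-write subtlety mentioned above (sketch, names mine):

#include <sys/socket.h>
#include <sys/types.h>

/* Send the same buffer to every client; each send() may accept a
   different number of bytes, so each descriptor tracks its own offset. */
void broadcast(const int *fds, int nfds, const char *buf, size_t len)
{
    for (int i = 0; i < nfds; i++) {
        size_t off = 0;
        while (off < len) {
            ssize_t n = send(fds[i], buf + off, len - off, 0);
            if (n <= 0)
                break;  /* error: flag or drop this client */
            off += (size_t)n;
        }
    }
}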
