Python TCP packets getting mixed - python-3.x

I have a multiplayer game written in Python that uses TCP. When I send two packets at roughly the same time, they get mixed together: for example, if I send "Hello there" and "man", the client receives "Hello thereman".
What should I do to prevent them from getting mixed?

That's the way TCP works. It is a byte stream. It is not message-based.
Consider if you write "Hello there" and "man" to a file. If you read the file, you see "Hello thereman". A socket works the same way.
If you want to make sense of the byte stream, you need other information. For example, add line feeds to the stream to indicate end of line. For a binary file, include data structures such as "2-byte length (big-endian) followed by <length> bytes of data" so you can read the stream and break it into decipherable messages.
Note that the socket methods send() and recv() must have their return values checked. recv(1024), for example, can return b'' (socket closed) or 1-1024 bytes of data; the size is only a maximum to be returned. send() can send less than requested, and you'll have to re-send the part that didn't go out (or use sendall() in the first place).
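A minimal sketch of that length-prefix scheme in Python (the 2-byte header size and the helper names are illustrative, not something from the original answer):

    import socket
    import struct

    def send_msg(sock: socket.socket, payload: bytes) -> None:
        # Prefix each message with a 2-byte big-endian length (so payloads
        # are limited to 65535 bytes) and let sendall() handle partial sends.
        sock.sendall(struct.pack(">H", len(payload)) + payload)

    def recv_exact(sock: socket.socket, n: int) -> bytes:
        # recv(n) may return fewer than n bytes, so loop until we have them all.
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("socket closed mid-message")
            buf += chunk
        return buf

    def recv_msg(sock: socket.socket) -> bytes:
        # Read the 2-byte length header, then exactly that many payload bytes.
        (length,) = struct.unpack(">H", recv_exact(sock, 2))
        return recv_exact(sock, length)

With this, send_msg(sock, b"Hello there") followed by send_msg(sock, b"man") arrives as two distinct messages on the receiving side.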
Or, use a framework that does all this for you...

Related

Incomplete Output from connection.recv buffer [duplicate]

Some sources say that recv should be given the maximum possible length of a message, like recv(1024):
message0 = str(client.recv(1024).decode('utf-8'))
But other sources say it should be given the exact size of the incoming message. If the message is "hello":
message0 = str(client.recv(5).decode('utf-8'))
What is the correct way of using recv()?
Some sources say ... But other sources say ... message ...
Both sources are wrong.
The argument to recv is the maximum number of bytes one wants to read at once.
With a UDP socket, this should be the size of the message one wants to read, or larger; a single recv will only return a single message anyway. If the given size is smaller than the message, the message is truncated and the rest is discarded.
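A quick way to see that truncation (a hypothetical local demo):

    import socket

    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("127.0.0.1", 0))
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    tx.sendto(b"0123456789", rx.getsockname())
    # recv(5) on a 10-byte datagram returns b'01234'; the other
    # 5 bytes are discarded, not saved for the next recv.
    print(rx.recv(5))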
With a TCP socket (the case you ask about), there is no concept of a message in the first place, since TCP is a byte stream only. recv will simply return the bytes available for reading, up to the given size. Specifically, a single recv in a TCP receiver does not need to match a single send in the sender. It might match, and it often will when the amount of data is small, but there is no guarantee and one should never rely on it.
... message0 = str(client.recv(5).decode('utf-8'))
Note that calling decode('utf-8') directly on the data returned by recv is a bad idea. One first needs to be sure that all the expected data has been read, and only then call decode('utf-8'). If only part of the data has been read, the end of the read data could fall in the middle of a character, since a single character in UTF-8 may be encoded as multiple bytes (everything except ASCII characters). If decode('utf-8') is called on an incomplete encoded character, it will raise a UnicodeDecodeError and your code will break.
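One way to guard against this in Python is an incremental decoder, which buffers a trailing partial character instead of raising (a sketch; the read loop and names are assumptions, not from the original answer):

    import codecs
    import socket

    def read_text(sock: socket.socket) -> str:
        # The incremental decoder holds back an incomplete multi-byte
        # character at the end of a chunk until the rest arrives.
        decoder = codecs.getincrementaldecoder("utf-8")()
        parts = []
        while True:
            chunk = sock.recv(1024)
            if not chunk:
                # final=True flushes the decoder and raises if the stream
                # ended in the middle of a character.
                parts.append(decoder.decode(b"", final=True))
                return "".join(parts)
            parts.append(decoder.decode(chunk))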

netperf socket size vs buffer set for send/recv calls?

While I was trying to implement benchmark testware using netperf, I happened to read its manual, and the following question came up.
The TCP_STREAM test has options -s and -S to specify the local (netperf client) and remote (netperf server) socket buffer sizes respectively. Is that the regular BSD socket buffer size? There are also options to specify the local send message size -m and the remote receive message size -M; is this the total message size after all TCP/IP encapsulation? Can anybody throw some light on this? It would be great if you could illustrate with a use case why we need these separate parameters, as the BSD socket buffer size appears to be the upper boundary here.
The socket buffer sizes (set via -s and -S) control how much data may be outstanding on the connection at one time, by affecting either the receiver's advertised window (which will be based on SO_RCVBUF) or how much data the sender can hold waiting for ACKnowledgement (which will be based on SO_SNDBUF).
The send and receive message sizes (-m and -M) control how much data is presented in any one "send" (-m) or requested in any one "recv" (-M) call.
As TCP is a streaming protocol, it is perfectly legal/possible to make a send call with a number of bytes larger than the socket buffer(s). When the socket is blocking (as netperf uses), it simply means the send call will not return until the last of its bytes has been placed into the send socket buffer. On the receive side, one can ask for more than a socket buffer's worth of data in a single receive, but the call will return with however many bytes happen to be there at the time if there are any, or with however many bytes arrive if the socket buffer was empty at the time of the call (again because netperf uses blocking sockets/calls).
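For what it's worth, a rough Python analogue of those knobs (the sizes are illustrative; the kernel may round or double the buffer values you set):

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Socket buffer sizes, roughly netperf's -s / -S:
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 64 * 1024)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 64 * 1024)
    # The per-call message size, roughly netperf's -m, is independent of
    # the buffer sizes; a blocking sendall() of 1 MiB simply waits until
    # the last byte has been placed into the send buffer:
    # sock.sendall(b"x" * (1024 * 1024))
    print(sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))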

What is BitTorrent peer (Deluge) saying?

I'm writing a small app to test how BitTorrent p2p works. I created a sample torrent and am seeding it from my Deluge client, and from my app I'm trying to connect to Deluge and download the file.
The torrent in question is a single-file torrent (file is called A - without any extension), and its data is the ASCII string Test.
Referring to this I was able to submit the initial handshake and also get a valid response back.
Immediately afterwards, Deluge sends even more data. From the 5th byte it would seem to be a bitfield message, but I'm not sure what to make of it. I read that torrent clients may send a mixture of Bitfield and Have messages to show which parts of the torrent they possess. (My client isn't sending any bitfield, since it assumes it has no part of the file in question.)
If my understanding is correct, it's stating that the message size is 2: one byte for the identifier plus one byte of payload. If that's the case, why is it sending so much more data, and what is that data supposed to be?
Same thing happens after my app sends an interested command. Deluge responds with a 1-byte message of unchoke (but then again appends more data).
And finally, when it actually submits the piece, I'm not sure what to make of the data. The first underlined byte is 84, which corresponds to the letter T, as expected, but I cannot make much sense of the rest of the data.
Note that the link in question does not really specify in what order the clients should send messages once the initial handshake is completed. I just sent interested and request in the order that seemed to make sense to me, but I might be completely off.
I don't think Deluge is sending the additional bytes you're seeing.
If you look at them, you'll notice that all of the extra bytes are bytes that already existed in the handshake message, which should have been the longest message you received so far.
I think you're reading new messages into the same buffer, without zeroing it out or anything, so you're seeing bytes from earlier messages again, following the bytes of the latest message you read.
Consider checking if the network API you're using has a way to check the number of bytes actually received, and only look at that slice of the buffer, not the entire thing.
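In Python, for instance, that might look like the following sketch (the names are illustrative):

    import socket

    buf = bytearray(4096)

    def read_next(sock: socket.socket) -> bytes:
        # recv_into() reports how many bytes it actually wrote, so we copy
        # out only that slice; stale bytes left over from an earlier,
        # longer message in the same buffer never get re-read.
        n = sock.recv_into(buf)
        return bytes(buf[:n])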

Serial port programming - Recognize end of received data

I am writing a serial port application in VC++, in which I can open a port on a switch device, send some commands, and display their output. I am running a thread that continually reads the open port for the output of the given command. My main thread waits until the read completes, but the problem is: how do I recognize that the command output has ended so I can signal the main thread?
Almost any serial port communication requires a protocol: some way for the receiver to discover that a response has been received in full. A very simple one is a unique byte or character that can never appear in the rest of the data; a linefeed is the standard, used by any modem for example.
This needs to get more elaborate when you need to transfer arbitrary binary data. A common solution is to send the length of the response first; the receiver can then count down the received bytes to know when the response is complete. This is often embellished with a specific start byte value so that the receiver has some chance to re-synchronize with the transmitter, and often includes a checksum or CRC so that the receiver can detect transmission errors. A further embellishment is to make errors recoverable with ACK/NAK responses from the receiver. You'd then be well on your way to re-inventing TCP. The RATP protocol in RFC 916 is a good example, albeit widely ignored.
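A toy sketch of such a protocol in Python (the start byte, length field, and additive checksum are all illustrative choices, and read_byte() stands in for whatever blocking serial-read primitive you have, returning one byte as an int):

    START = 0x7E

    def read_frame(read_byte) -> bytes:
        # Re-synchronize: skip bytes until the start marker appears.
        while read_byte() != START:
            pass
        length = read_byte()
        payload = bytes(read_byte() for _ in range(length))
        checksum = read_byte()
        # Detect transmission errors with a simple additive checksum.
        if checksum != sum(payload) & 0xFF:
            raise ValueError("bad checksum, discard frame")
        return payload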

Determine if there is Data left on the socket and discard it

I'm writing an interface under Linux which gets data from a TCP socket. The user provides a buffer in which the received data is stored. If the provided buffer is too small, I just want to return an error.
The first problem is determining whether the buffer was too small. The recv() function just returns the number of bytes actually written into the buffer. If I use the MSG_TRUNC flag described on the recv() manpage, it still returns the same.
The second problem is discarding the data still queued in the socket. If I determine that my provided buffer was too small, I want to erase everything left on the socket. Is there any way to do this other than closing and reopening the socket, or simply receiving until nothing is left?
As documented in the man page, MSG_TRUNC is only valid for packet sockets (e.g. UDP), so it will not work as you want for your TCP socket, which is stream-based. There are literally hundreds of posts on Stack Overflow and elsewhere about preserving application message boundaries on TCP (hint: you need to do this yourself; TCP is a byte-stream interface and doesn't), so I won't go into the details here. Suffice it to say, you need a mechanism to know how big an application "message" or "packet" is on the recv() side to do what you want over TCP (or you need to switch to UDP).
For TCP, if you need to "drain" the socket, reading until there is no data left would work. However, again, you need to consider the message boundaries mentioned above so that you do not read through one "message" and start eating into the next (the most important point to remember is that TCP provides a byte-stream interface and will not preserve your concept of application-level packets or messages).
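Assuming a length-prefix scheme like the ones discussed above, a drain-on-overflow sketch in Python could look like this (the 2-byte header is an assumption of the sketch, not something TCP provides):

    import socket

    def _recv_exact(sock: socket.socket, n: int) -> bytes:
        data = b""
        while len(data) < n:
            chunk = sock.recv(n - len(data))
            if not chunk:
                raise ConnectionError("peer closed the connection")
            data += chunk
        return data

    def recv_bounded(sock: socket.socket, user_buf: bytearray) -> int:
        msg_len = int.from_bytes(_recv_exact(sock, 2), "big")
        if msg_len > len(user_buf):
            # Drain exactly this message so the next one starts cleanly.
            _recv_exact(sock, msg_len)
            raise BufferError("message larger than the provided buffer")
        user_buf[:msg_len] = _recv_exact(sock, msg_len)
        return msg_len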
