Sending UDP messages from different threads, C language - Linux

I am running a system that has 5 threads, all of which send UDP messages to the same IP and port concurrently.
How does Linux handle this? Is there any risk of receiving mixed messages? I am using the sendto() function to send the UDP messages.
Many thanks.

How does Linux handle this?
It handles it just fine.
Is there any risk of receiving mixed messages?
It's unclear what you mean by 'mixed messages'. As is always the case with UDP, there is no guarantee that the packets will arrive at the destination port in any particular order, and there is no guarantee that they will arrive at all -- but if they do arrive, the data in each packet received will be identical to the data in a packet that was previously sent. In particular, you don't have to worry about receiving a packet that contains, e.g., half of the data from one packet and half of the data from another packet.
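By way of illustration, here is a minimal sketch of the scenario in the question - five threads sharing one UDP socket, each sendto() emitting a self-contained datagram. The address and port are placeholders:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

/* Shared socket: each sendto() is atomic at the datagram level, so no
 * locking is needed to keep the payloads from mixing. */
static int sock;
static struct sockaddr_in dest;

static void *sender(void *arg)
{
    char msg[64];
    snprintf(msg, sizeof(msg), "hello from thread %ld", (long)arg);
    sendto(sock, msg, strlen(msg), 0,
           (struct sockaddr *)&dest, sizeof(dest));
    return NULL;
}

int main(void)
{
    pthread_t threads[5];

    sock = socket(AF_INET, SOCK_DGRAM, 0);
    memset(&dest, 0, sizeof(dest));
    dest.sin_family = AF_INET;
    dest.sin_port = htons(9999);                 /* placeholder port */
    inet_pton(AF_INET, "127.0.0.1", &dest.sin_addr);

    for (long i = 0; i < 5; i++)
        pthread_create(&threads[i], NULL, sender, (void *)i);
    for (int i = 0; i < 5; i++)
        pthread_join(threads[i], NULL);
    return 0;
}

Each receiver will see each of the five payloads whole (or not at all), never a blend of two of them.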

UDP is a means of delivering a single packet unreliably: it makes no guarantee of delivery order, or even that the packet is received at all.
If you need to send data reliably and in order, use TCP, that's what it's for. ;)

Related

NodeJs - TCP/IP Socket send/receive serially?

The TCP server I am hitting (trying to use the built-in Node TLS socket) expects a handshaking process of sends and receives in a certain order (send, on receipt of success, send more messages, on success, send more, etc.). The received messages do not contain anything to tell me which send they are responding to, so I am not able to easily use the streaming nature of the built-in TCP Node library.
Any ideas on the best way to handle this case in Node?
Example (Python) showing the process:
s.send(b"hello")
s.send(b"send this 1")
reply = s.recv(1024)
message = reply[0]          # OK is an application-defined status code
if message == OK:
    print('Got OK for hello')
    s.send(b"send this 2")
    reply = s.recv(1024)
    message = reply[0]
    if message == OK:
        print('Got it')
else:
    raise Exception('Failed to send hello')
When you have non-blocking I/O and you want to send data and then read the specific response to that send, you need to set up appropriate state so that when the next chunk of data comes in, you know exactly what it belongs to and therefore what to do with it.
There are a number of ways I can think of to do that:
Create a general-purpose state machine where, whenever you read data, you can tell what state the socket is in and therefore what you are supposed to do with the data you read.
Create temporary listeners: send data, then add a one-shot listener (you can use .once()) for incoming data that is designed to process the specific response you are expecting. When the data arrives, the listener is removed.
Your pseudo-code example does not show enough info for anyone to make a more concrete suggestion. TCP, by its very nature, is stream driven. It doesn't have any built-in sense of a message or a packet. So, what you show doesn't even address the most basic level of any TCP protocol, which is how to know when you've received an entire response.
Even your reply = s.recv(), shown in some other language, isn't practical in TCP (no matter the language) because s.recv() needs to know when it has got a complete message/chunk/whatever it is that you're waiting to receive. TCP delivers data in the order sent, but does not have any sense of a particular packet of information that goes together. You have to supply that on top of the TCP layer. Common techniques used for delineating messages are:
Some message delimiter (like a carriage return or line feed or a zero byte or some other tag - all of which are known not to occur inside the message itself)
Sending a length first so the reader knows exactly how many bytes to read.
Wrapping messages in some sort of container where the start and end of the container are made clear by the structure of the container (note options 1 and 2 above are just specific implementations of such a container). For example, the webSocket protocol uses a very specific container model that includes some length data and other info.
I was thinking of showing you an example using socket.once('data', ...) to listen for the specific response, but even that won't work properly without a way to delineate an incoming message, so that you know when a complete incoming message has been received.
So, one of your first steps would be to implement a layer on top of TCP that reads data and knows how to break it into discrete messages (knows both when a complete message has arrived and how to break up multiple messages that might be arriving) and then emits your own event on the socket when a whole message has arrived. Then, and only then, can you start to implement the rest of your state machine using the above techniques.
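Although this question is about Node, the framing layer itself is language-agnostic. Here is a sketch of the length-prefix variant (option 2 above) in C, assuming the caller accumulates incoming bytes into buf and that on_message is a hypothetical per-message callback:

#include <arpa/inet.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical callback invoked once per complete message. */
void on_message(const char *msg, uint16_t len);

/* Scan an accumulation buffer for complete length-prefixed messages
 * (2-byte big-endian length, then payload). Returns the number of
 * bytes consumed; the caller shifts any remainder to the front. */
size_t extract_messages(const char *buf, size_t avail)
{
    size_t off = 0;
    while (avail - off >= 2) {
        uint16_t len;
        memcpy(&len, buf + off, 2);
        len = ntohs(len);
        if (avail - off < 2u + len)
            break;                 /* wait for the rest of this message */
        on_message(buf + off + 2, len);
        off += 2u + len;
    }
    return off;
}

In Node you would run the same logic over a Buffer inside the 'data' handler and emit your own 'message' event for each complete frame.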

Does Linux socket poll handle discrete messages?

I dropped in to ask: if I send, say, two discrete messages with send() (Linux C/C++) and read them out in a poll() callback, can it happen that the two writes (packets) will be read out as one, or will there be a separate poll event for each message? Note that I use an ioctl to peek at the size of the pending data to be read. Is that always the size of one message, or can it be the size of more?
Edit: socket type is SOCK_STREAM.
With SOCK_STREAM sockets (I guess you are not using a SOCK_DGRAM socket?) the messages may be joined (there are no message boundaries in a stream), or a single message may be split into several parts.
To recover the message boundaries, prefix each message with its length.
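A sketch of the sending side of that scheme, using a 2-byte big-endian length prefix and a single writev() so the prefix and payload leave together (short-write handling is omitted for brevity and would be needed in production):

#include <arpa/inet.h>
#include <stdint.h>
#include <sys/types.h>
#include <sys/uio.h>

/* Send one length-prefixed message: 2-byte big-endian length, then
 * payload. Returns 0 on success, -1 if the write was short or failed. */
int send_message(int fd, const void *payload, uint16_t len)
{
    uint16_t netlen = htons(len);
    struct iovec iov[2] = {
        { .iov_base = &netlen,         .iov_len = sizeof(netlen) },
        { .iov_base = (void *)payload, .iov_len = len },
    };
    ssize_t n = writev(fd, iov, 2);
    return n == (ssize_t)(sizeof(netlen) + len) ? 0 : -1;
}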

Linux: send whole message or none of it on TCP socket

I'm sending various custom message structures down a nonblocking TCP socket. I want to send either the whole structure in one send() call, or return an error with no bytes sent if there's only room in the send buffer for part of the message (i.e. send() returns EWOULDBLOCK). If there's not enough room, I will throw away the whole structure and report overflow, but I want to be recoverable after that, i.e. the receiver only ever receives a sequence of valid, complete structures. Is there a way of either checking the send buffer's free space, or telling the send() call to behave as described? Datagram-based sockets aren't an option; it must be connection-based TCP. Thanks.
Linux provides a SIOCOUTQ ioctl() to query how much data is in the TCP output buffer:
http://www.kernel.org/doc/man-pages/online/pages/man7/tcp.7.html
You can use that, plus the value of SO_SNDBUF, to determine whether the outgoing buffer has enough space for any particular message. So strictly speaking, the answer to your question is "yes".
But there are two problems with this approach. First, it is Linux-specific. Second, what are you planning to do when there is not enough space to send your whole message? Loop and call select again? But that will just tell you the socket is ready for writing again, causing you to busy-loop.
For efficiency's sake, you should probably bite the bullet and just deal with partial writes; let the network stack worry about breaking your stream up into packets for optimal throughput.
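As a sketch of the SIOCOUTQ approach, assuming Linux and that an advisory answer is acceptable (SO_SNDBUF counts kernel bookkeeping overhead, so the arithmetic over-estimates the usable space):

#include <linux/sockios.h>   /* SIOCOUTQ (Linux-specific) */
#include <stddef.h>
#include <sys/ioctl.h>
#include <sys/socket.h>

/* Returns 1 if the socket's send buffer appears to have room for
 * `len` more bytes, 0 if not, -1 on error. Advisory only. */
static int send_buffer_has_room(int fd, size_t len)
{
    int queued = 0, sndbuf = 0;
    socklen_t optlen = sizeof(sndbuf);

    if (ioctl(fd, SIOCOUTQ, &queued) < 0)
        return -1;
    if (getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sndbuf, &optlen) < 0)
        return -1;

    return sndbuf - queued >= (int)len;
}

Note the check-then-send sequence is racy only in your favor here: nothing else drains your data into the buffer between the check and your own send().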
TCP has no support for transactions; this is something which you must handle on layer 7 (application).

TCP Message framing + recv() [linux]: Good conventions?

I am trying to create a P2P application on Linux, which I want to run as efficiently as possible.
The issue I have is with managing packets. As we know, there may be more than one packet in the recv() buffer at any time, so there is a need to have some kind of message framing system to make sure that multiple packets are not treated as one big packet.
So at the moment my packet structure is:
(uint16_t packet length):(packet data)
This requires two calls to recv(): one to get the packet size, and one to get the packet data.
There are two main problems with this:
1. A malicious peer could send a packet with a size header claiming something large, but then not send any more data. The application will hang on the second recv(), waiting for data that will never come.
2. Assuming that calling recv() has a noticeable performance penalty (I actually have no idea, correct me if I am wrong), calling recv() twice will slow the program down.
What is the best way to structure the packets and the receiving system for both efficiency and stability? How do other applications do it? What do you recommend?
Thank you in advance.
I think your "framing" of messages within a TCP stream is right on.
You could consider putting a "magic cookie" in front of each frame (e.g. write the 32-bit int 0xdeadbeef at the top of each frame header in addition to the packet length) such that it becomes obvious that you are reading a frame header on the first of each recv() pair. If the magic integer isn't present at the start of the message, you have gotten out of sync and need to tear the connection down.
Multiple recv() calls will not likely be a performance hit. As a matter of fact, because TCP messages can get segmented, coalesced, and stalled in unpredictable ways, you'll likely need to call recv() in a loop until you get all the data you expected. This includes your two-byte header as well as the larger read of the payload bytes. It's entirely possible you call recv() with a 2-byte buffer to read the "size" of the message, but only get 1 byte back (call recv() again, and you'll get the remaining byte). What I tell the developers on my team: code your network parsers as if it were possible that recv() delivered only 1 byte at a time.
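A sketch of that advice - a helper that loops until exactly the requested number of bytes has arrived (for a blocking socket; the name is illustrative):

#include <errno.h>
#include <sys/socket.h>

/* Read exactly `len` bytes, looping over short reads.
 * Returns 0 on success, -1 on error or if the peer closed. */
static int recv_exact(int fd, void *buf, size_t len)
{
    char *p = buf;
    while (len > 0) {
        ssize_t n = recv(fd, p, len, 0);
        if (n == 0)
            return -1;              /* peer closed the connection */
        if (n < 0) {
            if (errno == EINTR)
                continue;           /* interrupted, retry */
            return -1;
        }
        p += n;
        len -= (size_t)n;
    }
    return 0;
}

You would call it once for the 2-byte header and once for the payload.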
You can use non-blocking sockets and the "select" call to avoid hanging. If the data doesn't arrive within a reasonable amount of time (or more data arrives than expected - such that syncing on the next message becomes impossible), you just tear the connection down.
I'm working on a P2P project of my own. Would love to trade notes. Follow up with me offline if you like.
I disagree with the others: TCP is a reliable protocol, so a packet magic header is useless unless you fear that your client code isn't stable or that unsolicited clients connect to your port number.
Create a buffer for each client and use non-blocking sockets and select/poll/epoll/kqueue. If there is data available from a client, read as much as you can - it doesn't matter if you read more "packets" than one. Then check whether you've read enough that the size field is available; if so, check whether you've read the whole packet (or more). If so, process the packet. Then, if there's more data, repeat this procedure. If a partial packet is left over, you can move it to the start of your buffer, or use a circular buffer so you don't have to do those memmoves.
Client timeout can be handled in your select/... loop.
That's what I would use if you're doing something complex with the received packet data. If all you do is write the results to a file (in bigger chunks) then sendfile/splice yields better performance. Just read the packet length (could take multiple reads), then use multiple calls to sendfile until you've read the whole packet (keep track of how much is left to read).
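A skeleton of such a loop using epoll, assuming handle_readable() implements the per-client buffering just described and listen_fd is a bound, listening TCP socket set up elsewhere (both names are illustrative):

#include <sys/epoll.h>
#include <sys/socket.h>

/* Hypothetical handler: read what's available, append to the client's
 * buffer, and process any complete packets found in it. */
void handle_readable(int fd);

#define MAX_EVENTS 64

void event_loop(int listen_fd)
{
    int epfd = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = listen_fd };
    epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev);

    for (;;) {
        struct epoll_event events[MAX_EVENTS];
        int n = epoll_wait(epfd, events, MAX_EVENTS, 1000 /* ms, for timeouts */);
        for (int i = 0; i < n; i++) {
            int fd = events[i].data.fd;
            if (fd == listen_fd) {
                /* new connection: register the client for read events */
                int client = accept(listen_fd, NULL, NULL);
                struct epoll_event cev = { .events = EPOLLIN, .data.fd = client };
                epoll_ctl(epfd, EPOLL_CTL_ADD, client, &cev);
            } else {
                handle_readable(fd);
            }
        }
    }
}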
You can use non-blocking calls to recv() (by creating the socket with SOCK_NONBLOCK, or setting O_NONBLOCK with fcntl()), and wait for the socket to become ready for reading using select() (with a timeout) in a loop.
Then if a file descriptor is in the "waiting for data" state for too long, you can just close the socket.
TCP is a stream-oriented protocol - it doesn't actually have any concept of packets. So, in addition to receiving multiple application-layer packets in one recv() call, you might also receive only part of an application-layer packet, with the remainder coming in a future recv() call.
This implies that robust receiver behaviour is obtained by receiving as much data as possible at each recv() call, then buffering that data in an application-layer buffer until you have at least one full application-layer packet. This also avoids your two-calls-to-recv() problem.
To always receive as much data as possible at each recv(), without blocking, you should use non-blocking sockets and call recv() until it returns -1 with errno set to EWOULDBLOCK.
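A sketch of that drain loop, assuming the socket is already non-blocking and that the caller owns an accumulation buffer (buf, cap, and used are illustrative names):

#include <errno.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Read everything currently available into buf. Returns 1 while the
 * connection is open, 0 if the peer closed, -1 on a real error. */
int drain_socket(int fd, char *buf, size_t cap, size_t *used)
{
    while (*used < cap) {
        ssize_t n = recv(fd, buf + *used, cap - *used, 0);
        if (n > 0) {
            *used += (size_t)n;
        } else if (n == 0) {
            return 0;   /* peer closed the connection */
        } else if (errno == EAGAIN || errno == EWOULDBLOCK) {
            break;      /* nothing more to read right now */
        } else if (errno != EINTR) {
            return -1;  /* real error */
        }
    }
    return 1;           /* caller now extracts complete packets from buf */
}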
As others said, a leading magic number (OT: man file) is a good (99.999999%) solution for identifying datagram boundaries, and a timeout (using non-blocking recv()) is good for detecting missing/late packets.
If you have to account for attackers, you should put a CRC in your packets. If a professional attacker really wants to, he/she will figure out - sooner or later - how your CRC works, but it's still harder than crafting a packet without a CRC. (Also, if safety is critical, you will find SSL libs/examples/code on the Net.)

Winsock 2: thread safety for simultaneous sends (TCP)

Is it possible to have multiple threads sending on the same socket? Will there be interleaving of the streams, or will the socket block on the first thread (assuming TCP)? The majority of opinions I've found seem to warn against doing this for obvious fears of interleaving, but I've also found a few comments that state the opposite. Are interleaving fears a carryover from Winsock 1, and are they well-founded for Winsock 2? Is there a way to set up a Winsock 2 socket that would allow for a lack of local synchronization?
Two of the contrary opinions are below... who's right?
comment 1
"Winsock 2 implementations should be completely thread safe. Simultaneous reads / writes on different threads should succeed, or fail with WSAEINPROGRESS, depending on the setting of the overlapped flag when the socket is created. Anyway by default, overlapped sockets are created; so you don't have to worry about it. Make sure you don't use NT SP6, if ur on SP6a, you should be ok !"
source
comment 2
"The same DLL doesn't get accessed by multiple processes as of the introduction of Windows 95. Each process gets its own copy of the writable data segment for the DLL. The "all processes share" model was the old Win16 model, which is luckily quite dead and buried by now ;-)"
source
Looking forward to your comments!
jim
~edit1~
To clarify what I mean by interleaving: thread 1 sends the msg "Hello", thread 2 sends the msg "world!". The recipient receives: "Hwoel lorld!". This assumes both messages were NOT sent in a while loop. Is this possible?
I'd really advise against doing this in any case. The send functions might send less than you tell them to for various very legitimate reasons, and if another thread can enter and try to send something at the same time, you're just messing up your data.
Now, you can certainly write to a socket from several threads, but you've no longer any control over what gets on the wire unless you've proper locking at the application level.
Consider sending some data:
WSASend(sock, &wsaBuf, 1, &sent, 0, NULL, NULL);
The sent parameter will hold the number of bytes actually sent - similar to the return value of the send() function. To send all the data described by wsaBuf you will have to loop, calling WSASend until all the data has actually been sent.
If, say, the first WSASend sends all but the last 4 bytes, another thread might go and send something while you loop back and try to send the last 4 bytes.
With proper locking to ensure that can't happen, it should be no problem sending from several threads - but I wouldn't do it anyway, just for the pure hell it will be to debug when something does go wrong.
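For illustration, a minimal sketch of such locking: a critical section serializing a send-all loop, so one thread's partial sends can never interleave with another thread's data (g_sendLock and the function name are illustrative; the lock is assumed to be initialized elsewhere):

#include <winsock2.h>

/* Initialize once at startup with InitializeCriticalSection(&g_sendLock). */
CRITICAL_SECTION g_sendLock;

int send_all_locked(SOCKET s, const char *data, int len)
{
    int ret = 0;
    EnterCriticalSection(&g_sendLock);
    while (len > 0) {
        int n = send(s, data, len, 0);
        if (n == SOCKET_ERROR) {
            ret = -1;               /* caller can check WSAGetLastError() */
            break;
        }
        data += n;                  /* advance past the bytes already sent */
        len -= n;
    }
    LeaveCriticalSection(&g_sendLock);
    return ret;
}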
Is it possible to have multiple threads sending on the same socket?
Yes - although, depending on the implementation, this can be more or less visible. First, I'll clarify where I am coming from:
C# / .Net 3.5
System.Net.Sockets.Socket
The overall visibility (i.e. required management) of threading, and the headaches incurred, will depend directly on how the socket is implemented (synchronously or asynchronously). If you go the synchronous route, then you have a lot of work to manually manage connecting, sending, and receiving over multiple threads. I highly recommend avoiding that implementation. The effort required to make the synchronous methods work correctly and efficiently in a threaded model simply is not worth it compared to the effort of implementing the asynchronous methods.
I have implemented an asynchronous Tcp server in less time than it took me to implement the threaded synchronous version. Async is much easier to debug - and if you are intent on Tcp (my favorite choice) then you really have few worries about lost messages, missing data, or the like.
Will there be interleaving of the streams, or will the socket block on the first thread (assuming TCP)?
I had to research interleaved streams (from wiki) to ensure that I was accurate in my understanding of what you are asking. To further understand interleaving and mixed messages, refer to these links on wiki:
Real Time Messaging Protocol
Transmission Control Protocol
Specifically, the power of Tcp is best described in the following section:
Due to network congestion, traffic load balancing, or other unpredictable network behavior, IP packets can be lost, duplicated, or delivered out of order. TCP detects these problems, requests retransmission of lost packets, rearranges out-of-order packets, and even helps minimize network congestion to reduce the occurrence of the other problems. Once the TCP receiver has finally reassembled a perfect copy of the data originally transmitted, it passes that datagram to the application program. Thus, TCP abstracts the application's communication from the underlying networking details.
What this means is that data arriving out of order will be reassembled into exactly the byte stream that the sender's TCP was handed - note, though, that TCP cannot undo any interleaving that happened before the data reached it at the sending end. It is expected that threading is or would be involved in developing a performance-driven Tcp client/server mechanism - whether through async or sync methods.
In order to keep a socket from blocking, you can set its Blocking property to false.
I hope this gives you some good information to work with. Heck, I even learned a little bit...
