I dropped in to ask: if I send, say, two discrete messages with send() (Linux C/C++) and read them out in a poll(2) callback, can it happen that the two writes (packets) are read out as one, or will there be a separate poll event for each message? Note that I use an ioctl to peek at the size of the pending data to be read. So is that always the size of one message, or can it be the size of more?
Edit: socket type is SOCK_STREAM.
With STREAM sockets (I assume you are not using a DGRAM socket?) the messages may be joined, since there are no message boundaries in a stream, or a single message may be split into several parts.
To make the communication reliable, prefix each packet with its length.
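As an illustration, a minimal sketch of the send side of such framing, assuming a connected SOCK_STREAM socket fd; the helper name send_framed is made up:

    #include <arpa/inet.h>   /* htons */
    #include <stdint.h>
    #include <sys/types.h>
    #include <sys/uio.h>     /* writev */

    /* Hypothetical helper: prefix each message with a 16-bit length header
     * so the receiver can recover message boundaries from the byte stream. */
    ssize_t send_framed(int fd, const void *msg, uint16_t len)
    {
        uint16_t hdr = htons(len);              /* network byte order */
        struct iovec iov[2] = {
            { .iov_base = &hdr,        .iov_len = sizeof hdr },
            { .iov_base = (void *)msg, .iov_len = len        },
        };
        return writev(fd, iov, 2);              /* may still write partially */
    }

The receiver then reads the 2-byte header first, and keeps reading until exactly that many payload bytes have arrived.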
The aim is to interact with an OpenEthereum server using JSON-RPC.
The problem is that once connected, I need to react only when receiving data, since the aim is to subscribe to an event, so I need the recv() function to be blocking.
But in that case, if I ask to read more bytes than the server sent, the request will block.
The OpenEthereum server separates its responses with a linefeed (\n) character, but I don't know how that can help.
I know about simply waiting for recv() to time out. But I'm using C++ and IPC to get better latency than my competitors on arbitrage, which also means I need as few context switches as possible.
How to efficiently read a message whose length cannot be determined in advance?
Is there a function for determining how many bytes are left to read on a unix domain socket?
No - just keep doing non-blocking reads until one returns EAGAIN or EWOULDBLOCK.
There may be a platform-specific ioctl or fcntl - but you haven't named a platform, and it's neither portable nor necessary.
How to efficiently read a message whose length cannot be determined in advance?
Just do a non-blocking read into a buffer large enough to contain the largest message you might receive.
I need to react only when receiving data, since the aim is to subscribe to an event, so I need the recv() function to be blocking
You're confusing two things.
How to be notified when the socket becomes readable:
by using select or poll to wait until the socket is readable. Just read their manuals; that's their most common use case.
How to read everything available to read without blocking indefinitely:
by doing non-blocking reads until EWOULDBLOCK or EAGAIN is returned.
There is logically a third step, for stream-based protocols like this, which is correctly managing buffers in case of partial messages. Oh, and actually parsing the messages, but I assume you have a JSON library already.
This is entirely normal, basic UNIX I/O design. It is not an exotic optimization.
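A minimal sketch of those two steps together, assuming a connected non-blocking socket fd; handle_bytes is a placeholder for your own parsing:

    #include <errno.h>
    #include <poll.h>
    #include <unistd.h>

    void handle_bytes(const char *buf, ssize_t n);   /* placeholder parser */

    void event_loop(int fd)
    {
        struct pollfd pfd = { .fd = fd, .events = POLLIN };

        for (;;) {
            if (poll(&pfd, 1, -1) < 0)           /* block until readable */
                continue;                        /* e.g. EINTR: retry */

            for (;;) {                           /* drain all available data */
                char buf[4096];
                ssize_t n = read(fd, buf, sizeof buf);
                if (n > 0)
                    handle_bytes(buf, n);        /* may hold partial messages */
                else if (n == 0)
                    return;                      /* peer closed the connection */
                else if (errno == EAGAIN || errno == EWOULDBLOCK)
                    break;                       /* drained; wait in poll again */
                else if (errno != EINTR)
                    return;                      /* real error */
            }
        }
    }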
I am running a system that has 5 threads, all of which send UDP messages to the same IP and port concurrently.
How does Linux handle this? Is there any risk of receiving mixed messages? I am using the sendto() function to send the UDP messages.
Many thanks.
How does Linux handle this?
It handles it just fine.
Is there any risk of receiving mixed messages?
It's unclear what you mean by 'mixed messages'. As is always the case with UDP, there is no guarantee that the packets will arrive at the destination port in any particular order, and there is no guarantee that they will arrive at all. But if a packet does arrive, the data in it will be identical to the data in a packet that was previously sent. In particular, you don't have to worry about receiving a packet that contains, say, half of the data from one packet and half of the data from another.
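For illustration, a sketch of the situation being described: several threads sharing one UDP socket, where each sendto() emits one intact datagram (the address, port, and message contents are made up):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>

    static int sock;                 /* one UDP socket shared by all threads */
    static struct sockaddr_in dest;

    static void *sender(void *arg)
    {
        char msg[64];
        snprintf(msg, sizeof msg, "hello from thread %ld", (long)arg);
        /* Each sendto() emits exactly one datagram; the kernel never
         * interleaves its bytes with datagrams from other threads. */
        sendto(sock, msg, strlen(msg), 0,
               (struct sockaddr *)&dest, sizeof dest);
        return NULL;
    }

    int main(void)
    {
        sock = socket(AF_INET, SOCK_DGRAM, 0);
        dest.sin_family = AF_INET;
        dest.sin_port = htons(9000);                  /* made-up port */
        inet_pton(AF_INET, "127.0.0.1", &dest.sin_addr);

        pthread_t t[5];
        for (long i = 0; i < 5; i++)
            pthread_create(&t[i], NULL, sender, (void *)i);
        for (int i = 0; i < 5; i++)
            pthread_join(t[i], NULL);
        return 0;
    }

(Compile with -pthread.)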
UDP is a means of delivering a single packet unreliably, in that it makes no guarantee of delivery order, or even that the packet is received at all.
If you need to send data reliably and in order, use TCP, that's what it's for. ;)
I'm sending various custom message structures down a non-blocking TCP socket. I want to either send the whole structure in one send() call, or return an error with no bytes sent if there's only room in the send buffer for part of the message (i.e. send() returns EWOULDBLOCK). If there's not enough room, I will throw away the whole structure and report overflow, but I want to be recoverable after that, i.e. the receiver only ever receives a sequence of valid, complete structures. Is there a way of either checking the send buffer's free space, or telling the send() call to behave as described? Datagram-based sockets aren't an option; it must be connection-based TCP. Thanks.
Linux provides a SIOCOUTQ ioctl() to query how much data is in the TCP output buffer:
http://www.kernel.org/doc/man-pages/online/pages/man7/tcp.7.html
You can use that, plus the value of SO_SNDBUF, to determine whether the outgoing buffer has enough space for any particular message. So strictly speaking, the answer to your question is "yes".
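For reference, a minimal Linux-only sketch of that check; note that SO_SNDBUF includes kernel bookkeeping overhead, so treat the result as an approximation:

    #include <linux/sockios.h>   /* SIOCOUTQ (Linux-specific) */
    #include <sys/ioctl.h>
    #include <sys/socket.h>

    /* Approximate free space in the TCP send buffer, or -1 on error. */
    int sndbuf_free(int fd)
    {
        int queued, sndbuf;
        socklen_t len = sizeof sndbuf;
        if (ioctl(fd, SIOCOUTQ, &queued) < 0)
            return -1;
        if (getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sndbuf, &len) < 0)
            return -1;
        return sndbuf - queued;
    }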
But there are two problems with this approach. First, it is Linux-specific. Second, what are you planning to do when there is not enough space to send your whole message? Loop and call select again? But that will just tell you the socket is ready for writing again, causing you to busy-loop.
For efficiency's sake, you should probably bite the bullet and just deal with partial writes; let the network stack worry about breaking your stream up into packets for optimal throughput.
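A sketch of what "dealing with partial writes" looks like in practice, assuming a non-blocking TCP socket; the caller buffers the unsent tail and retries it once select()/poll() reports the socket writable again:

    #include <errno.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    /* Try to send len bytes; returns how many the kernel accepted.
     * The caller keeps the remaining len - result bytes and retries
     * them when the socket becomes writable again. */
    ssize_t send_some(int fd, const char *buf, size_t len)
    {
        size_t sent = 0;
        while (sent < len) {
            ssize_t n = send(fd, buf + sent, len - sent, MSG_NOSIGNAL);
            if (n > 0)
                sent += (size_t)n;
            else if (errno == EAGAIN || errno == EWOULDBLOCK)
                break;               /* send buffer full; retry later */
            else if (errno != EINTR)
                return -1;           /* real error */
        }
        return (ssize_t)sent;
    }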
TCP has no support for transactions; this is something you must handle at layer 7 (the application).
I am trying to create a P2P application on Linux, and I want it to run as efficiently as possible.
The issue I have is with managing packets. As we know, there may be more than one packet in the recv() buffer at any time, so there needs to be some kind of message-framing system to make sure that multiple packets are not treated as one big packet.
So at the moment my packet structure is:
(uint16 Packet Length):(Packet Data)
Which requires two calls to recv(); one to get the packet size, and one to get the packet.
There are two main problems with this:
1. A malicious peer could send a packet with a large size header, but
then not send any more data. The application will hang on the second
recv(), waiting for data that will never come.
2. Assuming that calling recv() has a noticeable performance penalty
(I actually have no idea; correct me if I am wrong), calling recv() twice
will slow the program down.
What is the best way to structure the packets/receiving system for both the best efficiency and stability? How do other applications do it? What do you recommend?
Thank you in advance.
I think your "framing" of messages within a TCP stream is right on.
You could consider putting a "magic cookie" in front of each frame (e.g. write the 32-bit int 0xdeadbeef at the top of each frame header, in addition to the packet length) so that it becomes obvious that you are reading a frame header on the first of each recv() pair. If the magic integer isn't present at the start of the message, you have gotten out of sync and need to tear the connection down.
Multiple recv() calls will not likely be a performance hit. As a matter of fact, because TCP messages can get segmented, coalesced, and stalled in unpredictable ways, you'll likely need to call recv() in a loop until you get all the data you expected. This includes your two byte header as well as for the larger read of the payload bytes. It's entirely possible you call "recv" with a 2 byte buffer to read the "size" of the message, but only get 1 byte back. (Call recv again, and you'll get the subsequent bytes). What I tell the developers on my team - code your network parsers as if it was possible that recv only delivered 1 byte at a time.
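A sketch of that advice, assuming a blocking socket; the helper name recv_all is made up:

    #include <sys/socket.h>
    #include <sys/types.h>

    /* Loop until exactly len bytes have arrived, since a single recv()
     * may return as little as 1 byte of the header or payload. */
    int recv_all(int fd, char *buf, size_t len)
    {
        size_t got = 0;
        while (got < len) {
            ssize_t n = recv(fd, buf + got, len - got, 0);
            if (n <= 0)
                return -1;           /* error, or peer closed mid-message */
            got += (size_t)n;
        }
        return 0;
    }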
You can use non-blocking sockets and the "select" call to avoid hanging. If the data doesn't arrive within a reasonable amount of time (or more data arrives than expected - such that syncing on the next message becomes impossible), you just tear the connection down.
I'm working on a P2P project of my own. Would love to trade notes. Follow up with me offline if you like.
I disagree with the others, TCP is a reliable protocol, so a packet magic header is useless unless you fear that your client code isn't stable or that unsolicited clients connect to your port number.
Create a buffer for each client and use non-blocking sockets with select/poll/epoll/kqueue. If there is data available from a client, read as much as you can; it doesn't matter if you read more than one "packet". Then check whether you've read enough that the size field is available; if so, check whether you've read the whole packet (or more). If so, process the packet. Then, if there's more data, repeat this procedure. If a partial packet is left over, you can move it to the start of your buffer, or use a circular buffer so you don't have to do those memmove() calls.
Client timeout can be handled in your select/... loop.
That's what I would use if you're doing something complex with the received packet data. If all you do is write the results to a file (in bigger chunks), then sendfile/splice yields better performance. Just read the packet length (which could take multiple reads), then use multiple calls to sendfile until you've read the whole packet (keeping track of how much is left to read).
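A sketch of that buffering approach for a single client, assuming the 2-byte length header described in the question; the buffer size and names are illustrative:

    #include <arpa/inet.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/socket.h>

    #define BUF_SIZE 65536

    struct client {
        char   buf[BUF_SIZE];
        size_t used;                 /* bytes currently buffered */
    };

    void handle_packet(const char *data, uint16_t len);   /* placeholder */

    /* Call when select/poll/epoll reports this client readable. */
    void on_readable(int fd, struct client *c)
    {
        ssize_t n = recv(fd, c->buf + c->used, BUF_SIZE - c->used, 0);
        if (n <= 0)
            return;                  /* EAGAIN, error or close: handle elsewhere */
        c->used += (size_t)n;

        /* Extract as many complete packets as the buffer now holds. */
        size_t off = 0;
        while (c->used - off >= 2) {
            uint16_t len;
            memcpy(&len, c->buf + off, 2);
            len = ntohs(len);
            if (c->used - off < 2u + len)
                break;               /* partial packet; wait for more data */
            handle_packet(c->buf + off + 2, len);
            off += 2u + len;
        }
        /* Move any leftover partial packet to the front of the buffer. */
        memmove(c->buf, c->buf + off, c->used - off);
        c->used -= off;
    }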
You can use non-blocking calls to recv() (by passing SOCK_NONBLOCK to socket(), or setting O_NONBLOCK with fcntl()), and wait for the sockets to become ready for reading using select() (with a timeout) in a loop.
Then if a file descriptor is in the "waiting for data" state for too long, you can just close the socket.
TCP is a stream-oriented protocol; it doesn't actually have any concept of packets. So, in addition to receiving multiple application-layer packets in one recv() call, you might also receive only part of an application-layer packet, with the remainder coming in a future recv() call.
This implies that robust receiver behaviour is obtained by receiving as much data as possible at each recv() call, then buffering that data in an application-layer buffer until you have at least one full application-layer packet. This also avoids your two-calls-to-recv() problem.
To always receive as much data as possible at each recv(), without blocking, you should use non-blocking sockets and call recv() until it returns -1 with errno set to EWOULDBLOCK.
As others said, a leading magic number (OT: see man file) is a good (99.999999%) solution for identifying datagram boundaries, and a timeout (using non-blocking recv()) is good for detecting missing/late packets.
If you expect attackers, you should put a CRC in your packet. If a professional attacker really wants to, he/she will figure out, sooner or later, how your CRC works, but it's still harder than forging a packet without a CRC. (Also, if security is critical, you will find SSL libs/examples/code on the Net.)
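For illustration, one possible frame header combining those ideas; the field widths are arbitrary and the CRC is assumed to be computed elsewhere:

    #include <stdint.h>

    #define FRAME_MAGIC 0xdeadbeefu   /* resync marker, as suggested above */

    /* Illustrative wire header, all fields in network byte order. */
    struct frame_header {
        uint32_t magic;    /* always FRAME_MAGIC; tear down on mismatch */
        uint16_t length;   /* payload bytes following this header */
        uint32_t crc32;    /* checksum of the payload */
    } __attribute__((packed));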
What is the difference between message queues and a pipe in Linux?
Off the top of my head, and assuming you're talking about POSIX message queues (not the SysV ones):
Pipes aren't limited in size; message queues are (both in message size and queue length).
Pipes can be integrated in systems using file descriptors; message queues have their own set of functions, though Linux supports select(), poll(), epoll() and friends on the mqd_t.
Pipes, once closed, require some amount of cooperation on both sides to re-establish them; message queues can be closed and reopened on either side without the cooperation of the other side.
Pipes are flat, much like a stream; to impose a message structure you would have to implement a protocol on both sides. Message queues are message-oriented already; no care has to be taken to get, say, the fifth message in the queue.
They are very different things, really.
The biggest practical difference is that a pipe doesn't have the notion of "messages"; it's just a pipe you write() bytes to and read() bytes from. The receiving end must have a way to know what piece of data constitutes a "message" in your program, and you must implement that yourself. Furthermore, the order of bytes is defined: bytes will come out in the order you put them in. And, generally speaking, it has one input and one output.
A message queue is used to transfer "messages", which have a type and size. So the receiving end can just wait for one "message" of a certain type, and you don't have to worry about whether it is complete or not. Several processes may send to and receive from the same queue.
see man mq_overview and/or man svipc for more information.
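As a minimal illustration of the message-oriented API, a POSIX queue sketch (the queue name is made up; link with -lrt):

    #include <fcntl.h>
    #include <mqueue.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>

    int main(void)
    {
        struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 128 };

        /* Unlike a pipe, the queue carries discrete messages with priorities. */
        mqd_t q = mq_open("/demo_queue", O_CREAT | O_RDWR, 0600, &attr);
        if (q == (mqd_t)-1) { perror("mq_open"); return 1; }

        mq_send(q, "hello", 5, 0);                  /* one message, priority 0 */

        char buf[128];                              /* must be >= mq_msgsize */
        unsigned prio;
        ssize_t n = mq_receive(q, buf, sizeof buf, &prio);  /* whole message */
        if (n >= 0)
            printf("got %zd bytes: %.*s\n", n, (int)n, buf);

        mq_close(q);
        mq_unlink("/demo_queue");
        return 0;
    }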