Linux CAN bus transmission timeout

Scenario
There is a Linux-powered device connected to a CAN bus. The device periodically transmits a CAN message. The data carried by this message is measurement-like rather than command-like, i.e. only the most recent value is actually valid, and if some messages are lost that is not an issue as long as the latest one is received successfully.
Then the device in question is disconnected from the CAN bus for an amount of time that is much longer than the interval between subsequent message transmissions. The device logic keeps trying to transmit the messages, but since the bus is disconnected the CAN controller is unable to transmit any of them, so the messages accumulate in the TX queue.
Some time later the CAN bus connection is restored, and all the accumulated messages are pushed onto the bus one by one.
Problem
When the CAN bus connection is restored, an undefined number of outdated messages will be transmitted from the TX queue.
While the CAN bus connection is still unavailable but the TX queue is already full, the most recent messages (i.e. the only valid ones) will be discarded.
Once the CAN bus connection is restored, there will be a short-term traffic burst while the TX queue is being flushed. This can disturb the Time Triggered Bus Scheduling if one is used (as it is in my case).
Question
My application uses the SocketCAN driver, so the question primarily concerns SocketCAN, but other options are welcome too if there are any.
I see two possible solutions: define a message transmission timeout (if a message was not transmitted within some predefined amount of time, it is discarded automatically), or abort transmission of outdated messages manually (though I doubt this is possible at all with the socket API).
Since the first option seems the most realistic to me, the question is:
How does one define a TX timeout for a CAN interface under Linux?
Are there other options to solve the problems described above, aside from TX timeouts?

My solution for this problem was to shut the device down and bring it up again:

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static bool queue_cleared = false;

void clear_device_queue(void)
{
    if (!queue_cleared)
    {
        /* Interface name comes from the environment, e.g. "can0". */
        const char *dev = getenv("MOTOR_CAN_DEVICE");
        char cmd[1024];

        if (dev == NULL)
            return;

        /* Taking the interface down drops everything in its TX queue. */
        snprintf(cmd, sizeof cmd, "sudo ip link set down %s", dev);
        system(cmd);
        usleep(500000);
        snprintf(cmd, sizeof cmd, "sudo ip link set up %s", dev);
        system(cmd);

        queue_cleared = true;
    }
}

I don't know the internals of SocketCAN, but I think the larger part of the problem should be solved on a more general, logical level.
Before that, there is one aspect to clarify:
The question includes the tag safety-critical...
If the CAN communication is not relevant to implementing a safety function, you can pick any solution you find useful. There may be parts of the second alternative which are useful for you in this case too, but those are not mandatory.
If the communication is, however, used in a safety-relevant context, there must be a concept that takes into account the requirements imposed by IEC 61508 (safety of programmable electronic systems in general) and IEC 61784-x/62280 (safe communication protocols).
Those standards usually lead to some protocol measures that come in handy with any embedded communication, but especially for the present problem:
Add a sequence counter to the protocol frames.
The receiver shall monitor that the counter values it sees don't make larger "jumps" than allowed (e.g., if you allow 2 frames to be missed along the way, the maximum counter increment is +3; the CAN bus may duplicate a frame, so a counter increment of +0 must be tolerated, too).
The receiver must monitor that every received frame is followed by another within a timeout period. If your CAN connection is lost and recovered in the meantime, it depends on whether the interruption was longer than the timeout or within it. (A sketch of both checks follows this list.)
Additionally, the receiver may monitor that a frame doesn't follow the preceding one too early, but if the frames include the right data, this usually isn't necessary.
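A minimal receiver-side sketch of these two checks might look like the following; the counter width, the allowed jump of +3 and the 100 ms window are assumptions for illustration, not values taken from the question:

#include <stdbool.h>
#include <stdint.h>

#define MAX_COUNTER_JUMP 3u     /* up to 2 missed frames tolerated */
#define RX_TIMEOUT_MS    100u   /* assumed monitoring window */

/* Call for every received frame; `counter` is the sequence counter carried in
   the frame, `now_ms` a monotonic timestamp in milliseconds. Returns false if
   the frame stream violates the rules and the safe state should be entered. */
bool check_frame(uint8_t counter, uint32_t now_ms,
                 uint8_t *last_counter, uint32_t *last_rx_ms)
{
    uint8_t jump = (uint8_t)(counter - *last_counter);  /* wraps at 256 */
    bool ok = true;

    if (jump > MAX_COUNTER_JUMP)                /* too many frames missed        */
        ok = false;                             /* +0 (duplicated frame) passes  */

    if (now_ms - *last_rx_ms > RX_TIMEOUT_MS)   /* previous frame arrived too long ago */
        ok = false;

    *last_counter = counter;
    *last_rx_ms   = now_ms;
    return ok;
}

In a real implementation the timeout would normally also be checked cyclically, so that a completely silent bus is detected even when no frame arrives at all.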
[...] The nature of the data carried by this message is like measurement rather than command, i.e. only the most recent one is actually valid, and if some messages are lost that is not an issue as long as the latest one was received successfully.
Through CAN, you shall never communicate "commands" in the sense that each one of them triggers a change, like "toggle output state" or "increment set value by one unit", because you never know whether frame duplication hits you or not.
Besides, you shall never communicate anything safety-relevant through a single frame, because any frame may be lost or corrupted by an error. Instead, "commands" shall be transferred (like measurements) as a stream of periodic frames with measurement or set-value updates.
Now, in order to get the required availability out of the protocol design, the TX queue shouldn't be long. If you actually feel you need that queue, it could be that the bus is overloaded compared to the timing requirements it faces. From my point of view, the TX "queue" shouldn't be longer than one or two frames. Then, the problem of recovering the CAN connection is nearly fixed...
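If you follow the short-queue advice with SocketCAN, the interface's queue length can be reduced with "ip link set can0 txqueuelen 1", or programmatically via an ioctl. A sketch of the latter, assuming the interface name is known and noting that how strictly this limits queued frames depends on the qdisc attached to the interface:

#include <linux/sockios.h>   /* SIOCSIFTXQLEN */
#include <net/if.h>
#include <string.h>
#include <sys/ioctl.h>

/* Shrink the TX queue length of a network interface (here a SocketCAN one)
   so that at most `qlen` frames can pile up while the bus is unavailable.
   Roughly equivalent to "ip link set <ifname> txqueuelen <qlen>";
   requires CAP_NET_ADMIN. `sock` is any open socket, e.g. the CAN_RAW socket. */
int set_txqueuelen(int sock, const char *ifname, int qlen)
{
    struct ifreq ifr;
    memset(&ifr, 0, sizeof ifr);
    strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
    ifr.ifr_qlen = qlen;
    return ioctl(sock, SIOCSIFTXQLEN, &ifr);   /* 0 on success, -1 on error */
}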

Related

TCP close() vs shutdown() in Linux OS

I know there are already a lot of similar questions on Stack Overflow, but nothing seems convincing. Basically, I am trying to understand under what circumstances I need to use one over the other, or use both.
I would also like to understand whether close() and shutdown() with SHUT_RDWR are the same.
Closing TCP connections has gathered so much confusion that we can rightfully say either this aspect of TCP has been poorly designed, or it is poorly documented.
Short answer
To do it the proper way, you should use all 3: shutdown(SHUT_WR), shutdown(SHUT_RD) and close(), in this order. No, shutdown(SHUT_RDWR) and close() are not the same. Read their documentation carefully, along with questions on SO and articles about the topic; you need to read several of them to get an overview.
Longer answer
The first thing to clarify is what you aim for when closing a connection. Presumably you use TCP for a higher-level protocol (request-response, a steady stream of data, etc.). Once you decide to "close" (terminate) the connection, you have already sent and received everything you had to (otherwise you would not decide to terminate), so what more do you want? I'm trying to outline what you may want at the time of termination:
1. to know that all data sent in either direction reached the peer
2. if there are any errors (in transmitting the data that was in the process of being sent when you decided to terminate, as well as after that, and in doing the termination itself, which also requires data being sent/received), the application is informed
3. optionally, some applications want to be non-blocking up to and including the termination
Unfortunately TCP doesn't make these features easily available, and the user needs to understand what's under the hood and how the system calls interact with what's under the hood. A key sentence is in the recv manpage:
When a stream socket peer has performed an orderly shutdown, the
return value will be 0 (the traditional "end-of-file" return).
What the manpage means here is that orderly shutdown is done by one end (A) choosing to call shutdown(SHUT_WR), which causes a FIN packet to be sent to the peer (B), and this packet takes the form of a 0 return code from recv inside B. (Note: the FIN packet, being an implementation aspect, is not mentioned by the manpage.) The "EOF", as the manpage calls it, means there will be no more transmission from A to B, but application B can, and should, continue to send what it was in the process of sending, and even send some more, potentially (A is still receiving). When that sending is done (shortly), B should itself call shutdown(SHUT_WR) to close the other half of the duplex. Now app A receives EOF and all transmission has ceased. The two apps are OK to call shutdown(SHUT_RD) to close their sockets for reading and then close() to free the system resources associated with the socket (TODO: I haven't found clear documentation that says the 2 calls to shutdown(SHUT_RD) are sending the ACKs in the termination sequence FIN --> ACK, FIN --> ACK, but this seems logical).
Onwards to our aims: for (1) and (2), basically the application must somehow wait for the shutdown sequence to happen and observe its outcome. Notice how, if we follow the small protocol above, it is clear to both apps that the termination initiator (A) has sent everything to B. This is because B received EOF (and EOF is received only after everything else). A also received EOF, which is issued in reply to its own EOF, so A knows B received everything (there is a caveat here: the termination protocol must have a convention of who initiates the termination, so that both peers don't do so at once). However, the reverse is not true. After B calls shutdown(SHUT_WR), there is nothing coming back at app level to tell B that A received all the data sent, plus the FIN (the A->B transmission had ceased!). Correct me if I'm wrong, but I believe at this stage B is in state "LAST_ACK", and when the final ACK arrives (step #4 of the 4-way handshake), it concludes the close, but the application is not informed unless it had set SO_LINGER with a long-enough timeout. SO_LINGER "ON" instructs the shutdown call to block (be performed in the foreground), hence the shutdown call itself will do the waiting.
In conclusion what I recommend is to configure SO_LINGER ON with a long timeout, which causes it to block and hence return any errors. What is not entirely clear is whether it is shutdown(SHUT_WR) or shutdown(SHUT_RD) which blocks in expectation of the LAST_ACK, but that is of less importance as we need to call both.
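Put together as code, the recommended sequence might look roughly like this; a sketch for a connected, blocking socket fd, with the 10-second linger value being an arbitrary choice and error handling abbreviated:

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static int orderly_close(int fd)
{
    struct linger lg = { .l_onoff = 1, .l_linger = 10 };   /* block up to 10 s */
    setsockopt(fd, SOL_SOCKET, SO_LINGER, &lg, sizeof lg);

    /* We are the initiator: stop sending, which emits a FIN. */
    if (shutdown(fd, SHUT_WR) < 0)
        perror("shutdown(SHUT_WR)");

    /* Drain anything the peer still sends until it shuts down its side
       (recv() returning 0 is the "EOF" the manpage talks about). */
    char buf[4096];
    ssize_t n;
    while ((n = recv(fd, buf, sizeof buf, 0)) > 0)
        ;                               /* process or discard remaining data */
    if (n < 0)
        perror("recv");

    shutdown(fd, SHUT_RD);              /* we will not read any more */
    return close(fd);                   /* with SO_LINGER on, may block and report errors */
}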
Blocking on shutdown is problematic for requirement #3 above where e.g. you have a single-threaded design that serves all connections. Using SO_LINGER may block all connections on the termination of one of them. I see 3 routes to address the problem:
1. shutdown with LINGER, from a different thread. This will of course complicate the design.
2. linger in the background and either:
2A. "Promote" FIN and FIN2 to app-level messages which you can read and hence wait for. This basically moves the problem that TCP was meant to solve one level higher, which I consider hack-ish, also because the ensuing shutdown calls may still end in limbo.
2B. Try to find a lower-level facility such as the SIOCOUTQ ioctl described here, which queries the number of unACKed bytes in the network stack. The caveats are many: this is Linux-specific, we are not sure whether it applies to FIN ACKs (to know whether closing is fully done), plus you'd need to poll that periodically, which is complicated. Overall I'm leaning towards option 1.
I tried to write a comprehensive summary of the issue, corrections/additions welcome.
TCP sockets are bidirectional - you send and receive over the one socket. close() stops communication in both directions. shutdown() provides another parameter that allows you to specify which direction you might want to stop using.
Another difference (between close() and shutdown(rw)) is that close() will keep the socket open if another process is using it, while shutdown() shuts down the socket irrespective of other processes.
shutdown() is often used by clients to provide framing - to indicate the end of their request. For example, an echo service might buffer up what it receives until the client shutdown()s its send side, which tells the server that the client has finished; the server then replies. The client can still receive the reply because it has only shut down writing, not reading, on its socket.
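As a sketch of that framing pattern from the client side (request_response is a made-up name for illustration, fd is assumed to be a connected TCP socket, and error handling is minimal):

#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

static ssize_t request_response(int fd, const char *req, size_t req_len,
                                char *reply, size_t reply_cap)
{
    size_t off = 0;
    while (off < req_len) {                     /* send() may be partial */
        ssize_t n = send(fd, req + off, req_len - off, 0);
        if (n < 0)
            return -1;
        off += (size_t)n;
    }

    shutdown(fd, SHUT_WR);                      /* "that's all from me" */

    size_t got = 0;
    for (;;) {                                  /* read until the server closes */
        ssize_t n = recv(fd, reply + got, reply_cap - got, 0);
        if (n < 0)
            return -1;
        if (n == 0)
            break;                              /* EOF: reply complete */
        got += (size_t)n;
        if (got == reply_cap)
            break;                              /* reply buffer full */
    }
    return (ssize_t)got;
}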
close() will close both the sending and the receiving end of the socket. If you want to close only the sending part and not the receiving part, or vice versa, you can use shutdown().
close()------->closes both the sending and the receiving end.
shutdown()------->closes only the sending or the receiving end, depending on its argument:
SHUT_RD (shut down the reading end (receiving end))
SHUT_WR (shut down the writing end (sending end))
SHUT_RDWR (shut down both)

How to check if message is dropped due to HWM at send in ZeroMQ PUB-SUB pattern

I have implemented a message bus in Linux for IPC using ZeroMQ (more specifically CZMQ). Here is what I have implemented.
My question is, how do I know that send dropped the message when the publisher buffer is full?
In my simple test setup, I am using a publisher-subscriber with a proxy. I have a fast sender and a very slow receiver, causing messages to hit the HWM and be dropped on send. My expectation was that send would fail with a 'message dropped' error, but that is not the case: zmq_msg_send() is not giving me any error even though the messages get dropped (I can verify this by seeing gaps in the messages on the subscriber end).
How can I know when the messages get dropped? If this is the intended behaviour and ZeroMQ does not let us know, what is a workaround to find out whether my send dropped the message?
What you appear to be asking for is fault tolerance, for which PUB/SUB isn't ideal. Not only may the HWM be reached, but consider what happens if a subscribing client dies and gets restarted: it will miss messages sent by the publisher for the duration. FWIW, in ZMQ v2 the default HWM was infinite for PUB/SUB, but it got changed to 1000 in v3 because systems were choking for memory due to messages being queued faster than they could be sent. The 1000 seemed like a reasonable value for bursts of messages when the average message rate was within the network bandwidth. YMMV.
If you just want to know when messages get dropped, it's as simple as adding an incrementing message number to the message and having the subscribers monitor that. You could choose to place this number in its own frame or not; overall simplicity will be the decider. I don't believe it's possible to determine when messages get dropped specifically because the HWM has been reached.
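A rough sketch of that idea, using the plain libzmq C API rather than CZMQ; the message layout with a leading decimal counter and a '|' separator is just an illustration:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <zmq.h>

/* publisher side: prefix every message with an incrementing sequence number */
void publish(void *pub, uint32_t *seq, const char *payload)
{
    char buf[256];
    int len = snprintf(buf, sizeof buf, "%u|%s", (*seq)++, payload);
    zmq_send(pub, buf, (size_t)len, 0);
}

/* subscriber side: flag any gap in the sequence numbers */
void consume(void *sub)
{
    uint32_t expected = 0;
    for (;;) {
        char buf[256];
        int n = zmq_recv(sub, buf, sizeof buf - 1, 0);
        if (n < 0)
            break;                              /* interrupted or closed */
        if (n > (int)sizeof buf - 1)
            n = sizeof buf - 1;                 /* message was truncated */
        buf[n] = '\0';
        uint32_t seq = (uint32_t)strtoul(buf, NULL, 10);
        if (seq != expected)
            fprintf(stderr, "gap: expected %u, got %u (%u dropped)\n",
                    expected, seq, seq - expected);
        expected = seq + 1;
    }
}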
Recent versions of ZeroMQ default to a high-water mark (ZMQ_SNDHWM/ZMQ_RCVHWM) of 1000 messages for pub/sub.
What this means is that if you burst more than 1000 messages in a tight loop, it will probably drop some. It is simple to write a test and give each message a payload with a sequence number.
One option is to set both HWMs to 0, which means the queue is unbounded.
You can play about with this using some examples I wrote recently:
https://gist.github.com/easytiger/992b3a29eb5c8545d289
https://gist.github.com/easytiger/e382502badab49856357
They will pub and sub on a TCP port in a burst of messages. If you play with the HWM you can see, for big bursts, that if it isn't 0 a great many messages will be dropped.

TCP Message framing + recv() [linux]: Good conventions?

I am trying to create a p2p application on Linux, which I want to run as efficiently as possible.
The issue I have is with managing packets. As we know, there may be more than one packet in the recv() buffer at any time, so there is a need to have some kind of message framing system to make sure that multiple packets are not treated as one big packet.
So at the moment my packet structure is:
(u16int Packet Length):(Packet Data)
Which requires two calls to recv(); one to get the packet size, and one to get the packet.
There are two main problems with this:
1. A malicious peer could send a packet with a size header of something large, but not send any more data. The application will hang on the second recv(), waiting for data that will never come.
2. Assuming that calling recv() has a noticeable performance penalty (I actually have no idea, correct me if I am wrong), calling recv() twice will slow the program down.
What is the best way to structure the packets/receiving system for both the best efficiency and stability? How do other applications do it? What do you recommend?
Thank you in advance.
I think your "framing" of messages within a TCP stream is right on.
You could consider putting a "magic cookie" in front of each frame (e.g. write the 32-bit int 0xdeadbeef at the top of each frame header, in addition to the packet length) so that it becomes obvious that you are reading a frame header on the first of each recv() pair. If the magic integer isn't present at the start of the message, you have gotten out of sync and need to tear the connection down.
Multiple recv() calls will not likely be a performance hit. As a matter of fact, because TCP messages can get segmented, coalesced, and stalled in unpredictable ways, you'll likely need to call recv() in a loop until you get all the data you expected. This includes your two-byte header as well as the larger read of the payload bytes. It's entirely possible that you call recv() with a 2-byte buffer to read the "size" of the message, but only get 1 byte back. (Call recv again, and you'll get the subsequent bytes.) What I tell the developers on my team: code your network parsers as if it were possible that recv only delivered 1 byte at a time.
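A small helper in that spirit, which loops until it has read an exact number of bytes from a blocking socket (recv_exact is a made-up name, error handling is minimal):

#include <sys/socket.h>
#include <sys/types.h>

static int recv_exact(int fd, void *buf, size_t len)
{
    char *p = buf;
    size_t got = 0;
    while (got < len) {
        ssize_t n = recv(fd, p + got, len - got, 0);
        if (n == 0)
            return 0;          /* peer closed before the full frame arrived */
        if (n < 0)
            return -1;         /* error (check errno) */
        got += (size_t)n;
    }
    return 1;                  /* got exactly len bytes */
}

/* usage sketch: read the 2-byte length header, then the payload
     uint16_t nlen;
     recv_exact(fd, &nlen, sizeof nlen);
     uint16_t len = ntohs(nlen);
     recv_exact(fd, payload, len);                                          */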
You can use non-blocking sockets and the "select" call to avoid hanging. If the data doesn't arrive within a reasonable amount of time (or more data arrives than expected - such that syncing on the next message becomes impossible), you just tear the connection down.
I'm working on a P2P project of my own. Would love to trade notes. Follow up with me offline if you like.
I disagree with the others: TCP is a reliable protocol, so a magic packet header is useless unless you fear that your client code isn't stable or that unsolicited clients connect to your port number.
Create a buffer for each client and use non-blocking sockets and select/poll/epoll/kqueue. If there is data available from a client, read as much as you can; it doesn't matter if you read more than one "packet". Then check whether you've read enough that the size field is available; if so, check whether you've read the whole packet (or more). If so, process the packet. Then, if there's more data, you can repeat this procedure. If there is a partial packet left, you can move it to the start of your buffer, or use a circular buffer so you don't have to do those memmoves.
Client timeout can be handled in your select/... loop.
That's what I would use if you're doing something complex with the received packet data. If all you do is write the results to a file (in bigger chunks), then sendfile/splice yields better performance. Just read the packet length (possibly in multiple reads), then use multiple calls to sendfile until you've read the whole packet (keep track of how much is left to read).
You can use non-blocking calls to recv() (by creating the socket with SOCK_NONBLOCK or setting O_NONBLOCK with fcntl()), and wait for the socket to become ready for reading using select() (with a timeout) in a loop.
Then if a file descriptor is in the "waiting for data" state for too long, you can just close the socket.
TCP is a stream-oriented protocol - it doesn't actually have any concept of packets. So, in addition to receiving multiple application-layer packets in one recv() call, you might also receive only part of an application-layer packet, with the remainder coming in a future recv() call.
This implies that robust receiver behaviour is obtained by receiving as much data as possible at each recv() call, then buffering that data in an application-layer buffer until you have at least one full application-layer packet. This also avoids your two-calls-to-recv() problem.
To always receive as much data as possible at each recv(), without blocking, you should use non-blocking sockets and call recv() until it returns -1 with errno set to EWOULDBLOCK.
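A rough sketch of that receive loop, assuming the socket has already been made non-blocking and acc/used/cap describe a simple application-layer accumulation buffer (all names are illustrative):

#include <errno.h>
#include <sys/socket.h>
#include <sys/types.h>

static int drain_socket(int fd, char *acc, size_t *used, size_t cap)
{
    for (;;) {
        ssize_t n = recv(fd, acc + *used, cap - *used, 0);
        if (n > 0) {
            *used += (size_t)n;      /* got some data, keep reading */
            if (*used == cap)
                return 1;            /* buffer full, let the caller parse */
            continue;
        }
        if (n == 0)
            return 0;                /* peer closed the connection */
        if (errno == EWOULDBLOCK || errno == EAGAIN)
            return 1;                /* nothing more available right now */
        return -1;                   /* real error */
    }
}
/* afterwards, the caller parses complete length-prefixed packets out of acc */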
As others said, a leading magic number (OT: man file) is a good (99.999999%) solution for identifying datagram boundaries, and a timeout (using non-blocking recv()) is good for detecting missing/late packets.
If you have to reckon with attackers, you should put a CRC in your packets. If a professional attacker really wants to, he/she will figure out, sooner or later, how your CRC works, but it's still harder than forging a packet without a CRC. (Also, if safety is critical, you will find SSL libs/examples/code on the Net.)

realtime midi input and synchronisation with audio

I have built a standalone app version of a project that until now was just a VST/audiounit. I am providing audio support via rtaudio.
I would like to add MIDI support using rtmidi but it's not clear to me how to synchronise the audio and MIDI parts.
In VST/audiounit land, I am used to MIDI events that have a timestamp indicating their offset in samples from the start of the audio block.
rtmidi provides a delta time in seconds since the previous event, but I am not sure how I should grab those events and how I can work out their time in relation to the current sample in the audio thread.
How do plugin hosts do this?
I can understand how events can be sample accurate on playback, but it's not clear how they could be sample accurate when using realtime input.
rtaudio gives me a callback function. I will run at a low block size (32 samples). I guess I will pass a pointer to an rtmidi instance as the userdata part of the callback and then call midiin->getMessage( &message ); inside the audio callback, but I am not sure if this is thread-sensible.
Many thanks for any tips you can give me
In your case, you don't need to worry about it. Your program should send the MIDI events to the plugin with a timestamp of zero as soon as they arrive. I think you have perhaps misunderstood the idea behind what it means to be "sample accurate".
As @Brad noted in his comment to your question, MIDI is indeed very slow. But that's only part of the problem... when you are working in a block-based environment, incoming MIDI events cannot be processed by the plugin until the start of a block. When computers were slower and block sizes of 512 (or, god forbid, >1024) were common, this introduced a non-trivial amount of latency which resulted in the arrangement not sounding as "tight". Therefore sequencers came up with a clever way to get around this problem. Since the MIDI events are already known ahead of time, these events can be sent to the instrument one block early with an offset in sample frames. The plugin then receives these events at the start of the block and knows not to start actually processing them until N samples have passed. This is what "sample accurate" means in sequencers.
However, if you are dealing with live input from a keyboard or some sort of other MIDI device, there is no way to "schedule" these events. In fact, by the time you receive them, the clock is already ticking! Therefore these events should just be sent to the plugin at the start of the very next block with an offset of 0. Sequencers such as Ableton Live, which allow a plugin to simultaneously receive both pre-sequenced and live events, simply send any live events with an offset of 0 frames.
Since you are using a very small block size, the worst-case scenario is a latency of .7ms, which isn't too bad at all. In the case of rtmidi, the timestamp does not represent an offset which you need to schedule around, but rather the time which the event was captured. But since you only intend to receive live events (you aren't writing a sequencer, are you?), you can simply pass any incoming MIDI to the plugin right away.

winsock 2. thread safety for simultaneous send's. tcp

is it possible to have multiple threads sending on the same socket? will there be interleaving of the streams or will the socket block on the first thread (assuming tcp)? the majority of opinions i've found seems to warn against doing this for obvious fears of interleaving, but i've also found a few comments that state the opposite. are interleaving fears a carryover from winsock1 and are they well-founded for winsock2? is there a way to setup a winsock2 socket that would allow for lack of local synchronization?
two of the contrary opinions below... who's right?
comment 1
"Winsock 2 implementations should be completely thread safe. Simultaneous reads / writes on different threads should succeed, or fail with WSAEINPROGRESS, depending on the setting of the overlapped flag when the socket is created. Anyway by default, overlapped sockets are created; so you don't have to worry about it. Make sure you don't use NT SP6, if ur on SP6a, you should be ok !"
source
comment 2
"The same DLL doesn't get accessed by multiple processes as of the introduction of Windows 95. Each process gets its own copy of the writable data segment for the DLL. The "all processes share" model was the old Win16 model, which is luckily quite dead and buried by now ;-)"
source
looking forward to your comments!
jim
~edit1~
to clarify what i mean by interleaving. thread 1 sends the msg "Hello" thread 2 sends the msg "world!". recipient receives: "Hwoel lorld!". this assumes both messages were NOT sent in a while loop. is this possible?
I'd really advise against doing this in any case. The send functions might send less than you tell them to for various very legitimate reasons, and if another thread can enter and also try to send something, you're just messing up your data.
Now, you can certainly write to a socket from several threads, but you no longer have any control over what gets on the wire unless you have proper locking at the application level.
consider sending some data:
WSASend(sock, buf, buflen, &sent, 0, 0, 0);
The sent parameter will hold the number of bytes actually sent - similar to the return value of the send() function. To send all the data in buf you will have to loop, doing a WSASend until all the data has actually been sent.
If, say, the first WSASend sends all but the last 4 bytes, another thread might go and send something while you loop back and try to send the last 4 bytes.
With proper locking to ensure that can't happen, it should be no problem sending from several threads - I wouldn't do it anyway, just for the pure hell it will be to debug when something does go wrong.
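A sketch of what that application-level locking could look like with plain Winsock calls; the helper name and the single global critical section are my own choices, not taken from the answer:

#include <winsock2.h>
#include <windows.h>

static CRITICAL_SECTION g_send_lock;    /* InitializeCriticalSection() at startup */

static int send_all_locked(SOCKET s, const char *buf, int len)
{
    int sent_total = 0;
    EnterCriticalSection(&g_send_lock);
    while (sent_total < len) {
        int n = send(s, buf + sent_total, len - sent_total, 0);
        if (n == SOCKET_ERROR) {
            LeaveCriticalSection(&g_send_lock);
            return SOCKET_ERROR;        /* call WSAGetLastError() for details */
        }
        sent_total += n;
    }
    LeaveCriticalSection(&g_send_lock);
    return sent_total;                  /* the whole message went out contiguously */
}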
is it possible to have multiple threads sending on the same socket?
Yes - although, depending on implementation this can be more or less visible. First, I'll clarify where I am coming from:
C# / .Net 3.5
System.Net.Sockets.Socket
The overall visibility (i.e. required management) of threading and the headaches incurred will depend directly on how the socket is implemented (synchronously or asynchronously). If you go the synchronous route, then you have a lot of work to manually manage connecting, sending, and receiving over multiple threads. I highly recommend that this implementation be avoided. The effort to perform the synchronous methods correctly and efficiently in a threaded model simply is not worth it compared to the effort of implementing the asynchronous methods.
I have implemented an asynchronous Tcp server in less time than it took me to implement the threaded synchronous version. Async is much easier to debug - and if you are intent on Tcp (my favorite choice), then you really have few worries about lost messages, missing data, or whatever.
will there be interleaving of the streams or will the socket block on the first thread (assuming tcp)?
I had to research interleaved streams (from wiki) to ensure that I was accurate in my understanding of what you are asking. To further understand interleaving and mixed messages, refer to these links on wiki:
Real Time Messaging Protocol
Transmission Control Protocol
Specifically, the power of Tcp is best described in the following section:
Due to network congestion, traffic load balancing, or other unpredictable network behavior, IP packets can be lost, duplicated, or delivered out of order. TCP detects these problems, requests retransmission of lost packets, rearranges out-of-order packets, and even helps minimize network congestion to reduce the occurrence of the other problems. Once the TCP receiver has finally reassembled a perfect copy of the data originally transmitted, it passes that datagram to the application program. Thus, TCP abstracts the application's communication from the underlying networking details.
What this means is that interleaved messages will be re-ordered into their respective messages as sent by the sender. It is expected that threading is or would be involved in developing a performance-driven Tcp client/server mechanism - whether through async or sync methods.
In order to keep a socket from blocking, you can set its Blocking property to false.
I hope this gives you some good information to work with. Heck, I even learned a little bit...
