I'm working on a project involving a Linux embedded device with CAN bus support.
I've noticed that if I try to send a CAN frame without anything attached to the CAN bus, the transmission is automatically reattempted by the kernel an unlimited number of times. I can verify this with a scope - the same message is transmitted over and over. This retransmission persists even if I shut down the process that created the message, and even if that process only ever attempted to transmit one single message.
My question is - is this normal behaviour for the Linux CAN stack? My worry is that if something ever goes wrong in the device and it erroneously concludes that it is alone on the bus, it might swamp the bus and make it unusable for the other bus participants. I would have expected there to be some sort of retry limit.
The device is running Linux 4.14.48, and the CAN controller is a Philips SJA1000.
What you are seeing are most likely error frames. Compliant behaviour is this:
The node is error active. It attempts to send a data frame but sees no dominant ACK bit, since nobody else is on the bus to acknowledge it.
It sends out an error frame, which essentially consists of six dominant bits that purposely break the bit-stuffing rule.
The controller then re-attempts to send the message. If the new attempt again gets no ACK, another error frame is sent. This keeps repeating automatically.
Once its transmit error counter exceeds 127 (it increases by 8 per failed transmission), the node goes error passive: it still signals errors, but with recessive error flags that do not disrupt other traffic.
If the counter goes on to exceed 255, the node goes bus off and stops transmitting entirely.
This should all be handled by the CAN controller hardware, not by the OS. You might need to reset or power cycle the SJA1000 once it goes bus off. If it never goes bus off, then something in the driver code might be continuously resetting the CAN controller after a certain number of errors.
Mind that microcontroller implementations might act the same and reset upon errors too, since that's typically the only way to re-establish communication after a bus off. This depends on the nature of the CAN application.
Short answer is yes - if a missing ACK is the only transmit error, the transmit error counter will stop at 128 and the node will not go bus off. It will retransmit forever. This happened to me as well, and I just turned off the re-transmit function from the processor side. I'm not sure whether that is a standard CAN feature or not.
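To make the counting rules above concrete, here is a small TypeScript sketch of how a transmit error counter (TEC) behaves under the classic CAN rules (+8 per failed transmission, error passive above 127, bus off above 255), including the ACK-error exception that keeps an error-passive node from ever reaching bus off when it is alone on the bus. This is only an illustration of the rules; the real counting happens inside the CAN controller.

```typescript
// Illustration of the CAN transmit error counter (TEC) rules - not driver code.
type NodeState = "error-active" | "error-passive" | "bus-off";

class TxErrorCounter {
  tec = 0;

  get state(): NodeState {
    if (this.tec > 255) return "bus-off";
    if (this.tec > 127) return "error-passive";
    return "error-active";
  }

  // `ackErrorOnly` models the spec exception: an error-passive transmitter
  // that only sees a missing ACK (and no dominant bit during its passive
  // error flag) does not increment its counter any further.
  onTransmitError(ackErrorOnly: boolean): void {
    if (this.state === "bus-off") return;
    const exceptionApplies = ackErrorOnly && this.state === "error-passive";
    if (!exceptionApplies) this.tec += 8;
  }

  onTransmitSuccess(): void {
    this.tec = Math.max(0, this.tec - 1);
  }
}

// A lone node on the bus: every transmission attempt fails with a missing ACK.
const counter = new TxErrorCounter();
for (let attempt = 0; attempt < 100; attempt++) {
  counter.onTransmitError(true);
}
console.log(counter.tec, counter.state); // 128 "error-passive" - bus off is never reached
```

For what it's worth, on Linux/SocketCAN an automatic restart after bus off is controlled by the `restart-ms` link option (e.g. `ip link set can0 type can restart-ms 100`); with ACK-only errors, however, the node never gets that far, which matches the endless retransmission seen on the scope.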
I'm working on an application where I need to ensure that even if the network goes down, messages will still arrive at their destination reliably, in-order, and unmodified. I've been using TCP, and up until now, I was just using a strategy of:
If a send/receive fails, do it again until no error.
If the remote disconnects, wait until the next connection and replace the socket I was send/receiving from with this new one (achieved through some threading and blocking to ensure it's swapped cleanly).
I recently realised that this doesn't work, as send can't report errors indicating that the remote hasn't received the message (see e.g. here).
I did also learn that TCP connections can survive brief network outages, as the kernel buffers the packets until the connection is declared dead after the timeout period (see here).
The question: Is it a feasible strategy to just crank the timeout period way higher on both the client and server side (using setsockopt and the SO_KEEPALIVE option), so that a connection "never times out"? I'd have to handle errors related to the kernel's send buffer filling up, but that should be relatively simple.
Are there any other failure cases?
If neither end explicitly disconnects, the TCP connection will stay open forever, even if you unplug the cable. An idle TCP connection has no timeout of its own.
However, I would use (or design) an application protocol on top of TCP that makes it possible to resume data transmission after re-connects. You could use HTTP, for example.
That would be much more robust, because relying on kernel buffers will, as you say, eventually exhaust them, and anything still sitting in a buffer is also lost on, say, a power outage.
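To make that concrete, here is a minimal TypeScript sketch of the sender side of such a protocol: every message gets a sequence number, is kept in a buffer until the peer explicitly acknowledges it at the application level, and everything unacknowledged is replayed after a reconnect. The line-based framing and the `MSG`/`ACK` message names are invented for the example.

```typescript
import * as net from "net";

interface Pending { seq: number; payload: string; }

// Sketch of a "reliable sender" on top of TCP. Messages stay in `pending`
// (ideally persisted to disk) until the peer acknowledges them, and are
// replayed after every reconnect.
class ReliableSender {
  private seq = 0;
  private pending: Pending[] = [];
  private socket: net.Socket | null = null;

  constructor(private host: string, private port: number) {
    this.connect();
  }

  send(payload: string): void {
    const msg = { seq: ++this.seq, payload };
    this.pending.push(msg);
    this.write(msg);
  }

  private write(msg: Pending): void {
    this.socket?.write(`MSG ${msg.seq} ${msg.payload}\n`);
  }

  private connect(): void {
    const socket = net.connect(this.port, this.host, () => {
      // Replay everything the peer has not acknowledged yet.
      for (const msg of this.pending) this.write(msg);
    });
    socket.on("data", (chunk) => {
      // Naive line splitting - a real protocol needs proper framing.
      for (const line of chunk.toString().split("\n")) {
        const m = /^ACK (\d+)$/.exec(line.trim());
        if (m) this.pending = this.pending.filter((p) => p.seq > Number(m[1]));
      }
    });
    socket.on("error", () => { /* 'close' follows; reconnect there */ });
    socket.on("close", () => setTimeout(() => this.connect(), 1000));
    this.socket = socket;
  }
}
```

The receiver does the mirror image: process a message, write back `ACK <seq>`, and deduplicate by sequence number, since a replay can deliver the same message twice.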
So I have this API endpoint called www.example.com/endpoint to which many devices post (I work at an IoT firm). We have implemented our whole backend in NodeJS and are stuck while scaling from 1 device to 'n' devices. The devices post their packets to this API endpoint, where I execute a complex bit of code (around 1,000 lines) and save the state of the device in the database (MongoDB). Now the issue is: whenever I receive a packet from device 1 and am in the middle of processing it and a packet arrives from device 2, NodeJS leaves the device 1 execution as it is and starts serving the packet from device 2. I saw this when I added extensive console.log() statements.
Now, in an ideal world, I would want Node to save the context of my current progress with packet 1, put packet 2 in a queue to be processed later, and only take up packet 2 once I am done with packet 1.
I know of libraries like RabbitMQ and Kue for storing messages in a queue and processing them later, but how do I context-switch from one execution to another?
This is my way of thinking; there could be other solutions as well. I would like to hear your thoughts on the matter.
Q: How to implement concurrency or context-switching in NodeJS.
A: Short answer: not possible, because JavaScript is single-threaded.
Q: Now the issue is: whenever I receive a packet from device 1 and am in the middle of processing it and a packet arrives from device 2, NodeJS leaves the device 1 execution as it is and starts serving the packet from device 2. I saw this when I added extensive console.log() statements.
A: As you might have already read in numerous places, NodeJS is based on an event-driven model with non-blocking I/O.
The reason Node seems to have ditched device 1 midway to serve device 2 is that the code for device 1 had already been processed up to the point where it was just waiting on an asynchronous function to call back, e.g. a database write. So, while it was free in the meantime, Node went on to service device 2.
It is the same for device 2: once it hits an async call, an event gets pushed onto the event queue pending the result, and Node might go back to device 1 if its response has come back, or on to some other device N.
We say NodeJS is non-blocking because the Node process does not lock the entire web application down waiting for a single response. Instead it moves on, picks the next event (essentially a block of code) from the queue and runs it. Hence it is constantly busy unless there is really nothing left on the event queue.
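As a concrete illustration of that interleaving (the route, the `saveDeviceState` helper and the Express usage are made up for the example): the handler below runs uninterrupted up to the first `await`; while the database write for device 1 is in flight, Node is free to start the handler for device 2.

```typescript
import express from "express";

const app = express();
app.use(express.json());

// Hypothetical placeholder for the ~1,000 lines of processing plus the MongoDB write.
async function saveDeviceState(deviceId: string, packet: unknown): Promise<void> {
  // e.g. await collection.updateOne({ _id: deviceId }, { $set: { packet } });
}

app.post("/endpoint", async (req, res) => {
  const { deviceId } = req.body;
  console.log(`start processing packet from ${deviceId}`);

  // Everything above ran without interruption. The await below hands control
  // back to the event loop, so a packet from another device can start being
  // handled while this asynchronous write is in flight.
  await saveDeviceState(deviceId, req.body);

  console.log(`finished packet from ${deviceId}`);
  res.sendStatus(200);
});

app.listen(8080);
```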
Q: I know libraries like RabbitMQ and Kue for storing messages in a queue and processing them later, but how do I context-switch from one execution to another?
A:
As said earlier, as of 2016 it is still not possible for JavaScript to do threading. NodeJS is not designed for heavy computational work; it should be focused on serving requests, so the code should preferably be light and non-blocking. Basically, you will want to hand heavy I/O duties such as writing to files or databases, or making HTTP requests over the network, off to other processes by wrapping those calls in async functions.
NodeJS is not a silver-bullet technology. If your application is expected to do a lot of computational work on the event thread, then Node is probably not a good choice of technology, but it is not the end of the world, as you can fork your own child process for the heavy computational jobs.
See:
https://nodejs.org/api/child_process.html
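A rough sketch of that approach (file names and message shapes are invented for the example): the parent keeps its event loop free and forks a worker process that does the heavy per-packet computation.

```typescript
// parent.ts - keep the event loop free by forking a worker process.
import { fork } from "child_process";

const worker = fork("./heavy-worker.js");

worker.on("message", (result) => {
  console.log("device state computed:", result);
  // persist the result to MongoDB asynchronously here
});

export function processPacket(packet: unknown): void {
  worker.send(packet); // returns immediately; the child does the heavy lifting
}
```

```typescript
// heavy-worker.ts (compiled to heavy-worker.js) - the CPU-bound part,
// running without blocking the main server's event loop.
function computeDeviceState(packet: unknown): unknown {
  /* ... the ~1,000 lines of computation ... */
  return packet;
}

process.on("message", (packet) => {
  process.send!(computeDeviceState(packet));
});
```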
You might also want to consider alternatives like Java, which has NIO and threading capabilities.
I'm working on a server architecture for sending/receiving messages from remote embedded devices, which will be hosted on Windows Azure. The front-facing servers are going to be maintaining persistent TCP connections with these devices, and I need a way to communicate with them on the backend.
Problem facts:
Devices: ~10,000
Frequency of messages device is sending up to servers: 1/min
Frequency of messages originating server side (e.g. from user actions, scheduled triggers, etc.): 100/day
Average size of message payload: 64 bytes
Upward communication
The devices send up messages very frequently (sensor readings). The constraints for that data are not very strong, due to the fact that we can aggregate/insert those sensor readings in a batched manner, and that they don't require in-order guarantees. I think the best way of handling them is to put them in a Storage Queue, and have a worker process poll the queue at intervals and dump that data. Of course, I'll have to be careful about making sure the worker process does this frequently enough so that the queue doesn't infinitely back up. The max batch size of Azure Storage Queues is 32, but I'm thinking of potentially pulling in more than that: something like publishing to the data store every 1,000 readings or 30 seconds, whichever comes first.
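For what it's worth, the drain-and-batch logic could look roughly like the sketch below; `receiveBatch`, `deleteBatch` and `persistReadings` are placeholders for the actual Storage Queue SDK calls and the data-store write, not real API names.

```typescript
// Placeholders standing in for the real queue/storage calls.
declare function receiveBatch(max: number): Promise<{ id: string; body: string }[]>;
declare function deleteBatch(messages: { id: string }[]): Promise<void>;
declare function persistReadings(readings: string[]): Promise<void>;

const FLUSH_COUNT = 1000;         // publish every 1,000 readings...
const FLUSH_INTERVAL_MS = 30_000; // ...or every 30 seconds, whichever comes first

async function drainQueueForever(): Promise<void> {
  let buffer: { id: string; body: string }[] = [];
  let lastFlush = Date.now();

  while (true) {
    const batch = await receiveBatch(32); // 32 is the per-call maximum
    buffer.push(...batch);

    const flushDue =
      buffer.length >= FLUSH_COUNT || Date.now() - lastFlush >= FLUSH_INTERVAL_MS;

    if (flushDue && buffer.length > 0) {
      await persistReadings(buffer.map((m) => m.body));
      await deleteBatch(buffer); // delete only after the write succeeded
      buffer = [];
      lastFlush = Date.now();
    }

    if (batch.length === 0) {
      await new Promise((r) => setTimeout(r, 1000)); // queue empty, back off briefly
    }
  }
}
```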
Downward communication
The server sends down updates and notifications much less frequently. This is a slightly harder problem, as I can see two viable paradigms here (with some blending in between). I could either:
Create a Service Bus queue for each device (or one queue with thousands of subscriptions - the limit on the number of queues is 10,000)
Have a state table housed in a DB that contains the latest "state" of each message type to be sent to the devices
With option 1, the application server simply enqueues a message in a fire-and-forget manner. On the front-end servers, however, there are quite a few things that have to happen. Concerns I can see include:
Monitoring 10k queues (or many subscriptions off of a queue - the Azure SDK apparently reuses connections for subscriptions to the same queue)
Connection Management
Should no longer monitor a queue if device disconnects.
Need to expire messages if device is disconnected for an extended period of time (so that queue isn't backed up)
Need to enable some type of "refresh" mechanism to update device's complete state when it goes back online
The good news is that Service Bus queues are durable and, with sessions, can deliver messages in FIFO order.
With option 2, the DB would house a table that maintains the latest state for all of the devices. This table would be checked periodically by the front-facing servers (every few seconds or so) for state changes written to it by the application server, and the front-facing servers would then dispatch to the devices. This removes the requirement for FIFO queueing, the reasoning being that each message contains the latest state and doesn't have to compete with other messages destined for the same device. The message is ephemeral: if it fails, it will be resent when the device reconnects and requests a refresh, or at the next check interval of the front-facing server.
In this scenario, the need for queues seems to be removed, but the DB becomes the bottleneck here, and I fear it's not as scalable.
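A rough sketch of the front-facing server's side of option 2 (the `loadChangedStates` query and the `connections` map are assumptions for the example):

```typescript
import { Socket } from "net";

// deviceId -> open TCP connection held by this front-facing server
const connections = new Map<string, Socket>();

// Placeholder for "read rows whose state changed since the last check".
declare function loadChangedStates(
  since: Date
): Promise<{ deviceId: string; state: Buffer }[]>;

async function dispatchLoop(): Promise<void> {
  let lastCheck = new Date(0);
  while (true) {
    const changes = await loadChangedStates(lastCheck);
    lastCheck = new Date();

    for (const { deviceId, state } of changes) {
      // If the device is not connected, do nothing: it will ask for a full
      // refresh of its state when it reconnects.
      connections.get(deviceId)?.write(state);
    }

    await new Promise((r) => setTimeout(r, 5000)); // "every few seconds"
  }
}
```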
These are both viable approaches, and I feel this question is already becoming too large (although I can provide more detail if necessary). I just wanted to get a feel for what's possible, what's usually done, whether there's something fundamental I'm missing, and what things in the cloud I can take advantage of so as not to reinvent the wheel.
If you can identify the device (maybe by device id, IMEI, or MAC address) from the message it sends, then you can reduce the number of queues from 10,000 to 1 and you don't need 10,000 subscriptions either. This also helps with the downward communication, as you will be able to identify the device and send the message to the appropriate socket.
Since, as you mentioned, the connections are long-lived, you can deliver a command to a device that is connected and decide what to do with commands for devices that are not connected.
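Sketched out, with `receiveCommand` standing in for the single-queue consumer and `sockets` for the per-server connection map (both invented names):

```typescript
import { Socket } from "net";

// Placeholders: one shared queue of commands, each tagged with its device id,
// and the map of sockets currently connected to this front-facing server.
declare function receiveCommand(): Promise<{ deviceId: string; body: Buffer }>;
declare const sockets: Map<string, Socket>;

async function routeCommands(): Promise<void> {
  while (true) {
    const { deviceId, body } = await receiveCommand();
    const socket = sockets.get(deviceId);
    if (socket) {
      socket.write(body); // device is connected to this server
    } else {
      // device offline: park or expire the command according to your policy
    }
  }
}
```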
Hope it helps
I'm surely missing something about how the whole MQTT protocol works, as I can't grasp the usage pattern of Last Will and Testament messages: what's their purpose?
One example I often see is about informing that a device has gone offline. It doesn't make very much sense to me, since it's obvious that if a device isn't publishing any data it may be offline or there could be some network problems.
So, what are some practical usages of the LWT? What was it invented for?
LWT messages are not really concerned about detecting whether a client has gone offline or not (that task is handled by keepAlive messages).
LWT messages are about what happens after the client has gone offline.
The analogy is that of a real last will:
A person can formulate a testament in which she declares what actions should be taken after she has passed away. When she dies, an executor will heed those wishes and carry them out on her behalf.
The analogy in the MQTT world is that a client can formulate a testament, in which it declares what message should be sent on its behalf by the broker after it has gone offline.
A fictitious example:
I have a sensor, which sends crucial data, but very infrequently.
It has formulated a last will in the form of [topic: '/node/gone-offline', message: ':id'], with :id being a unique id for the sensor. I also have an emergency subscriber for the topic '/node/gone-offline', which will send an SMS to my phone every time a message is published on that topic.
During normal operation, the sensor will keep the connection to the MQTT-broker open by sending periodic keepAlive messages interspersed with the actual sensor readings. If the sensor goes offline, the connection to the broker will time out, due to the lack of keepAlives.
This is where LWT comes in: if no LWT is specified, the broker doesn't care and just closes the connection. In our case, however, the broker will execute the sensor's last will and publish the LWT message ':id' on '/node/gone-offline'. The message will then be consumed by my emergency subscriber, and I will be notified of the sensor's ID via SMS so that I can check up on what's going on.
In short:
Instead of just closing the connection after a client has gone offline, LWT messages can be leveraged to define a message to be published by the broker on behalf of the client, since the client is offline and cannot publish anymore.
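To make the fictitious example concrete, this is roughly what it looks like with the MQTT.js client (broker URL, topic and client id are taken from, or invented for, the example above):

```typescript
import * as mqtt from "mqtt";

const SENSOR_ID = "sensor-42"; // the :id from the example

// The sensor registers its "testament" when it connects. If it later vanishes
// without a DISCONNECT (crash, dead battery, broken link), the broker publishes
// this message on its behalf once the keepalive expires.
const sensor = mqtt.connect("mqtt://broker.example.com", {
  clientId: SENSOR_ID,
  keepalive: 30, // seconds
  will: {
    topic: "/node/gone-offline",
    payload: SENSOR_ID,
    qos: 1,
    retain: false,
  },
});

// The emergency subscriber just listens for testaments and raises the alarm.
const watcher = mqtt.connect("mqtt://broker.example.com");
watcher.subscribe("/node/gone-offline");
watcher.on("message", (_topic, payload) => {
  sendSms(`sensor ${payload.toString()} went offline ungracefully`);
});

declare function sendSms(text: string): void; // hypothetical SMS gateway
```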
Just because a device is not publishing does not mean it is offline or that there is a network problem.
Take, for example, a sensor that monitors a value that changes only very infrequently. Good design says the sensor should only publish changes, to help reduce bandwidth usage, since periodically publishing the same value is wasteful. If the value is published as a retained message, then any new subscriber will always get the current value without having to wait for the sensor value to change and be published again.
In this case the LWT is used to publish a message when the sensor fails (or there is a network problem), so we know of the problem as soon as the client keepalive times out.
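In code, that pattern is just a retained publish for the value plus a status topic backed by a will, sketched here with MQTT.js (topic names invented):

```typescript
import * as mqtt from "mqtt";

const client = mqtt.connect("mqtt://broker.example.com", {
  // Published by the broker if this client vanishes without a DISCONNECT.
  will: { topic: "sensors/boiler/status", payload: "offline", qos: 1, retain: true },
});

client.on("connect", () => {
  // Mark ourselves online (retained, so late subscribers see it immediately)...
  client.publish("sensors/boiler/status", "online", { retain: true });
  // ...and publish the value only when it actually changes, also retained.
  client.publish("sensors/boiler/temperature", "42.5", { retain: true });
});
```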
An in-depth article about Last Will and Testament messages is available in the MQTT Essentials blog post series: http://www.hivemq.com/mqtt-essentials-part-9-last-will-and-testament/.
To summarize the blog post:
The Last Will and Testament feature is used in MQTT to notify other clients about an ungracefully disconnected client.
MQTT is often used in scenarios where unreliable networks are common, so it is assumed that some clients will disconnect ungracefully from time to time, because they lost the connection, the battery is empty, or any other imaginable case. It is good to know whether a connected client has disconnected gracefully (meaning with an MQTT DISCONNECT message) or not, in order to take appropriate action.
I'm trying to get an HTTP server I'm writing to behave well under heavy load, but I'm seeing some weird behavior that I cannot quite understand.
My testing consists of running ab (the Apache benchmark program) over the loopback interface at a concurrency level of 1000 (ab -n 50000 -c 1000 http://localhost:8080/apa) while stracing the server process. strace slows processing down enough for the problem to be readily reproducible, and it also lets me inspect the server internals after the run to some extent. I also capture the network traffic with tcpdump while the test is running.
What happens is that ab stops running a while into the test, complaining that a connection returned ECONNRESET, which I find a bit weird. I could easily buy into a connection timing out since the server might simply not have the bandwidth to process them all, but shouldn't that reasonably return ETIMEDOUT or even ECONNREFUSED if not all connections can be accepted?
I used Wireshark to extract the packets constituting the first connection to return ECONNRESET, and its brief packet list looks like this:
(The entire tcpdump file of this connection is available here.)
As you can see from this dump, the connection is accepted (after a few SYN retransmissions), then the request is retransmitted a few times, and then the server resets the connection. I'm wondering what could cause this to happen. Normally, Linux's TCP implementation ACKs data before the reading process even chooses to receive it, as long as there is space in the TCP window, so why doesn't it do that here? Are there some kind of shared buffers that are running out? Most importantly, why does the kernel suddenly respond with an RST packet instead of simply waiting and letting the client retransmit further?
For the record, the strace of the process indicates that it never even accepts a connection from the port in this connection (port 56946), so this seems to be something Linux does on its own. It is also worth noting that the server works perfectly well as long as ab's concurrency level is low enough (it works perfectly well up to about 100, and then starts failing intermittently somewhere between 100-500), and that its request throughput is rather constant regardless of the concurrency level (it processes somewhere between 6000-7000 requests per second as long as it isn't being straced). I have not found any particular correlation between the frequency of the problem occurring and my backlog setting to listen() (I'm currently using 128, but I've tried up to 1024 without it seeming to make a difference).
In case it matters, I'm running Linux 3.2.0 on this AMD64 box.
The backlog queue filled up: hence the SYN retransmissions.
Then a slot became available: hence the SYN/ACK.
Then the GET was sent, followed by four retransmissions, which I can't account for.
Then the server gave up and reset the connection.
I suspect you have a concurrency or throughput problem in your server which is preventing you from accepting connections rapidly enough. You should have a thread dedicated to doing nothing but calling accept() and either starting another thread to handle the accepted socket or queueing a job for a thread pool to handle it. I would then speculate that Linux resets connections which are sitting in the backlog queue and receiving I/O retries, but that's only a guess.