Verify routing via PduR - AUTOSAR

To verify that a message is received in the COM layer, we can add an IPdu callout for the PDU/signal and wait for the breakpoint to be hit while debugging.
This is not the case for PDU routing.
If a message is routed via the PduR, it never reaches the COM layer.
Hence there is no way to verify that the message was received by the device (i.e. PduR has no callout functionality).
Is there a way to verify that the message is received by the PduR and successfully copied to a Tx PDU to be sent out (i.e. to verify successful gatewaying)?

Keep in mind that PduR routing paths can have multiple destinations; we have ECUs that route messages e.g. locally to Com and at the same time route them for transmission on a different network.
The PduR is triggered by RxIndications and TxConfirmations (and their TP-interface counterparts).
So, for a normal routing relationship, you should hook onto the RxIndication of the RxPdu and could then, for example, wait for a TxConfirmation of the TxPdu, which tells you that the TxPdu was transmitted (a toy model of this check follows the list).
Keep in mind that:
an RxPdu may be queued, which means it will possibly not be transmitted immediately. This can be handy for streaming PDUs like XCP, in order to keep the ordering of the PDUs when they cannot currently be transmitted.
routing paths may be enabled/disabled at runtime, e.g. due to system conditions handled by BswM rules and action lists calling PduR_EnableRouting(<routingPathGroupId>) / PduR_DisableRouting(<routingPathGroupId>)
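Since this answer is about where to hook in a real AUTOSAR stack, code can only be illustrative here. Below is a minimal, platform-independent toy model (Python, not AUTOSAR C) of the correlation idea: each RxIndication of a gatewayed RxPdu is matched against a later TxConfirmation of the mapped TxPdu. The PDU names and routing table are invented for the example.

```python
# Toy model only: correlate RxIndication with TxConfirmation to verify
# gatewaying. On a real ECU you would set breakpoints or trace points in the
# PduR's RxIndication/TxConfirmation functions instead.
routing_table = {"RxPdu_A": "TxPdu_B"}  # one routing path: RxPdu -> TxPdu
awaiting_confirmation = set()           # gatewayed TxPdus not yet confirmed

def on_rx_indication(rx_pdu):
    """Called when the RxPdu arrives (e.g. breakpoint in the RxIndication)."""
    tx_pdu = routing_table.get(rx_pdu)
    if tx_pdu is not None:
        awaiting_confirmation.add(tx_pdu)
        print(f"{rx_pdu} received, gatewayed to {tx_pdu}")

def on_tx_confirmation(tx_pdu):
    """Called when the TxPdu leaves (e.g. breakpoint in the TxConfirmation)."""
    if tx_pdu in awaiting_confirmation:
        awaiting_confirmation.remove(tx_pdu)
        print(f"{tx_pdu} transmitted: gatewaying verified")

on_rx_indication("RxPdu_A")
on_tx_confirmation("TxPdu_B")
```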

Related

How to enforce the order of messages passed to an IoT device over MQTT via a cloud-based system (API design issue)

Suppose I have an IoT device which I'm about to control (let's say switch it on/off) and monitor (e.g. collect temperature readings). It seems MQTT could be the right fit: I could publish messages to the device to control it, and the device could publish messages to a broker to report temperature readings. So far so good.
The problems start to occur when I try to design the API to control the device.
Let's say the device subscribes to two topics:
/device-id/control/on
/device-id/control/off
Then I publish messages to these topics in some order. But given that messaging is typically an asynchronous process, there are no guarantees on the order in which messages are received by the device.
So in case two messages are published in the following order:
/device-id/control/on
/device-id/control/off
they could be received in the reverse order, leaving the device turned on, which can have dramatic consequences depending on the context.
Of course the API could be designed in some other way; for example, there could be just one topic
/device-id/control
and the payload of each message would carry its meaning (on/off). Then, if messages are published to this topic in a given order, they are expected to be received in exactly the same order on the device.
But what if the order of publishes to individual topics cannot be guaranteed? Suppose the following architecture of a system for IoT devices:
                         / control service \
application -> broker ->   control service   -> broker -> IoT device
                         \ control service /
The components of the system are:
an application which effectively controls the device by publishing messages to a broker
a typical message broker
a control service with some business logic
The important part is that, as in most modern distributed systems, the control service is a distributed, multi-instance entity capable of processing multiple control messages from the application at a time. Therefore the order of messages published by the application can end up totally mixed up by the time they are delivered to the IoT device.
Now, given that most MQTT brokers only implement QoS 0 and QoS 1 but not QoS 2, it gets even more interesting, as such control messages could potentially be delivered multiple times (assuming QoS 1 - see https://stackoverflow.com/a/30959058/1776942).
My point is that separate topics for control messages are a bad idea. The same goes for a single topic. In both cases there are no guarantees on message delivery order.
The only solution to this particular issue that comes to my mind is message versioning, so that old (outdated) messages can simply be skipped when they are delivered after a message with a more recent version property.
Am I missing something?
Is message versioning the only solution to this problem?
Am I missing something?
Most definitely. The example you brought up is a generic control system attached to a message-oriented scheme. There are a number of patterns that can be used in a message-based architecture. This article by Microsoft categorizes message patterns into two primary classes:
Commands and
Events
The most generic pattern of command behavior is to issue a command, then measure the state of the system to verify the command was carried out. If you forget to verify, your system has an open loop. Such open loops are (unfortunately) common in IT systems (because it's easy to forget), and often result in bugs and other bad behaviors such as the one described above. So, the proper way to handle a command is:
Issue the command
Inquire as to the state of the system
Evaluate next action
Events, on the other hand, are simply fired off. As the publisher of an event, it is not my business to worry about who receives the event, in what order, and so on. Now, it should also be pointed out that any decent message broker (e.g. RabbitMQ) generally provides strong guarantees that messages will be delivered in the order in which they were originally published. Note that this does not mean they will be processed in order.
So, if you treat a command as an event, your system is guaranteed to act up sooner or later.
Is message versioning the only solution to this problem?
Message versioning typically refers to a property of the message class itself, rather than of a particular instance of the class. It is often used when multiple versions of a message-based API exist and must be backwards-compatible with one another.
What you are instead referring to is unique message identifiers. GUIDs are particularly handy for making sure that each message gets its own unique ID. However, I would argue that de-duplication in message-based architectures is an anti-pattern. One of the consequences of using messaging is that duplicates are possible, so you should try to design your system behaviors to be stateless and idempotent. If that is not possible, consider that messaging may not be the right communication solution for the need.
Using the command-event dichotomy as an example, you could perform the following transaction:
The controller issues the command, assigning a unique identifier to the command.
The control system receives the command and turns on.
The control system publishes the "light on" event notification, containing the unique id of the command that was used to turn on the light.
The controller receives the notification and correlates it to the original command.
If the controller doesn't receive the notification after some timeout, it can retry the command. Note that "light on" is an idempotent command, in that multiple calls to it have the same effect (a sketch of this transaction follows).
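A minimal sketch of this transaction, assuming an MQTT transport with paho-mqtt (1.x-style API); the topic names, payload fields, broker host, and the 5-second timeout are all illustrative assumptions:

```python
import json
import time
import uuid

import paho.mqtt.client as mqtt  # paho-mqtt 1.x-style API

pending = {}  # command id -> time the command was sent

def on_message(client, userdata, msg):
    # The device is assumed to echo the command's id back in its
    # "light on" event notification so we can correlate the two.
    event = json.loads(msg.payload)
    if pending.pop(event.get("command_id"), None) is not None:
        print("command confirmed:", event["command_id"])

client = mqtt.Client()
client.on_message = on_message
client.connect("broker.example.com")  # assumed broker host
client.subscribe("device-id/events")  # assumed event topic
client.loop_start()

def send_command(state):
    command_id = str(uuid.uuid4())  # unique identifier for the command
    pending[command_id] = time.time()
    client.publish("device-id/control",
                   json.dumps({"command_id": command_id, "state": state}),
                   qos=1)
    return command_id

cmd = send_command("on")
time.sleep(5)                # wait for the notification
if cmd in pending:           # no notification within the timeout:
    send_command("on")       # retry; "light on" is idempotent, so this is safe
```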
When state changes, send the new state immediately, and after that periodically, every x seconds. With this solution your system gets into the desired state after some time, even when it temporarily disconnects from the network (e.g. due to a low battery).
BTW: You did not miss anything.
Apart from the comment that most brokers don't support QoS 2 (I suspect you mean that a number of broker-as-a-service offerings, such as Amazon's AWS IoT service, don't support QoS 2), you have covered most of the major points.
If message order really is that important, then you will have to include some form of ordering marker in the message payload, be it a counter or a timestamp; a sketch of this follows.
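For illustration, a minimal subscriber-side sketch (Python with paho-mqtt, 1.x-style API) that uses a monotonically increasing seq field in the payload to skip stale or duplicate deliveries; the topic name and payload layout are assumptions:

```python
import json

import paho.mqtt.client as mqtt  # paho-mqtt 1.x-style API

last_seq = -1  # highest sequence number applied so far

def apply_state(state):
    print("device is now", state)

def on_message(client, userdata, msg):
    global last_seq
    command = json.loads(msg.payload)
    if command["seq"] <= last_seq:
        return  # stale or duplicate delivery (e.g. QoS 1 redelivery): skip
    last_seq = command["seq"]
    apply_state(command["state"])  # e.g. "on" / "off"

client = mqtt.Client()
client.on_message = on_message
client.connect("broker.example.com")          # assumed broker host
client.subscribe("device-id/control", qos=1)  # assumed single control topic
client.loop_forever()
```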

How can we securely handle liveness-check messages in IKEv2 with the INVALID_IKE_SPI notify payload

This is a question that has been on my mind, but I cannot come up with a solution.
Suppose there is an IKE tunnel between two peers (peer_1, peer_2), and an attacker wants to break this tunnel. For every keep-alive INFORMATIONAL request from peer_1 to peer_2, the attacker replies with an INVALID_IKE_SPI notify payload, and obviously this message is in plain text. This leads peer_1 to believe that the IKE_SA has a problem, and after an implementation-specific number of retries peer_1 closes the tunnel (although RFC 7296 specifies that a peer receiving such a reply should not change its state, the keep-alive retries have to end at some point to avoid flooding the network). As a result, the attacker wins.
Does the IKEv2 protocol itself say anything to prevent this type of situation?
If anyone knows about this, please reply; any solution would also be helpful.
Citing RFC 7296, section 2.4, paragraph 3:
Since IKE is designed to operate in spite of DoS attacks from the
network, an endpoint MUST NOT conclude that the other endpoint has
failed based on any routing information (e.g., ICMP messages) or IKE
messages that arrive without cryptographic protection (e.g., Notify
messages complaining about unknown SPIs). An endpoint MUST conclude
that the other endpoint has failed only when repeated attempts to
contact it have gone unanswered for a timeout period or when a
cryptographically protected INITIAL_CONTACT notification is received
on a different IKE SA to the same authenticated identity. An
endpoint should suspect that the other endpoint has failed based on
routing information and initiate a request to see whether the other
endpoint is alive. To check whether the other side is alive, IKE
specifies an empty INFORMATIONAL request that (like all IKE requests)
requires an acknowledgement (note that within the context of an IKE
SA, an "empty" message consists of an IKE header followed by an
Encrypted payload that contains no payloads). If a cryptographically
protected (fresh, i.e., not retransmitted) message has been received
from the other side recently, unprotected Notify messages MAY be
ignored. Implementations MUST limit the rate at which they take
actions based on unprotected messages.
I think that (for the sake of clarity) the relevant types of attacker should be considered:
1/ An attacker able to drop arbitrary packets (i.e. an active MitM)
this one is able to perform a DoS just by dropping packets, and AFAIK there is nothing that can prevent him from doing so. He does not need any sophistication to break the communication.
2/ An attacker unable to drop packets
this one cannot prevent peer_2's legitimate responses (to peer_1's INFORMATIONAL requests) from reaching peer_1,
thus peer_1 receives the response (before all retries time out) and knows that peer_2 is alive.
3/ An attacker able to drop some packets
then it is a race, and the outcome depends on the configuration of the peers and the percentage of packets the attacker is able to drop.
EDIT:
I would understand the questioned "case 2 attacker" scenario this way:
by receiving the attacker's unprotected INVALID_IKE_SPI notify (spoofed by the attacker from peer_2's address), peer_1 can (at most) only suspect that peer_2 has failed (as it MUST NOT conclude that the other endpoint has failed based on IKE messages without cryptographic protection)
it may decide (see the note below) to issue a liveness check by sending an empty INFORMATIONAL request to peer_2 (which is cryptographically protected)
the "case 2 attacker" is unable to tamper with this request, so it should reach peer_2 (possibly after some implementation-specific retransmits, as specified)
peer_2 (as it is alive) responds with an acknowledgement (which is cryptographically protected)
the "case 2 attacker" is unable to tamper with this response, so it should reach peer_1
upon receiving this response (which is a fresh, cryptographically protected message from peer_2), peer_1 knows that peer_2 is alive and keeps the SAs (as if nothing had happened)
Note: The "Implementations MUST limit the rate at which they take actions based on unprotected messages" part means that peer_1 should not perform this liveness check for every unprotected Notify message received; some implementation-specific rate-limiting mechanism must be in place (probably to prevent traffic amplification). A sketch of such rate limiting follows.
Disclaimer: I am no crypto expert, so please do validate my thoughts.

How/where to store temporary data without using the claim-check pattern?

I have a use case that requires our application to send a notification to an external system when a particular event occurs. The notification to the external system happens by putting a message on a JMS queue.
The transactional requirements are not that strict. Hence, instead of using JTA for such a trivial use case, I decided to use a JMS local transaction, as Spring understands how to synchronize a JMS local transaction with any managed transaction (e.g. a database transaction) to achieve 1PC semantics.
The problem I am facing is that the notification has to be enriched with some data before it is sent. This extra information has no relevance to my business domain, which is responsible for generating the event. So I am not sure where to temporarily store that extra data so that it can be reclaimed before sending the notification. The illustration below may help in understanding the problem:
HTTP Request ---> Rest API ---> Application Domain ---> Event Generation ---> Notification
As per the above illustration, I do not want to pass that extra data, which is part of the REST API request payload, through my domain layer and pollute it just to send the notification.
One solution I thought of is to use a thread-scoped queue channel to reclaim the data before sending the notification. This way the REST API initiates the process by putting the extra data into the queue, and before sending the notification I pull it from the queue to enrich the message.
The part I am unable to figure out with this solution is how to pull the message from the queue when I receive the event somewhere in the application (between the event-generation and notification phases).
If my approach does not make sense, then please suggest any solution that does not use the claim-check pattern.
Why not simply store the information in a message header (or headers)? The domain layer doesn't need to know it's there.
Or, for your solution, create a new QueueChannel for each request, store a reference to it in a header, and receive() from it on the back end; but it's easier to just use a header directly (a framework-agnostic sketch follows).
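The answer refers to Spring Integration message headers; as a framework-agnostic illustration (all names and the structure are invented), the idea looks like this:

```python
# Hedged sketch: enrichment data rides along in message headers, the domain
# layer only ever sees the payload, and the notifier reads the headers back
# when building the outgoing notification.
from dataclasses import dataclass, field

@dataclass
class Message:
    payload: dict                                 # domain data
    headers: dict = field(default_factory=dict)   # out-of-band enrichment data

def handle_request(request):
    msg = Message(payload={"order_id": request["order_id"]},
                  headers={"notify_extra": request["extra"]})
    domain_event = process_domain(msg.payload)    # domain never sees headers
    notify(domain_event, msg.headers["notify_extra"])

def process_domain(payload):
    return {"event": "ORDER_PLACED", **payload}

def notify(event, extra):
    print("sending JMS notification:", {**event, **extra})

handle_request({"order_id": 42, "extra": {"channel": "email"}})
```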

Distributed pub/sub with single consumer per message type

I have no clue whether it's better to ask this here or over on Programmers.SE, so if I have this wrong, please migrate.
First, a bit about what I'm trying to implement. I have a node.js application that takes messages from one source (a socket.io client) and then does processing on each message, which might result in zero or more messages back out, either to the sender or to other clients within that group.
For the processing, I would like to essentially just shove the message into a queue; it then works its way through various message processors, which might kick off their own items, and eventually the bit running socket.io is informed: "Hey, send this message back."
As a concrete example, say a user signs into the service. That sign-in message is placed in the queue, where the authorization processor gets it, does its thing, then places a message back in the queue saying the client has been authorized. This goes back to the socket.io socket that is connected to the client, along with other clients that might be interested. It can also go to other subsystems that might want to do more processing on authorization (looking up user info, sending more info to the client based on their data, etc.).
If I wanted strong coupling, this would be easy, but I tried that before and it just led to a mess of fragile spaghetti code, which I would like to avoid. Another wrench in the setup is that this should be cluster-able, which is where the real problem comes in. There might be more than one, say, authorization processor running, but each authorization message should be processed only once.
So, in short, I'm looking for a pattern/technique that will allow me to have multiple "groups" of subscribers for a message, with the message processed only once per group.
I thought about maybe having each instance of a processor generate a unique name to be used as a list in Redis. This name would then be registered with some sort of dispatch handler and placed into a set for that group of subscribers. When a message arrives, the dispatcher pulls a random member out of that set and pushes the message onto that list. While it seems like this would work, it also seems over-complicated and fragile.
The core problem is that I've never designed a system like this, so I'm not even sure of the proper terms to use or look up. If anyone can point me in the right direction, I would be most appreciative.
I think what you're describing is similar to the https://www.getbridge.com/ service. I tried it but ended up writing my own based on ZeroMQ; it lets you register services (req -> <- rep) and channels, which are pub/sub workers.
As for the design, I used client -> broker -> services & channels, all plug-and-play using auto-discovery: the services register their schema with the brokers, which open a TCP connection so that brokers on other servers can communicate with that broker group's services. Internal services and clients then connect via Unix sockets or IPC channels, whichever is preferred.
I ended up wrapping the Redis publish/subscribe functions a bit to do this. Each type of message processor gets a "group name", and there can be multiple instances of the processor within that group (so multiple instances of the program can run for clustering).
When publishing a message, I generate an incremental ID, store the message in a string key under that ID, then publish the message ID.
On the receiving end, the first thing the subscriber does is attempt to add the message ID it just got from the publisher into a set of received messages for that group, using SADD. If SADD returns 0, the message has already been grabbed by another instance, and the subscriber just returns. If it returns 1, the full message is pulled out of the string key and handed to the listener (a sketch follows).
Of course, this relies on Redis being single-threaded, which I imagine will continue to be the case.
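A minimal sketch of this scheme using redis-py (the original was node.js, but the Redis commands are the same); the key names, channel, and group name are illustrative assumptions:

```python
import json

import redis

r = redis.Redis()

def publish(channel, message):
    msg_id = r.incr("msg:next_id")            # incremental message ID
    r.set(f"msg:{msg_id}", json.dumps(message))
    r.publish(channel, msg_id)                # subscribers only get the ID

def run_worker(channel, group):
    pubsub = r.pubsub()
    pubsub.subscribe(channel)
    for item in pubsub.listen():
        if item["type"] != "message":
            continue                          # skip subscribe confirmations
        msg_id = int(item["data"])
        # SADD returns 1 only for the first instance in this group to claim
        # the ID; every other instance gets 0 and skips the message.
        if r.sadd(f"received:{group}", msg_id) == 1:
            message = json.loads(r.get(f"msg:{msg_id}"))
            handle(message)

def handle(message):
    print("processing", message)
```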
What you might be looking for is an AMQP protocol implementation, where you can have queues, custom exchanges, and a pub/sub model.
RabbitMQ is a popular AMQP implementation with lots of libraries;
it also has a node.js library.

How to avoid flooding a message queue?

I'm working on an application that is divided into a thin client and a server part, communicating over TCP. We frequently let the server make asynchronous calls (notifications) to the client to report state changes. This prevents the server from losing too much time waiting for an acknowledgement from the client. More importantly, it avoids deadlocks.
Such a deadlock can happen as follows. Suppose the server sent the state-changed notification synchronously (please note that this is a somewhat contrived example). While handling the notification, the client needs to synchronously ask the server for information. However, the server cannot respond, because it is still waiting for the client to answer its notification.
Now, this deadlock is avoided by sending the notification asynchronously, but that introduces another problem. When asynchronous calls are made faster than they can be processed, the call queue keeps growing. If this situation lasts long enough, the call queue becomes completely full (flooded with messages). My question is: what can be done when that happens?
My problem can be summarized as follows: do I really have to choose between sending notifications without blocking, at the risk of flooding the message queue, and blocking when sending notifications, at the risk of introducing a deadlock? Is there some trick to avoid flooding the message queue?
Note: To repeat, the server does not stall when sending notifications. They are sent asynchronously.
Note: In my example I used two communicating processes, but the same problem exists with two communicating threads.
If the server is sending informational messages to the client, which you yourself say are asynchronous, it should not have to wait for a reply from the client. If they are not informational, in other words if they require an answer, I would say a server should never send such messages to a client, and their presence indicates a poor design.
If you have a constant congestion problem, there is little you can do other than gracefully fail and notify the client that no new messages can be posted; then it is up to the client to maintain a backlog of messages to be posted.
Introducing a priority queue and using message expiration/filtering could allow you to free up space in the queue, but that really just postpones the problem. If possible, you could also aggregate messages or ignore duplicate messages, but again the problem does not seem to be the queue itself. (Not to mention that the more complex queue logic could eat up valuable resources that would be better used actually processing messages.)
Depending on what the server side does, you could introduce result hashing for long computations, offload some types of messages to a dedicated device, check if the server waits unreasonably long for I/O operations, and a myriad of other techniques. Profile if possible, at least try to find out which message(s) causes congestion.
Oh, and the business solution: Compare cost of estimated development time to the cost of better hardware and conclude that you should just buy a more powerful server (or an additional one).
Depending on how important these messages are you might want to look into Message Expiration, or perhaps a Message Filter, though it sounds like your architecture may be incorrect.
I would rather fix the logic on the server side. The message queue should not stall waiting for the answer. Rather, have a state machine that can also receive those info queries while it is waiting for the answer from the client.
Of course you can still flood your message queue, but with TCP you can handle it pretty easily.
The best way, I believe, would be to add another state to your client. I borrowed this from the SMPP protocol specs.
Add a congestion state to the client, whereby it always checks the queue length (assuming this is possible); once a certain threshold is reached, say 1000 unprocessed messages, the client sends the server a message indicating that it is congested, and the server is required to cease all messaging until it receives a notification that the client is no longer congested.
Alternatively, on the server side, if a certain number of replies are pending, the server could simply stop sending messages until the client has replied to a certain number of them.
These thresholds can be dynamically calculated or fixed, depending on the situation; a sketch of the client-side variant follows.
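A minimal sketch of the client-side variant; the thresholds, control-message names, and the transport callback are illustrative assumptions:

```python
# Hedged sketch: high/low watermarks on the inbound queue trigger
# CONGESTED / RESUMED control messages back to the server.
import queue

HIGH_WATERMARK = 1000   # enter congestion state at this queue length
LOW_WATERMARK = 200     # leave congestion state below this queue length

class Client:
    def __init__(self, send_to_server):
        self.inbox = queue.Queue()
        self.send_to_server = send_to_server  # transport callback (assumed)
        self.congested = False

    def on_notification(self, msg):
        self.inbox.put(msg)
        if not self.congested and self.inbox.qsize() >= HIGH_WATERMARK:
            self.congested = True
            self.send_to_server({"type": "CONGESTED"})   # server must pause

    def process_one(self):
        msg = self.inbox.get()
        handle(msg)
        if self.congested and self.inbox.qsize() <= LOW_WATERMARK:
            self.congested = False
            self.send_to_server({"type": "RESUMED"})     # server may continue

def handle(msg):
    pass  # application-specific processing
```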
