Is there a way to get SCTP-level statistics in Java? I am using the com.sun.nio.sctp package, but I haven't found any information about SCTP statistics in Java. I would be interested in association-level statistics similar to those that can be fetched in C with the SCTP_GET_ASSOC_STATS socket option described below.
https://access.redhat.com/solutions/113013
SCTP_GET_ASSOC_STATS
Applications can retrieve current statistics about an association, including
SACKs sent and received, SCTP packets sent and received. The complete list can
be found in /usr/include/netinet/sctp.h in struct sctp_assoc_stats.
I am trying to understand the data collection of SOME/IP in Autosar Adaptive (udpCollection, as described with SomeipCollectionProps). See Specification of Manifest, Autosar, [TPS_MAN_03158]. This function buffers SOME/IP messages in order to transport more than one message in a UDP datagram.
Am I reading the specification correctly that there exists one such buffer per service instance, as opposed to one per socket? Meaning that I cannot use it to bundle messages from several SOME/IP services together in one UDP datagram?
I am writing a block device driver and having been copying parts of another one. They use the function blk_mq_rq_to_pdu, but I dont really understand what it does. It seems to just return a void pointer. What exactly is a PDU in Linux?
PDU is not Linux-specific; it stands for Protocol Data Unit. From Wikipedia:
In telecommunications, a protocol data unit (PDU) is a single unit of information transmitted among peer entities of a computer network. A PDU is composed of protocol-specific control information and user data. In the layered architectures of communication protocol stacks, each layer implements protocols tailored to the specific type or mode of data exchange.
So in device drivers, this is a generic term for whatever units of data are managed by the specific device or protocol. In the blk-mq context specifically, blk_mq_rq_to_pdu returns a pointer to the driver-private data area allocated immediately after each request structure (its size is the cmd_size the driver registered), which the driver can use for its own per-request bookkeeping.
Suppose I have an IoT device which I'm about to control (let's say switch on/off) and monitor (e.g. collect temperature readings). It seems MQTT could be the right fit. I could publish messages to the device to control it, and the device could publish messages to a broker to report temperature readings. So far so good.
The problems start to occur when I try to design the API to control the device.
Let's say the device subscribes to two topics:
/device-id/control/on
/device-id/control/off
Then I publish messages to these topics in some order. But given that messaging is typically an asynchronous process, there are no guarantees on the order of messages received by the device.
So in case two messages are published in the following order:
/device-id/control/on
/device-id/control/off
they could be received in the reverse order, leaving the device turned on, which can have dramatic consequences, depending on the context.
Of course the API could be designed in some other way, for example there could be just one topic
/device-id/control
and the payload of individual messages would carry the meaning of an individual message (on/off). So if messages are published to this topic in a given order, they are expected to be received in the exact same order on the device.
But what if the order of publishes to individual topics cannot be guaranteed? Suppose the following architecture of a system for IoT devices:
                         / control service \
application -> broker ->   control service   -> broker -> IoT device
                         \ control service /
The components of the system are:
an application which effectively controls the device by publishing messages to a broker
a typical message broker
a control service with some business logic
The important part is that, as in most modern distributed systems, the control service is a distributed, multi-instance entity capable of processing multiple control messages from the application at a time. Therefore the order of messages published by the application can end up completely mixed up by the time they are delivered to the IoT device.
Now, given that most MQTT brokers only implement QoS 0 and QoS 1 but not QoS 2, it gets even more interesting, as such control messages could potentially be delivered multiple times (assuming QoS 1 - see https://stackoverflow.com/a/30959058/1776942).
My point is that separate topics for control messages is a bad idea. The same goes for a single topic. In both cases there are no message delivery order guarantees.
The only solution to this particular issue that comes to my mind is message versioning, so that old (outdated) messages can simply be skipped when they are delivered after a message with a more recent version property.
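The versioning idea can be sketched as follows. This is a minimal sketch, not a real MQTT client: ControlMessage and Device are hypothetical names, and a real device would persist last_version across restarts.

```python
class ControlMessage:
    def __init__(self, version, state):
        self.version = version  # monotonically increasing, set by the publisher
        self.state = state      # "on" or "off"

class Device:
    """Applies a message only if it is newer than anything seen so far."""
    def __init__(self):
        self.last_version = 0
        self.state = "off"

    def handle(self, msg):
        if msg.version <= self.last_version:
            return False        # stale or duplicate message: skip it
        self.last_version = msg.version
        self.state = msg.state
        return True

# "on" (version 1) then "off" (version 2) are published,
# but delivered in reverse order:
device = Device()
device.handle(ControlMessage(2, "off"))
device.handle(ControlMessage(1, "on"))   # skipped as outdated
print(device.state)                       # off
```

Note that the comparison also drops QoS 1 duplicates for free, since a redelivered message carries an already-seen version.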
Am I missing something?
Is message versioning the only solution to this problem?
Am I missing something?
Most definitely. The example you brought up is a generic control system attached to a message-oriented scheme. There are a number of patterns that can be used in a message-based architecture. This article by Microsoft categorizes message patterns into two primary classes:
Commands and
Events
The most generic pattern of command behavior is to issue a command, then measure the state of the system to verify the command was carried out. If you forget to verify, your system has an open loop. Such open loops are (unfortunately) common in IT systems (because it's easy to forget), and often result in bugs and other bad behaviors such as the one described above. So, the proper way to handle a command is:
Issue the command
Inquire as to the state of the system
Evaluate next action
Events, on the other hand, are simply fired off. As the publisher of an event, it is not my business to worry about who receives the event, in what order, etc. Now, it should also be pointed out that the use of any decent message broker (e.g. RabbitMQ) generally carries strong guarantees that messages will be delivered in the order in which they were originally published. Note that this does not mean they will be processed in order.
So, if you treat a command as an event, your system is guaranteed to act up sooner or later.
Is message versioning the only solution to this problem?
Message versioning typically refers to a property of the message class itself, rather than a particular instance of the class. It is often used when multiple versions of a message-based API exist and must be backwards-compatible with one another.
What you are instead referring to is unique message identifiers. GUIDs are particularly handy for making sure that each message gets its own unique id. However, I would argue that de-duplication in message-based architectures is an anti-pattern. One of the consequences of using messaging is that duplicates are possible, so you should try to design your system's behaviors to be stateless and idempotent. If this is not possible, consider that messaging may not be the correct communication solution for the need.
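To illustrate why idempotency matters under at-least-once (QoS 1) delivery, here is a minimal sketch; set_on and toggle are hypothetical handlers, not part of any MQTT library:

```python
def set_on(state):
    # Idempotent: applying it twice leaves the light in the same state.
    state["on"] = True

def toggle(state):
    # NOT idempotent: a duplicate delivery flips the light back off.
    state["on"] = not state["on"]

s1 = {"on": False}
set_on(s1)
set_on(s1)      # duplicate delivery: harmless
s2 = {"on": False}
toggle(s2)
toggle(s2)      # duplicate delivery: wrong final state
print(s1["on"], s2["on"])   # True False
```

This is why "set to a target state" commands survive redelivery while "flip the current state" commands do not.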
Using the command-event dichotomy as an example, you could perform the following transaction:
The controller issues the command, assigning a unique identifier to the command.
The control system receives the command and turns on.
The control system publishes the "light on" event notification, containing the unique id of the command that was used to turn on the light.
The controller receives the notification and correlates it to the original command.
In the event that the controller doesn't receive notification after some timeout, the controller can retry the command. Note that "light on" is an idempotent command, in that multiple calls to it will have the same effect.
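The transaction above can be sketched roughly like this. Controller and Device are hypothetical stand-ins wired together directly; in a real system these calls would travel over MQTT topics through the broker.

```python
import uuid

class Controller:
    def __init__(self):
        self.device = None
        self.pending = {}            # command_id -> command name

    def issue(self, command):
        # Step 1: assign a unique identifier and issue the command.
        command_id = str(uuid.uuid4())
        self.pending[command_id] = command
        self.device.receive(command, command_id)
        return command_id

    def on_event(self, event, command_id):
        # Step 4: correlate the notification back to the original command.
        self.pending.pop(command_id, None)

    def retry_unacknowledged(self):
        # Called after a timeout: re-issue commands never acknowledged.
        # Safe only because "light on" is idempotent.
        for command_id, command in list(self.pending.items()):
            self.device.receive(command, command_id)

class Device:
    def __init__(self, controller):
        self.on = False
        self.controller = controller

    def receive(self, command, command_id):
        if command == "light on":
            self.on = True           # Step 2: carry out the command.
        # Step 3: publish the event, echoing the command's unique id.
        self.controller.on_event("light on", command_id)

controller = Controller()
device = Device(controller)
controller.device = device
controller.issue("light on")
print(device.on, len(controller.pending))   # True 0
```

An empty pending map after the exchange is the "closed loop": every issued command was confirmed by a correlated event.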
When the state changes, send the new state immediately, and after that periodically every x seconds. With this solution your system gets into the desired state after some time, even when it temporarily disconnects from the network (e.g. because of a low battery).
BTW: You did not miss anything.
Apart from the comment that most brokers don't support QoS 2 (I suspect you mean that a number of broker-as-a-service offerings, such as Amazon's AWS IoT service, don't support QoS 2), you have covered most of the major points.
If message order really is that important, then you will have to include some form of ordering marker in the message payload, be it a counter or a timestamp.
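As a sketch of the counter variant (the seq field and helper names are assumptions here, not part of MQTT), the publisher embeds a sequence number in the payload and the receiver keeps only the highest one seen:

```python
import itertools
import json

seq = itertools.count(1)

def make_payload(command):
    # Embed a monotonically increasing sequence number in the payload.
    return json.dumps({"seq": next(seq), "command": command})

def latest_command(payloads):
    """Receiver side: pick the payload with the highest sequence number."""
    newest = None
    for raw in payloads:
        msg = json.loads(raw)
        if newest is None or msg["seq"] > newest["seq"]:
            newest = msg
    return newest["command"]

on = make_payload("on")     # seq 1
off = make_payload("off")   # seq 2
# Out-of-order delivery still yields the correct final state:
print(latest_command([off, on]))   # off
```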
In a P2P system, what is the difference between:
sending a query message to a known node, which then sends back a response (I mean I explicitly contact the node by sending it a message to ask for something), and
a DHT which contains information about nodes and their resources (each record contains a key representing the IP address of a node, plus a list of its available resources). If I have access to this DHT (maybe I am a member) and I know the key or identifier of a given node: first, can I look directly at that node's record without needing to send it a message or a query (I mean contact it implicitly)? Second, if yes, how? I mean, how is the DHT represented physically, and how does a node update its information?
In case 1, if you are sure the remote node has the resource, then the DHT is useless.
In case 2, DHT helps you locate resources. Yes, you can take a look at the DHT record (if you have any) about the remote node. It will give you an indication of whether the resource might be available on that remote node.
Typically, DHTs are in-memory tables, or tables stored in a small local database. There are many ways to push the info to remote nodes; a common way is to push the info to random nodes.
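As a rough sketch of such an in-memory table (the publish/lookup helpers are hypothetical; a real DHT partitions the key space across nodes instead of keeping one dict):

```python
# Node id (e.g. a hash of its IP) -> set of advertised resources.
dht = {}

def publish(node_id, resources):
    """A node pushes (or refreshes) its own record into the table."""
    dht[node_id] = set(resources)

def lookup(node_id, resource):
    """Check the local record without contacting the node itself.
    Only an indication: the record may be stale."""
    return resource in dht.get(node_id, set())

publish("node-42", ["fileA", "fileB"])
print(lookup("node-42", "fileA"))   # True
print(lookup("node-42", "fileC"))   # False
```

The lookup answers "does the record say the node has it", which is exactly the indication described above; confirming availability still requires contacting the node.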
I have a doubt regarding multicasting in the Linux kernel. When multicast data arrives, the kernel checks the MFC (multicast forwarding cache), and if no matching entry is found the kernel passes a cache-miss control message and the header up to user space. My question is: what happens to the data packet? Suppose I deliberately do not want to keep the entry in the MFC, but I have some other table with the forwarding information and I want to use that one; what should I do then?
Regards,
Bhavin.
If a data packet arrives for which there is no matching MFC entry, then the data packet gets put into a queue. It will stay in that queue until either an MFC entry gets added that matches that packet or a timeout expires (10 seconds), whichever happens first. The queue itself has a limit of 10 entries, and once that limit is reached no more packets will get put onto the queue. In that case, unresolved packets will get dropped.
I don't think Linux supports multiple MFC tables (but I could be wrong). As an alternative, you could route these multicast packets in userspace by receiving them on a raw socket and then forwarding them out whatever interface you like. In fact, many IPv6 multicast routing daemons used a method like this before IPv6 multicast support on Linux matured.
You can check whether your kernel was compiled with multicast support using the command below (substitute your own kernel version):
grep -i "multicast" /boot/config-2.6.32-358.6.1.el6.x86_64