How to get historical sensor data of a registered Hono Device - eclipse-hono

I am trying to send the below sensor values from the registered device in Hono to Ditto in order to form the digital twin of the registered device.
I am using the below command.
curl -X POST -i -u sensor10#tenantAllAdapters:mylittlesecret -H 'Content-Type: application/json' -d '{"temp": 2307, "hum": 40000}' http://localhost:8080/telemetry
HTTP/1.1 202 Accepted
content-length: 0
I am able to receive the data in Ditto. How can I get all the historical values that were sent from the device to Ditto over a period of time?

In Eclipse Ditto, you can't get historical data.
Ditto is about representing the current state of the digital twin and about communicating directly with the real device, applying authorization in both cases.
Historical values are not persisted in Ditto.
If you need access to historical data (which is completely understandable, a very normal use case), you could, for example, add a connection in Ditto to an Apache Kafka broker which receives all twin change events. From that Kafka topic you can put the historical data somewhere better suited for persisting and querying it, e.g. into a time-series database like InfluxDB.
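The bridge from twin change events to a time-series store can be sketched as a small mapping step. The envelope fields ("topic", "value") follow the Ditto protocol; the measurement name "device_telemetry" and the tuple layout are made-up examples of what you might hand to an InfluxDB client:

```python
import json
import time

def event_to_points(event_json, received_at=None):
    """Map a Ditto twin change event (Ditto protocol envelope) into
    (measurement, thing_id, field, value, timestamp) tuples suitable
    for writing to a time-series DB."""
    event = json.loads(event_json)
    ts = received_at if received_at is not None else int(time.time())
    # The topic starts with "<namespace>/<thing name>/things/..."
    thing_id = event["topic"].split("/things/")[0].replace("/", ":", 1)
    return [("device_telemetry", thing_id, field, value, ts)
            for field, value in event.get("value", {}).items()
            if isinstance(value, (int, float))]

sample = json.dumps({
    "topic": "my.ns/sensor10/things/twin/events/modified",
    "path": "/features/environment/properties",
    "value": {"temp": 2307, "hum": 40000},
})
points = event_to_points(sample, received_at=1700000000)
```

A consumer on the Kafka connection would run each event through such a mapper and batch-write the points to InfluxDB.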
That's also how it's done (putting the data in a service optimized for historical data) in the commercial solution from Bosch which builds on Eclipse Ditto, the Bosch IoT Suite.

Related

How to enforce the order of messages passed to an IoT device over MQTT via a cloud-based system (API design issue)

Suppose I have an IoT device which I'm about to control (let's say switch on/off) and monitor (e.g. collect temperature readings). It seems MQTT could be the right fit. I could publish messages to the device to control it and the device could publish messages to a broker to report temperature readings. So far so good.
The problems start to occur when I try to design the API to control the device.
Let's say the device subscribes to two topics:
/device-id/control/on
/device-id/control/off
Then I publish messages to these topics in some order. But given the fact that messaging is typically an asynchronous process there are no guarantees on the order of messages received by the device.
So in case two messages are published in the following order:
/device-id/control/on
/device-id/control/off
they could be received in the reversed order leaving the device turned on, which can have dramatic consequences, depending on the context.
Of course the API could be designed in some other way, for example there could be just one topic
/device-id/control
and the payload of individual messages would carry the meaning of an individual message (on/off). So in case messages are published to this topic in a given order they are expected to be received in the exact same order on the device.
But what if the order of publishes to individual topics cannot be guaranteed? Suppose the following architecture of a system for IoT devices:
                       / control service \
application -> broker -> control service -> broker -> IoT device
                       \ control service /
The components of the system are:
an application which effectively controls the device by publishing messages to a broker
a typical message broker
a control service with some business logic
The important part is that as in most modern distributed systems the control service is a distributed, multi instance entity capable of processing multiple control messages from the application at a time. Therefore the order of messages published by the application can end up totally mixed when delivered to the IoT device.
Now given the fact that most MQTT brokers only implement QoS0 and QoS1 but no QoS2 it gets even more interesting as such control messages could potentially be delivered multiple times (assuming QoS1 - see https://stackoverflow.com/a/30959058/1776942).
My point is that separate topics for control messages is a bad idea. The same goes for a single topic. In both cases there are no message delivery order guarantees.
The only solution to this particular issue that comes to my mind is message versioning, so that old (outdated) messages can simply be skipped when they are delivered after a message with a more recent version property.
Am I missing something?
Is message versioning the only solution to this problem?
Am I missing something?
Most definitely. The example you brought up is a generic control system, being attached to some message-oriented scheme. There are a number of patterns that can be used when referring to a message-based architecture. This article by Microsoft categorizes message patterns into two primary classes:
Commands and
Events
The most generic pattern of command behavior is to issue a command, then measure the state of the system to verify the command was carried out. If you forget to verify, your system has an open loop. Such open loops are (unfortunately) common in IT systems (because it's easy to forget), and often result in bugs and other bad behaviors such as the one described above. So, the proper way to handle a command is:
Issue the command
Inquire as to the state of the system
Evaluate next action
Events, on the other hand, are simply fired off. As the publisher of an event, it is not my business to worry about who receives the event, in what order, etc. Now, it should also be pointed out that the use of any decent message broker (e.g. RabbitMQ) generally carries strong guarantees that messages will be delivered in the order which they were originally published. Note that this does not mean they will be processed in order.
So, if you treat a command as an event, your system is guaranteed to act up sooner or later.
Is message versioning the only solution to this problem?
Message versioning typically refers to a property of the message class itself, rather than a particular instance of the class. It is often used when multiple versions of a message-based API exist and must be backwards-compatible with one another.
What you are instead referring to is unique message identifiers. Guids are particularly handy for making sure that each message gets its own unique id. However, I would argue that de-duplication in message-based architectures is an anti-pattern. One of the consequences of using messaging is that duplicates are possible, so you should try to design your system behaviors to be stateless and idempotent. If this is not possible, it should be considered that messaging may not be the correct communication solution for the need.
Using the command-event dichotomy as an example, you could perform the following transaction:
The controller issues the command, assigning a unique identifier to the command.
The control system receives the command and turns on.
The control system publishes the "light on" event notification, containing the unique id of the command that was used to turn on the light.
The controller receives the notification and correlates it to the original command.
In the event that the controller doesn't receive notification after some timeout, the controller can retry the command. Note that "light on" is an idempotent command, in that multiple calls to it will have the same effect.
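The closed-loop transaction above can be sketched as follows. The transport interface, topic name, and method names are illustrative, not from any real library:

```python
import uuid

class Controller:
    """Sketch of the command/event correlation loop described above."""
    def __init__(self, transport):
        self.transport = transport  # anything with publish(topic, payload)
        self.pending = {}           # command_id -> command still unconfirmed

    def issue(self, command):
        # Step 1: issue the command with a unique identifier.
        command_id = str(uuid.uuid4())
        self.pending[command_id] = command
        self.transport.publish("device/control", {"id": command_id, "cmd": command})
        return command_id

    def on_event(self, event):
        # Step 4: correlate the notification back to the original command.
        self.pending.pop(event.get("correlates"), None)

    def retry_unconfirmed(self):
        # After a timeout: re-issue every command that was never confirmed.
        # Safe only because "light on" is idempotent.
        for command_id, command in list(self.pending.items()):
            self.transport.publish("device/control", {"id": command_id, "cmd": command})

class RecordingTransport:
    def __init__(self):
        self.published = []
    def publish(self, topic, payload):
        self.published.append((topic, payload))

transport = RecordingTransport()
controller = Controller(transport)
cid = controller.issue("light on")        # command goes out
controller.on_event({"correlates": cid})  # "light on" event closes the loop
controller.retry_unconfirmed()            # nothing left to retry
```

Because retries only re-send unconfirmed commands and the command is idempotent, duplicate delivery at QoS 1 does no harm.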
When state changes, send the new state immediately, and after that periodically every x seconds. With this solution your system gets into the desired state after some time, even when it temporarily disconnects from the network (low battery).
BTW: You did not miss anything.
Apart from the comment that most brokers don't support QoS 2 (I suspect you mean that a number of broker-as-a-service offerings, such as Amazon's AWS IoT service, don't support QoS 2), you have covered most of the major points.
If message order really is that important then you will have to include some form of ordering marker in the message payload, be it a counter or a timestamp.

How to stay connected to a Bluetooth LE bathroom scales in Linux

I just got a Bluetooth LE/Smart bathroom scales (Model Sanitas SBF 70). I can read data from it using the following command:
gatttool --device=(btaddr) -I
connect
Then when I stand on it, I get multiple notification messages like this:
"Notification handle = 0x002e value: e7 58 01 05 e9"
where the last two bytes are the mass in 50 g increments.
I'd like to integrate this into a few applications using a TCP or UDP socket service that broadcasts these messages to any listening clients.
But after some research I have no idea what's the best way to stay connected all the time (the connection times out after a few minutes). Or, alternatively, to be able to re-establish a connection when the scales are used (I notice lots of activity from 'hcitool lescan' whenever someone steps on the scales).
I don't care what language / library is used. If I can push this to a TCP/UDP socket it will be trivial for other applications to consume the information.
The answer is straightforward: You don't.
Your scale is most likely battery powered. Therefore the Bluetooth communications will only be enabled for a short period of time after having measured your weight. Your application just needs to try connecting to the scale over and over (catch any "unable to connect timeouts") until you step on it. And when connected get the data from it before BLE is shut down again. In pseudo code:
while true:
    while not_connected:
        try to connect
    receive notifications
    disconnect
gatttool wrapped by the Python module pygatt is perfectly usable to solve this challenge.
In my case the scale data (the preceding 30 weights) is transferred after enabling indications on 3 different characteristics.
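The pseudo code above could look roughly like this with pygatt. The characteristic UUID is device-specific and left as a parameter, and the big-endian interpretation of the last two bytes is an assumption based on the sample notification in the question (0x05e9 = 1513 increments = 75.65 kg):

```python
import time

def parse_weight_kg(value):
    """The scale reports mass in 50 g increments in the last two bytes
    (big-endian, judging by the sample notification in the question)."""
    return int.from_bytes(bytes(value[-2:]), "big") * 0.05

def run(mac_address, characteristic_uuid):
    # pygatt import kept local so the parser above works without BLE hardware.
    import pygatt
    from pygatt.exceptions import NotConnectedError

    adapter = pygatt.GATTToolBackend()
    adapter.start()

    def on_notification(handle, value):
        print("weight: %.2f kg" % parse_weight_kg(value))

    while True:                        # the retry loop from the pseudo code
        try:
            device = adapter.connect(mac_address)
            device.subscribe(characteristic_uuid, callback=on_notification)
            time.sleep(30)             # stay subscribed while the scale is awake
        except NotConnectedError:
            time.sleep(1)              # scale is asleep: try again shortly
```

From `on_notification` it is then trivial to push the parsed weight out over a TCP/UDP socket for other applications to consume.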

Global Platform CRS and card specific commands

I'm trying to read some data from the secure element in the SIM of a global platform 2.2 card.
My SELECT command for the applet succeeds (90 00) with some PDOL data in the response. However, when I send GET PROCESSING OPTIONS it returns 6D00. It seems the Security Domain is still in charge and does not understand the GPO command.
My investigation says applet-specific commands need to go over a secure channel, while the CRS runs on the basic channel. Is this requirement true even if the card is not being accessed over the contactless interface?
First of all, verify that your applet is selected on the same I/O interface and the same logical channel on which you are sending the command.
The status word '6D00' indicates that the command was sent to another applet or Security Domain that does not understand it; it does not indicate a secure-channel initiation requirement.
And yes, if you are communicating with a secured card such as a Secure Element, you need to initiate an SCP session.
The SELECT APDU should be sent first with the correct AID.
If the AID belongs to an EMV card, the response should come with status SW 90 00 and a data area. The Processing Options Data Object List in the data area should be properly parsed, and GET PROCESSING OPTIONS should be constructed with the required parameters (Terminal Transaction Qualifiers; Amount, Authorised; Unpredictable Number; etc.).
Run the PDOL through a TLV utility and look at the options list:
9F38 Processing Options Data Object List (PDOL)
9F66049F02069F37045F2A029A03
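A PDOL is just a concatenation of tag + length pairs (no values), so it can be decoded with a few lines. This sketch assumes short-form lengths (below 0x80), which holds for typical PDOLs like the one above:

```python
def parse_pdol(hex_string):
    """Parse a PDOL (BER-TLV tag + length pairs, no values) into a list of
    (tag, length) tuples."""
    data = bytes.fromhex(hex_string)
    i, entries = 0, []
    while i < len(data):
        start = i
        # Multi-byte tag: low 5 bits of the first byte all set.
        if data[i] & 0x1F == 0x1F:
            i += 1
            while data[i] & 0x80:   # continuation bit on subsequent tag bytes
                i += 1
        i += 1
        tag = data[start:i].hex().upper()
        entries.append((tag, data[i]))
        i += 1
    return entries

parse_pdol("9F66049F02069F37045F2A029A03")
# [('9F66', 4), ('9F02', 6), ('9F37', 4), ('5F2A', 2), ('9A', 3)]
```

That output tells you GET PROCESSING OPTIONS needs TTQ (9F66, 4 bytes), Amount, Authorised (9F02, 6 bytes), Unpredictable Number (9F37, 4 bytes), Transaction Currency Code (5F2A, 2 bytes) and Transaction Date (9A, 3 bytes), concatenated in that order in the command data.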

Send and receive data in same URB in USB possible? LINUX

I am developing a USB driver in Linux kernel space, where my USB interface has two bulk endpoints (IN and OUT). I am using one URB to send and receive data. Can I use the same URB (from usb_alloc_urb()) for sending and receiving data?
I am using the below steps to send and receive data using urb
usb_alloc_urb() ---> created only once
usb_fill_bulk_urb() ---> using usb_sndbulkpipe
usb_submit_urb() ---> submitted successfully
usb_fill_bulk_urb() ---> using usb_rcvbulkpipe
usb_submit_urb() ---> at this point I get ERROR -16
Are the above steps correct/possible?
Thank you
You cannot use the same URB for two transfers at the same time (error -16 is -EBUSY: the URB is still in flight).
To be able to reuse a URB, you must wait until it has completed (successfully or with an error).
To use full-duplex transfers, you need two URBs.
To get high transfer rates, you must pipeline URBs, i.e., you need even more.

possible problems of an sms gateway?

What could be the possible problems of an SMS gateway?
What if you are trying to create a system with a large volume of transactions?
Is data loss common? Are there any other known issues with SMS gateways?
I also notice this post is old, but hope this helps.
You haven't mentioned how you are sending the messages, i.e. via a GSM SIM or through an aggregator, so I am guessing you are asking about database storage.
The way we do it is to store all the messages in a MySQL table, tbl_sms_queue for instance; each message is assigned to a campaign and also has a status flag ENUM (pending or sent).
table sample:
tbl_sms_queue
- pk_message_id INT PK AI
- fk_user_id INT
- fk_campaign_id INT
- fk_sender_name INT
...
- status ENUM('0','1') DEF 0
Then our Gearman workers parse through the db; we can send approx. 4500-5000 messages per minute.
FYI: I am an architect of a bulk SMS platform and our database backend is a custom installation of clustered MySQL with a Gearman implementation.
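The queue-table pattern above can be sketched end to end. This uses SQLite instead of MySQL for self-containment, keeps only a few of the columns from the sample (recipient and body are hypothetical names), and shows the claim-and-mark step a worker would perform:

```python
import sqlite3

# Simplified version of the tbl_sms_queue table from the answer above.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE tbl_sms_queue (
        pk_message_id INTEGER PRIMARY KEY AUTOINCREMENT,
        fk_campaign_id INTEGER,
        recipient TEXT,
        body TEXT,
        status TEXT DEFAULT '0'   -- '0' = pending, '1' = sent
    )
""")
conn.execute("INSERT INTO tbl_sms_queue (fk_campaign_id, recipient, body) "
             "VALUES (1, '+15550100', 'hello')")

def claim_pending(conn, batch_size=100):
    """What each worker does: grab a batch of pending rows and mark them sent."""
    rows = conn.execute(
        "SELECT pk_message_id, recipient, body FROM tbl_sms_queue "
        "WHERE status = '0' LIMIT ?", (batch_size,)).fetchall()
    conn.executemany("UPDATE tbl_sms_queue SET status = '1' "
                     "WHERE pk_message_id = ?", [(r[0],) for r in rows])
    return rows

batch = claim_pending(conn)
```

In a real multi-worker setup the select-then-update would need to be atomic (e.g. `SELECT ... FOR UPDATE` in MySQL) so two Gearman workers never claim the same row.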
I don't think "data loss" is a concern. I think the problem you can encounter is that the sender/receiver can only work with one message at a time.
Sending/receiving an SMS takes X seconds, and if you indeed send/receive a lot of these short messages, your queue will grow rapidly and you will soon need to be able to send/receive multiple SMS at once, using more phone lines.