Defining the "enforcement" in the MQTT connection - eclipse-ditto

I am trying to publish a payload to an MQTT topic defined in the MQTT connection. However, I get this error in the enforcement log:
Ran into a failure when enforcing incoming signal: The configured filters could not be matched against the given target with ID 'mqttTestTopic'. Either modify the configured filter or ensure that the message is sent via the correct ID. ...
What is required:
"enforcement": {
"input": "{{ source:address }}",
"filters": [
"'"${TTN_APP_ID}"'/devices/{{ thing:name }}/up"
]
}
What I have tried:
"enforcement": {
"input": "mqttTestTopic",
"filters": [
"mqttTestTopic/org.eclipse.ditto.testing.demo:digital-twin"
]
}
I am confused about what must be defined in the input and filters. Can I get more clarification?

If you don't need the source enforcement, you can simply omit that configuration.
You would only need to configure it if you want to, e.g., ensure that a device may only update its "twin" (or thing, in Ditto) via a specific MQTT topic, e.g. one containing the device/thing ID or name.
That adds an additional security mechanism: device A is prohibited from updating the thing of device B.
For MQTT 3.1.1, the "input" can only have the value "{{ source:address }}" (for MQTT 5, "{{ header:<header-name> }}" can also be used), and the complete MQTT topic is then matched against the configured array of "filters".
The message is only accepted/processed if the MQTT topic matches one of the filters, which can make use of placeholders like {{ thing:id }}, as documented.
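Applied to the topic from the question, a minimal enforcement block could look like this (a sketch: it assumes the device actually publishes to mqttTestTopic/<thing-id>, e.g. mqttTestTopic/org.eclipse.ditto.testing.demo:digital-twin; adjust the filter to whatever topic structure you really use):
"enforcement": {
  "input": "{{ source:address }}",
  "filters": [
    "mqttTestTopic/{{ thing:id }}"
  ]
}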

Related

Tom-Select (or Selectize.js) Remote data for Option Groups

I am trying to use remote data for tom-select (https://tom-select.js.org/examples/optgroups/), but I am at a loss as to how to configure option groups with remote data. I have the select loading with remote data like this:
"optgroup": "1 Materials | 1.2 Gravel",
"value": 65,
"label": "1.2.1 Tanks"
From the docs I got the impression that you set optgroupField: 'optgroup' and the option groups would be set automatically. Do I need to add the optgroups array to my JSON data? I can't seem to find any examples of remote data with option groups anywhere.
tom-select shares much of its code with Selectize.js, so I am cross-tagging that as well.
I found a solution in the selectize world:
https://github.com/selectize/selectize.js/issues/151#issuecomment-111056161
I added a group id:
"optgroup_id": 13,
"optgroup": "1 Materials | 1.1 Pipe, valves & fittings",
"value": 5,
"label": "1.1.1 Line Pipe"
I reset the group field to optgroupField: 'optgroup_id', then added this after the JSON callback in load:
json.items.forEach((item) => {
  this.addOptionGroup(item['optgroup_id'], { label: item['optgroup'] });
});
this.refreshItems();
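For context, here is a minimal sketch of how that can sit inside the load callback (the '#select-materials' element, the /api/options URL and the {"items": [...]} response shape are assumptions, not from the tom-select docs):

new TomSelect('#select-materials', {
  valueField: 'value',
  labelField: 'label',
  optgroupField: 'optgroup_id',
  load: function (query, callback) {
    fetch('/api/options?q=' + encodeURIComponent(query))
      .then((response) => response.json())
      .then((json) => {
        // register each group before handing the options to tom-select,
        // so optgroup_id can be resolved to a group label
        json.items.forEach((item) => {
          this.addOptionGroup(item['optgroup_id'], { label: item['optgroup'] });
        });
        this.refreshItems();
        callback(json.items);
      })
      .catch(() => callback());
  }
});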
I am also playing around with adding a second optgroups json array with just the groups to avoid cycling through all the options.
I hope there is a better answer - will leave this open for that. Hoping this helps someone else.

Avro schema returned as bytes when using azure event hub source connector

We are now using Confluent 6.1.0 and trying to connect to Azure Event Hubs. We can see the data coming in, but the schema always shows as bytes. Can anyone give us an idea of how to solve this?
{
  "name": "eventhub_to_kafka_pullt12",
  "config": {
    "confluent.topic.bootstrap.servers": "broker:29092",
    "connector.class": "io.confluent.connect.azure.eventhubs.EventHubsSourceConnector",
    "kafka.topic": "evenhubt5",
    "tasks.max": "1",
    "max.events": "10",
    "azure.eventhubs.sas.keyname": "xxx",
    "azure.eventhubs.sas.key": "xxx",
    "azure.eventhubs.namespace": "xxx",
    "azure.eventhubs.hub.name": "xxx",
    "offsets.topic.replication.factor": "1",
    "confluent.license.topic.replication.factor": "1",
    "transaction.state.log.replication.factor": "1",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "value.converter.schema.registry.url": "http://xxxx:8081",
    "value.converter": "io.confluent.connect.avro.AvroConverter"
  }
}
We are using azure-eventhub-connector:1.2.1 and the error:
Since Event Hubs is Kafka-like, there is no information within the record keys or values that can be decoded to form a schema (compared to a database table, for example). Therefore, you need to use the ByteArrayConverter or StringConverter to consume the data into another Kafka cluster as-is. Using the AvroConverter would force the bytes/string Connect schemas into that Avro type.
If the original producer used the Schema Registry when producing the data into EventHubs, then downstream (after using Connect) consumers (such as Control Center) can still use AvroDeserializer with the same Schema Registry address as the bytes would be unchanged (the original schema would remain intact, and not "wrapped" within a bytes schema structure).
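In terms of the connector config above, that would mean swapping the value converter, roughly like this (a sketch; only the converter lines change, and the value.converter.schema.registry.url entry is then no longer needed):

"key.converter": "org.apache.kafka.connect.storage.StringConverter",
"value.converter": "org.apache.kafka.connect.converters.ByteArrayConverter"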

Connecting Eclipse Hono to Ditto - "description":"Check if all required JSON fields were set."},"status":400}" Error

I was able to successfully connect Hono to Ditto using the AMQP adapters, and I got the following messages in the log. The value sent from the demo device registered in Hono is successfully received and updated in the Ditto thing.
connectivity_1_ad306c4c315b | 2019-07-08 21:12:05,434 INFO [ID:AMQP_NO_PREFIX:TelemetrySenderImpl-35] o.e.d.s.c.m.a.AmqpPublisherActor akka://ditto-cluster/system/sharding/connection/1/Insight-connection-1/pa/$a/c1/amqpPublisherActor2 - Response dropped, missing replyTo address: UnmodifiableExternalMessage [headers={orig_adapter=hono-http, device_id=4716, correlation-id=ID:AMQP_NO_PREFIX:TelemetrySenderImpl-35, content-type=application/vnd.eclipse.ditto+json, etag="hash:18694a24", orig_address=/telemetry, source=nginx:ditto}, response=true, error=false, authorizationContext=null, topicPath=ImmutableTopicPath [namespace=org.eclipse.ditto, id=4716, group=things, channel=twin, criterion=commands, action=modify, subject=null, path=org.eclipse.ditto/4716/things/twin/commands/modify], enforcement=null, headerMapping=null, sourceAddress=null, payloadType=TEXT, textPayload={"topic":"org.eclipse.ditto/4716/things/twin/commands/modify","headers":{"orig_adapter":"hono-http","device_id":"4716","correlation-id":"ID:AMQP_NO_PREFIX:TelemetrySenderImpl-35","content-type":"application/vnd.eclipse.ditto+json","etag":"\"hash:18694a24\"","orig_address":"/telemetry","source":"nginx:ditto"},"path":"/features","value":null,"status":204}, bytePayload=null']
things-search_1_8f2ad3dda4bf | 2019-07-08 21:12:05,593 INFO [] o.e.d.s.t.p.w.s.EnforcementFlow - Updating search index of <1> things
things-search_1_8f2ad3dda4bf | 2019-07-08 21:12:05,598 INFO [] o.e.d.s.t.p.w.s.EnforcementFlow - Got SudoRetrieveThingResponse <1> times
things-search_1_8f2ad3dda4bf | 2019-07-08 21:12:05,725 INFO [] a.s.Materializer akka.stream.Log(akka://ditto-cluster/user/thingsSearchRoot/searchUpdaterRoot/StreamSupervisor-21) - [SearchUpdaterStream/BulkWriteResult] Element: BulkWriteResult[matched=1,upserts=0,inserted=0,modified=1,deleted=0]
But then I tried to make a new connection (Hono installed on a different server, and Ditto hosted on the same server where the above successful connection was made). The connection is established, and when I send messages from the demo devices registered in Hono to Ditto, I get the following response:
vigkam@srvgal89:~$ curl -X POST -i -u sensor0101@tenantAdapters:mylittle -H 'Content-Type: application/json' -d '{"temp": 23.09, "hum": 45.85}' http://srvgal89.deri.ie:8080/telemetry
HTTP/1.1 202 Accepted
content-length: 0
And when I retrieve the connection metrics, I can see the metric count increase according to the number of messages sent from Hono.
But the only problem is that the sensor values (temperature and humidity, as in the above curl command) are not getting updated in the Ditto thing.
I got the error message below in the log, which says "description":"Check if all required JSON fields were set."},"status":400}"
connectivity_1_ad306c4c315b | 2019-07-08 21:34:17,640 INFO [ID:AMQP_NO_PREFIX:TelemetrySenderImpl-13] o.e.d.s.c.m.a.AmqpPublisherActor akka://ditto-cluster/system/sharding/connection/23/Gal-Connection-10/pa/$a/c1/amqpPublisherActor2 - Response dropped, missing replyTo address: UnmodifiableExternalMessage [headers={content-type=application/vnd.eclipse.ditto+json, orig_adapter=hono-http, orig_address=/telemetry, device_id=4816, correlation-id=ID:AMQP_NO_PREFIX:TelemetrySenderImpl-13}, response=true, error=true, authorizationContext=null, topicPath=ImmutableTopicPath [namespace=unknown, id=unknown, group=things, channel=twin, criterion=errors, action=null, subject=null, path=unknown/unknown/things/twin/errors], enforcement=null, headerMapping=null, sourceAddress=null, payloadType=TEXT, textPayload={"topic":"unknown/unknown/things/twin/errors","headers":{"content-type":"application/vnd.eclipse.ditto+json","orig_adapter":"hono-http","orig_address":"/telemetry","device_id":"4816","correlation-id":"ID:AMQP_NO_PREFIX:TelemetrySenderImpl-13"},"path":"/","value":{"status":400,"error":"json.field.missing","message":"JSON did not include required </path> field!","description":"Check if all required JSON fields were set."},"status":400}, bytePayload=null']
Please let me know if I am missing something. Thank you in advance!
More information:
The thingId in Ditto is org.eclipse.ditto:4816,
Tenant Id in Hono - tenantAdapters,
Device Registered in Hono - 4816 (tenantAdapters),
Auth Id of the device - sensor0101,
ConnectionId between Hono and Ditto - Gal-Connection-10
You are probably getting this failure because Ditto can't parse non-Ditto-Protocol messages. From reading your logs, I think your Ditto thing currently looks like this:
{
  "thingId": "org.eclipse.ditto:4716",
  "features": null
}
You could verify this by using a GET request to http://<your-ditto-address>:<your-ditto-gateway-port>/api/2/things/org.eclipse.ditto:4716.
Since you probably want to store the temperature and humidity in a feature of your thing, it would be best not to leave features as null, but to already provide a feature with an ID for the value. Do this by creating a feature, e.g. with ID 'environment', via a PUT to http://<your-ditto-address>:<your-ditto-gateway-port>/api/2/things/org.eclipse.ditto:4716/features/environment with content {} (an example request is shown after the JSON below). Afterwards your thing should look like this:
{
  "thingId": "org.eclipse.ditto:4716",
  "features": {
    "environment": {}
  }
}
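For reference, the PUT described above could look roughly like this (host, port and credentials are placeholders):

curl -X PUT -u <user>:<password> -H 'Content-Type: application/json' -d '{}' http://<your-ditto-address>:<your-ditto-gateway-port>/api/2/things/org.eclipse.ditto:4716/features/environment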
Now back to your initial question: Ditto only understands Ditto Protocol messages and therefore doesn't know what to do with your plain JSON object.
To solve this problem you have two options:
1. Add a payload mapping script for incoming messages to your connection.
2. Publish a Ditto Protocol message instead of the plain JSON object. That would look something like this:
vigkam@srvgal89:~$ curl -X POST -i -u sensor0101@tenantAdapters:mylittle -H 'Content-Type: application/json' -d '{ "topic": "org.eclipse.ditto/4716/things/twin/commands/modify", "path": "/features/environment", "value": {"temp": 23.09, "hum": 45.85} }' http://srvgal89.deri.ie:8080/telemetry
Note that I have specified the path /features/environment which will update the value of the environment feature of your thing.
Messages processed by Eclipse Ditto via AMQP (e.g. from Hono) must be in the so-called Ditto Protocol, a JSON-based protocol which contains, among other JSON fields, the path field, which is missing in your JSON (hence the error message "JSON did not include required </path> field!").
So you have at least two options to proceed:
Instead of your JSON format {"temp": 23.09, "hum": 45.85}, send a message in the Ditto Protocol, e.g. have a look here for an example.
Use the payload mapping feature of Ditto to specify a JavaScript function that is invoked on all incoming messages from Hono and transforms them into valid Ditto Protocol messages; a sketch of such a mapping follows below.
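For the payload mapping option, here is a sketch of such a JavaScript mapping function (it follows the structure of Ditto's documented mapToDittoProtocolMsg / Ditto.buildDittoProtocolMsg helpers; deriving the thing name from the device_id header and writing to /features/environment/properties are assumptions for this example):

function mapToDittoProtocolMsg(headers, textPayload, bytePayload, contentType) {
  // incoming payload, e.g. {"temp": 23.09, "hum": 45.85}
  let jsonData = JSON.parse(textPayload);
  // assumption: the Hono device id (e.g. "4816") is also the thing name
  let thingName = headers['device_id'];
  let value = {
    temp: jsonData.temp,
    hum: jsonData.hum
  };
  return Ditto.buildDittoProtocolMsg(
    'org.eclipse.ditto',                  // namespace
    thingName,                            // thing name
    'things',                             // group
    'twin',                               // channel
    'commands',                           // criterion
    'modify',                             // action
    '/features/environment/properties',   // path to update
    headers,                              // dittoHeaders
    value                                 // value written at the path
  );
}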

Send one or more events to EventHub Azure

I am using a Logic App (LA) on Azure to query my db every 3 minutes.
Then the LA uses an EventHub connector to send my query result, the table, to Azure Stream Analytics (ASA).
Normally the result table has around 100 rows, and definitely many more at peak times.
I thought that sending one Event Hub message per row would incur too many calls and hence perhaps delay ASA's logic(?)
My questions are:
How do I send multiple messages through the LA's Event Hub action connector?
I see there's one option, Send one or more events to Event Hub, but I wasn't able to figure out what to put in the content. I tried putting in the table (the array). The following request body works, e.g.:
[
  {
    "ContentData": "dHhuX2FnZV9yZXN1bHQ=",
    "Properties": {
      "tti_IngestTime": "2018-09-26T20:10:55.4480047+00:00",
      "tti_SLAThresholdMins": 330,
      "MinsPastSla": -6
    }
  },
  {
    "ContentData": "AhuBA2FnZV9yZXN1bHQ=",
    "Properties": {
      "tti_IngestTime": "2018-09-26T20:10:55.4480047+00:00",
      "tti_SLAThresholdMins": 230,
      "MinsPastSla": -5
    }
  }
]
Is there any performance concern with sending 100 events one by one to ASA?
Thank you!
I seem to have found the answer.
(1) The JSON I am sending looks correct, and the POST request to Event Hub is successful.
The post body is [{}, {}, {}], which is the correct format.
(2) That ASA couldn't read the stream was most likely because it couldn't deserialize the messages from Event Hub.
I happened to change how I encode the base64 string for the "ContentData" sent to the Event Hub. The message sent to the EH looks like this:
{
  "ContentData": "some base64() string",
  "Properties": {}
},
For ASA to be able to deserialize the message, the base64() needs to encode the "Properties" value, not anything else.
It didn't work before because I was encoding a random string instead of the value of "Properties".
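Illustrated outside of the Logic App designer, the encoding described above amounts to something like this (a Node.js sketch; inside the Logic App the equivalent is the base64() expression function, and the row object here just stands in for one row of the query result):

// assumption: each query row becomes one event; ContentData carries the
// base64-encoded JSON that ASA later deserializes
const row = { tti_IngestTime: "2018-09-26T20:10:55.4480047+00:00", tti_SLAThresholdMins: 330, MinsPastSla: -6 };
const event = {
  ContentData: Buffer.from(JSON.stringify(row)).toString("base64"),
  Properties: row
};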

Stream Analytics to Event Hub - Unexpectedly concatenating events

I have a Stream Analytics job that is consuming an Event Hub of Avro messages (we'll call this RawEvents), transforming/flattening the messages and firing them into a separate Event Hub (we'll call this FormattedEvents).
Each EventData instance in RawEvents consists of a single top-level JSON object that has an array of more detailed events. This is a contrived example:
[{ "Events": [{ "dataOne": 123.0, "dataTwo": 234.0,
"subEventCode": 3, "dateTimeLocal": 1482170771, "dateTimeUTC":
1482192371 }, { "dataOne": 456.0, "dataTwo": 789.0,
"subEventCode": 20, "dateTimeLocal": 1482170771, "dateTimeUTC":
1482192371 }], "messageType": "myDeviceType-Events", "deviceID":
"myDevice", }]
The Stream Analytics job flattens the results and unpacks subEventCode, which is a bitmask. The results look something like this:
{"messagetype":"myDeviceType-Event","deviceid":"myDevice",eventid:1,"dataone":123,"datatwo":234,"subeventcode":6,"flag1":0,"flag2":1,"flag3":1,"flag4":0,"flag5":0,"flag6":0,"flag7":0,"flag8":0,"flag9":0,"flag10":0,"flag11":0,"flag12":0,"flag13":0,"flag14":0,"flag15":0,"flag16":0,"eventepochlocal":"2016-12-06T17:33:11.0000000Z","eventepochutc":"2016-12-06T23:33:11.0000000Z"} {"messagetype":"myDeviceType-Event","deviceid":"myDevice",eventid:2,"dataone":456,"datatwo":789,"subeventcode":8,"flag1":0,"flag2":0,"flag3":0,"flag4":1,"flag5":0,"flag6":0,"flag7":0,"flag8":0,"flag9":0,"flag10":0,"flag11":0,"flag12":0,"flag13":0,"flag14":0,"flag15":0,"flag16":0,"eventepochlocal":"2016-12-06T17:33:11.0000000Z","eventepochutc":"2016-12-06T23:33:11.0000000Z"}
I'm expecting to see two EventData instances when I pull messages from the FormattedEvents Event Hub. What I'm getting is a single EventData with both "flattened" events in the same message. This is expected behavior when targeting blob storage or Data Lake, but surprising when targeting an Event Hub. My expectation was for behavior similar to a Service Bus.
Is this expected behavior? Is there a configuration option to force the behavior if so?
Yes, this is expected behavior currently. The intent is to improve throughput by sending as many events as possible in one Event Hub message (EventData).
Unfortunately, there is no config option to override this behavior as of today. One possible workaround worth trying is to set the output partition key to something highly unique (i.e. add this column to your query: GetMetadataPropertyValue(ehInput, "EventId") AS outputpk), then specify "outputpk" as the PartitionKey in your output Event Hub's ASA settings.
Let me know if that helps.
cheers
Chetan
I faced the same problem. Thanks for the answers about manually formatting the input message. My colleague and I solved it with a few lines of code that remove the line feeds and carriage returns, replace "}{" with "},{", and turn the payload into an array by adding "[" and "]" at both ends.
string modifiedMessage = myEventHubMessage.Replace("\n","").Replace("\r","");
modifiedMessage = "[" + modifiedMessage.Replace("}{","},{") + "]";
And then the input is deserialized into a list of objects according to its data structure:
List<TelemetryDataPoint> newDataPoints = new List<TelemetryDataPoint>();
try
{
    newDataPoints = Newtonsoft.Json.JsonConvert.DeserializeObject<List<TelemetryDataPoint>>(modifiedMessage);
    ....
    ....
