Subscribe to all tenants using a wildcard in Eclipse Hono

I'm trying to consume data from Hono, following the "Starting a consumer" guide in the Hono documentation.
I'm currently trying to subscribe to all tenants by adding --tenant.id=* at the end of the mvn command. This results in the following command:
mvn spring-boot:run -Drun.arguments=--hono.client.host=localhost,--hono.client.username=consumer#HONO,--hono.client.password=verysecret,--destination.TopicTemplate=gw/\!{tenant}/\!{device}/alp,--destination.Host=localhost,--destination.Port=11883,--tenant.id=*
I'm not getting any messages when I subscribe like this. When I subscribe using the example command (which covers only DEFAULT_TENANT), I do consume the messages.
The current user permission looks like this:
"consumer#HONO": {
"mechanism": "PLAIN",
"password": "verysecret",
"authorities": [ "application" ]
}
The current application role looks like this:
"application": [
{
"resource": "telemetry/*",
"activities": [ "READ" ]
},
{
"resource": "event/*",
"activities": [ "READ" ]
},
{
"resource": "control/*",
"activities": [ "READ", "WRITE" ]
}
Both of them are still the originals from the Hono GitHub repository.
EDIT: The consumer also subscribes to event/tenant. In my case this is event/*. Events published on the topics event/DEFAULT_TENANT and event/MY_TENANT are consumed. However, the consumer for telemetry/* does not seem to get registered.

I've finally found out what was going on.
The message was being blocked in the Qpid Dispatch Router because of the following error: "Parse tree match not found".
This can be resolved by changing the Qpid configuration. In this configuration you should be able to find the following records:
["linkRoute", {
"prefix": "event/",
"direction": "in",
"connection": "broker"
}],
["linkRoute", {
"prefix": "event/",
"direction": "out",
"connection": "broker"
}],
["address", {
"prefix": "telemetry/",
"distribution": "balanced"
}],
This creates link routes (in and out) for the event address but not for the telemetry address. Adding the corresponding records for the telemetry address resolves the problem:
["linkRoute", {
"prefix": "event/",
"direction": "in",
"connection": "broker"
}],
["linkRoute", {
"prefix": "event/",
"direction": "out",
"connection": "broker"
}],
["linkRoute", {
"prefix": "telemetry/",
"direction": "in",
"connection": "broker"
}],
["linkRoute", {
"prefix": "telemetry/",
"direction": "out",
"connection": "broker"
}],
["address", {
"prefix": "telemetry/",
"distribution": "balanced"
}],

Hono does not (as of now) support consuming messages of all tenants. A consumer is always scoped to a single tenant only. This is also reflected in the (northbound) Telemetry and Event API specifications.
The usage of wildcard characters in order to receive data for multiple/all tenants is not supported. The change you have made to the Dispatch Router configuration may have led you to believe that it does indeed work. However, defining the telemetry address to use link routing instead of the default message routing has consequences you should be aware of:
All telemetry messages will be routed to the message broker (Artemis) instead of being routed directly to consumers attached to the Dispatch Router. This means that all messages will be written to a queue/topic in Artemis. Depending on the Artemis configuration this might also mean that (telemetry) messages get persisted which will have quite a negative impact on throughput.
Your clients/consumers will now explicitly depend on the (Artemis) broker's support for wildcards being used in AMQP 1.0 link source addresses to receive messages from multiple addresses. While this might be what you want to achieve in the first place, beware that it ties your application to the specific implementation of the AMQP Messaging Network (in this case Artemis) which is not part of Hono.
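Given that constraint, the usual approach is to run one consumer per tenant. A minimal sketch as a shell loop around the same example command (DEFAULT_TENANT and MY_TENANT stand in for your actual tenant ids; the --destination.* arguments are omitted for brevity):
# start one consumer process per tenant; the tenant list is illustrative
for TENANT in DEFAULT_TENANT MY_TENANT; do
  mvn spring-boot:run "-Drun.arguments=--hono.client.host=localhost,--hono.client.username=consumer#HONO,--hono.client.password=verysecret,--tenant.id=$TENANT" &
done
wait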

Related

Azure Insights: webapp request time

I have enabled HTTP request logging for my web app in Azure Insights. I don't understand why there is such a large response-time disparity between identical requests received just a few seconds or a few minutes apart. A log example with three records:
{ "time": "2022-07-20T06:08:41.7548330Z", "EventTime": "2022-07-20T06:08:41.7548330Z", "resourceId": "***", "properties": "{\"CsHost\":"***",\"CIp\":\"195.235.205.153\",\"SPort\":\"80\",\"CsUriStem\":\"\\/mensajes\",\"CsUriQuery\":\"desde=20220705T125027\",\"CsMethod\":\"GET\",\"TimeTaken\":1426,\"ScStatus\":\"200\",\"Result\":\"Success\",\"CsBytes\":\"976\",\"ScBytes\":\"302\",\"UserAgent\":\"RestSharp 104.2.0.0\",\"Cookie\":\"--\",\"CsUsername\":\"\",\"Referer\":\"\",\"ComputerName\":\"RD501AC5BF5D04\"}", "category": "AppServiceHTTPLogs", "EventStampType": "Stamp", "EventPrimaryStampName": "waws-prod-am2-325", "EventStampName": "waws-prod-am2-325d", "Host": "RD501AC5BF5D04", "EventIpAddress": "1*.*.*.*"}
{ "time": "2022-07-20T06:09:42.2283150Z", "EventTime": "2022-07-20T06:09:42.2283150Z", "resourceId": "***", "properties": "{\"CsHost\":"***",\"CIp\":\"195.235.205.153\",\"SPort\":\"80\",\"CsUriStem\":\"\\/mensajes\",\"CsUriQuery\":\"desde=20220705T125027\",\"CsMethod\":\"GET\",\"TimeTaken\":279,\"ScStatus\":\"200\",\"Result\":\"Success\",\"CsBytes\":\"976\",\"ScBytes\":\"302\",\"UserAgent\":\"RestSharp 104.2.0.0\",\"Cookie\":\"--\",\"CsUsername\":\"\",\"Referer\":\"\",\"ComputerName\":\"RD501AC5BF5D04\"}", "category": "AppServiceHTTPLogs", "EventStampType": "Stamp", "EventPrimaryStampName": "waws-prod-am2-325", "EventStampName": "waws-prod-am2-325d", "Host": "RD501AC5BF5D04", "EventIpAddress": "*.*.*.*"}
{ "time": "2022-07-20T06:10:15.0636460Z", "EventTime": "2022-07-20T06:10:15.0636460Z", "resourceId": "***", "properties": "{\"CsHost\":"***",\"CIp\":\"195.235.205.153\",\"SPort\":\"80\",\"CsUriStem\":\"\\/mensajes\",\"CsUriQuery\":\"desde=20220705T125027\",\"CsMethod\":\"GET\",\"TimeTaken\":2629,\"ScStatus\":\"200\",\"Result\":\"Success\",\"CsBytes\":\"976\",\"ScBytes\":\"302\",\"UserAgent\":\"RestSharp 104.2.0.0\",\"Cookie\":\"--\",\"CsUsername\":\"\",\"Referer\":\"\",\"ComputerName\":\"RD501AC5BF5D04\"}", "category": "AppServiceHTTPLogs", "EventStampType": "Stamp", "EventPrimaryStampName": "waws-prod-am2-325", "EventStampName": "waws-prod-am2-325d", "Host": "RD501AC5BF5D04", "EventIpAddress": "*.*.*.*"}
The three requests are identical and therefore trigger the same process on the server side (same endpoint, called from the same origin). The result can be seen in the TimeTaken field: 1426 ms / 279 ms / 2629 ms. Any suggestion is appreciated.
According to the Microsoft documentation:
"If they remain idle for a predetermined amount of time, web apps are by default unloaded. You can activate the Always On feature in the Basic and Standard service levels to keep the app constantly loaded. This eliminates longer load times after the app is idle."
To enable it, open your web app => Configuration => General settings => set Always On to On.
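If you prefer scripting this, the same setting can be enabled with the Azure CLI; MYAPP and MYGROUP below are placeholder names:
# enable Always On for the web app (placeholder resource names)
az webapp config set --name MYAPP --resource-group MYGROUP --always-on true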
References:
Smart detection - performance anomalies - Azure Monitor | Microsoft Docs
Exploring performance issues with Azure Application Insights | by Thomas Weiss | Medium

Azure function takes a really long time to trigger

We have an Azure Functions v3 app running Node on the Consumption plan, with an input trigger connected to a Cosmos DB database. The function.json looks like this:
{
  "disabled": false,
  "bindings": [
    {
      "type": "cosmosDBTrigger",
      "name": "productDocuments",
      "collectionName": "products",
      "direction": "in",
      "connectionStringSetting": "DB_CONNECTION_STRING",
      "databaseName": "product-management",
      "createLeaseCollectionIfNotExists": true,
      "maxItemsPerInvocation": 1
    },
    {
      "name": "productDocument",
      "type": "cosmosDB",
      "databaseName": "product-management",
      "collectionName": "products",
      "createIfNotExists": true,
      "connectionStringSetting": "_DB_CONNECTION_STRING",
      "direction": "out"
    }
  ],
  "scriptFile": "dist/nameOfFunction.js"
}
But this trigger is working really, really slowly and unreliably. If we add an item to the DB, it sometimes triggers straight away, sometimes it seems to take hours, and sometimes not at all. I am manually monitoring the Cosmos DB, so I can see that items are added.
I am looking at this page, and most of the time nothing happens. I don't know how else to debug this.
Should it really take hours for an invocation to show up here? Or is it the trigger that's unreliable?
General guidance is in this doc: https://learn.microsoft.com/azure/cosmos-db/troubleshoot-changefeed-functions#my-changes-take-too-long-to-be-received
What happens on the Consumption Plan is that, after a period of inactivity, instances are deprovisioned. When a new instance is provisioned, it hits a cold start.
The key part here is that, when your instances are deprovisioned, they are not checking the Change Feed for events, so how does Functions know when to "wake them up"?
A periodic check is done by an external component that looks for new changes; if there are any, it provisions new instances of your Function to start consuming them.
In your case, this external component could be having an issue or delays in these checks.
If you have no Function logs for an hour even though you are making changes to the monitored collection, I would try to contact Azure Support to understand why your Function is not "waking up".
One of the known issues I've heard about was related to where the Cosmos DB connection strings were stored. Apparently this component at some point (maybe it's already fixed) had a problem where it could not access the connection string if it was saved in the "Connection Strings" section of the Function configuration, and was looking for it only in "App Settings". In that case it could not wake up the Function, and the Function only woke up if someone opened it in the Azure Portal. My recommendation would be to check where you are storing your connection string, see if you can move it to "App Settings", and see how it behaves.
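If you do move it, a quick way is the Azure CLI; the app name, resource group, and the connection string value below are placeholders matching the setting name from the function.json above:
# store the Cosmos DB connection string as an App Setting (placeholder values)
az functionapp config appsettings set --name MYFUNCAPP --resource-group MYGROUP \
  --settings "DB_CONNECTION_STRING=<your Cosmos DB connection string>"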
Our problem with this was that we had two separate functions that both had a CosmosDBTrigger on the same collection but used the same lease, and apparently you can't do that. It was solved by setting up two separate leases (we used leaseCollectionPrefix in the function.json), as sketched below.
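As a minimal sketch, the trigger binding of the second function would differ from the first only in its lease prefix; the prefix values here are illustrative:
{
  "type": "cosmosDBTrigger",
  "name": "productDocuments",
  "databaseName": "product-management",
  "collectionName": "products",
  "connectionStringSetting": "DB_CONNECTION_STRING",
  "createLeaseCollectionIfNotExists": true,
  "leaseCollectionPrefix": "functionB",
  "direction": "in"
}
The first function keeps the same binding with its own prefix (e.g. "functionA"), so both can share the lease collection without stealing each other's leases.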

Refresh IP address for Azure VM via REST API

I am trying to use the REST API to change the IP of my Ubuntu virtual machine on Azure.
In the web interface, stopping and starting the VM usually causes the public IP to change. However, just stopping and starting the VM with curl requests to the API does not trigger an IP change.
I can request the current status of the IP configuration using a GET request (see the docs here), but I cannot find any function to refresh it. I also tried setting the IP to static and back to dynamic before turning the VM back on, that also did not work.
I found this similar question here, but when I tried that approach, I got the following error message:
{ "error": {
"code": "IpConfigDeleteNotSupported",
"message": "IP Configuration ipconfig1 cannot be deleted. Deletion and renaming of primary IP Configuration is not supported",
"details": [] }
I have also created a secondary IP configuration. The first one is called ipconfig1, the second I named "alternative". This seems to be a second network interface. I have associated a second IP address with that second network interface. But I am still getting the same error.
My final request looks like this:
curl -X PUT -H "Authorization: Bearer MYTOKEN" -H "Content-Type: application/json" -d '{ "name": "NETWORKINTERFACE542", "id": "GROUP", "location": "westeurope", "properties": { "provisioningState": "Succeeded", "ipConfigurations": [ { "name": "alternative", "properties": { "privateIPAllocationMethod": "Dynamic", "subnet": { "id": "/subscriptions/xx-xx-xx-xx/resourceGroups/GROUP/providers/Microsoft.Network/virtualNetworks/GROUP-vnet/subnets/default" }, "primary": true, "privateIPAddressVersion": "IPv4" } } ], "dnsSettings": { "dnsServers": [], "appliedDnsServers": [] }, "enableAcceleratedNetworking": true, "enableIPForwarding": false }, "type": "Microsoft.Network/networkInterfaces" }' https://management.azure.com/subscriptions/xx-xx-xx-xx/resourceGroups/GROUP/providers/Microsoft.Network/networkInterfaces/NETWORKINTERFACE542?api-version=2020-07-01
(Where the CAPS terms are stand-ins for my actual variable names)
I am still getting the same error, even though I am not even referencing ipconfig1 in my request.
Is there any way to achieve an IP reset?
As you mentioned: in the web interface, stopping and starting the VM usually causes the public IP to change.
Generally, the stop operation in the web UI actually performs a deallocate operation, so you need to use the REST API Deallocate and Start operations to trigger the public IP address change.
Virtual Machines - Deallocate
POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{vmName}/deallocate?api-version=2020-12-01
Virtual Machines - Start
POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{vmName}/start?api-version=2020-12-01
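With the same curl style as your request above (MYTOKEN, xx-xx-xx-xx, GROUP and MYVM are placeholders), the two calls would look roughly like this:
# deallocate the VM, releasing the dynamic public IP
curl -X POST -H "Authorization: Bearer MYTOKEN" "https://management.azure.com/subscriptions/xx-xx-xx-xx/resourceGroups/GROUP/providers/Microsoft.Compute/virtualMachines/MYVM/deallocate?api-version=2020-12-01"
# start it again; a new dynamic public IP is assigned
curl -X POST -H "Authorization: Bearer MYTOKEN" "https://management.azure.com/subscriptions/xx-xx-xx-xx/resourceGroups/GROUP/providers/Microsoft.Compute/virtualMachines/MYVM/start?api-version=2020-12-01"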

Does scale-out in an Azure web app service with health check enabled count CPU on unhealthy instances?

I have an Azure web app service with the Health check feature enabled and the following autoscale configuration:
{
  "name": "Auto created scale condition",
  "capacity": {
    "minimum": 2,
    "maximum": 10,
    "default": 2
  },
  "rules": [
    {
      "metricTrigger": {
        "metricName": "CpuPercentage",
        "metricNamespace": "",
        "metricResourceUri": "[resourceId('Microsoft.Web/serverfarms', parameters('ServicePlanName'))]",
        "timeGrain": "PT1M",
        "statistic": "Average",
        "timeWindow": "PT10M",
        "timeAggregation": "Average",
        "operator": "GreaterThan",
        "threshold": 70
      },
      "scaleAction": {
        "direction": "Increase",
        "type": "ChangeCount",
        "value": "1",
        "cooldown": "PT5M"
      }
    }
  ]
}
The question: does Azure count unhealthy instances' CPU load while scaling out and in? I don't see this in the official documentation, and I did some tests making one instance unhealthy but got unclear results.
So, let's imagine an unhealthy instance has 0% CPU usage and the healthy one has 90% CPU. On average we have (0% + 90%) / 2 = 45%. Will the scale-out rule fire in this case?
Thanks
The scale-out logic does not look at whether your instances are healthy. Likewise, if an instance becomes unhealthy because of something like a deadlock in your code and reaches 100% CPU usage, your scale-out logic would trigger adding an additional instance, even though the increased CPU usage isn't being caused by additional users.
To mitigate unhealthy instances quickly and automatically until you are able to resolve the issue within your code, we typically recommend customers turn on auto-heal and set it to restart the site process based on the parameters that you feel are occurring such as http errors. For more information on auto-heal, please see here.
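For reference, auto-heal can also be configured in the site's ARM siteConfig; a minimal sketch, where the trigger threshold and time interval are made-up example values:
{
  "autoHealEnabled": true,
  "autoHealRules": {
    "triggers": {
      "statusCodes": [
        { "status": 500, "count": 10, "timeInterval": "00:01:00" }
      ]
    },
    "actions": {
      "actionType": "Recycle"
    }
  }
}
This would recycle the site process once it has returned ten HTTP 500 responses within one minute.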

Posting multiple data values to the ThingsBoard IoT Gateway

I just started using ThingsBoard and I came across this guide: https://thingsboard.io/docs/iot-gateway/getting-started/. I have implemented it, but the problems that I'm facing are:
1. I can transmit only one key-value pair. How can I transmit multiple key-value sensor data?
2. Is there any other way to access the Cassandra database, so that I can retrieve all my data into ThingsBoard?
Please help. Thank you.
You are asking two very different things.
1) You can transmit more key-value pairs at once by correctly mapping the gateway's incoming messages. I suppose you are working with the MQTT protocol. The default mapping for this protocol is specified in /etc/tb-gateway/conf/mqtt-config.json. This file specifies how to translate the incoming MQTT messages from the broker into the ThingsBoard key-value format before sending them to the ThingsBoard server instance.
To map more than one reading from a sensor, you can do something like this:
{
  "brokers": [
    {
      "host": "localhost",
      "port": 1883,
      "ssl": false,
      "retryInterval": 5000,
      "credentials": {
        "type": "anonymous"
      },
      "mapping": [
        {
          "topicFilter": "WeatherSensors",
          "converter": {
            "type": "json",
            "filterExpression": "",
            "deviceNameJsonExpression": "${$.WeatherStationName}",
            "timeout": 120000,
            "timeseries": [
              {
                "type": "double",
                "key": "temperature",
                "value": "${$.temperature}"
              },
              {
                "type": "double",
                "key": "humidity",
                "value": "${$.humidity}"
              }
            ]
          }
        }
      ]
    }
  ]
}
This way, if you send a message like {"WeatherStationName":"test", "temperature":25, "humidity":40} to the topic WeatherSensors, you will see the two key-value pairs on the ThingsBoard server, in a device named "test".
2) The best way to access data stored in the ThingsBoard server is via the REST API, so that you can query any ThingsBoard instance with the same piece of code regardless of the technology used for the database (Cassandra, PostgreSQL, etc.). You can find a Python example in this repo.
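As an illustration, the timeseries endpoint can also be called directly with curl; THINGSBOARD_HOST, TOKEN (a JWT obtained from the login endpoint) and DEVICE_ID are placeholders:
# fetch the latest timeseries values for a device (placeholder host, token, id)
curl -H "X-Authorization: Bearer TOKEN" \
  "https://THINGSBOARD_HOST/api/plugins/telemetry/DEVICE/DEVICE_ID/values/timeseries?keys=temperature,humidity"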
The alternative is to use a query language specific to the database, such as SQL for PostgreSQL or CQL for Cassandra.
If a single device reports several values, for example humidity, temperature, and gas, you use one access token/a single MQTT session and send the data in a single JSON message like this:
{"humidity":42.2, "temperature":23.3, "gas":45}
If you have multiple sensors attached to a single device, send them like this:
{"sensorA.humidity":42.2, "sensorB.temperature":23.3, "sensorC.gas":45}
Available topics are static and listed here:
https://thingsboard.io/docs/reference/mqtt-api/#telemetry-upload-api
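A quick way to try this from the command line is mosquitto_pub against the telemetry topic from that page; THINGSBOARD_HOST and ACCESS_TOKEN are placeholders (the device access token is passed as the MQTT username):
# publish one telemetry message to ThingsBoard (placeholder host and token)
mosquitto_pub -h THINGSBOARD_HOST -p 1883 -u "ACCESS_TOKEN" \
  -t "v1/devices/me/telemetry" -m '{"humidity":42.2, "temperature":23.3, "gas":45}'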
