I have enabled HTTP request logging for my web app in Azure Application Insights. I don't understand why there is such a large response-time disparity between identical requests received just a few seconds or a few minutes apart. A log example with three records:
{ "time": "2022-07-20T06:08:41.7548330Z", "EventTime": "2022-07-20T06:08:41.7548330Z", "resourceId": "***", "properties": "{\"CsHost\":"***",\"CIp\":\"195.235.205.153\",\"SPort\":\"80\",\"CsUriStem\":\"\\/mensajes\",\"CsUriQuery\":\"desde=20220705T125027\",\"CsMethod\":\"GET\",\"TimeTaken\":1426,\"ScStatus\":\"200\",\"Result\":\"Success\",\"CsBytes\":\"976\",\"ScBytes\":\"302\",\"UserAgent\":\"RestSharp 104.2.0.0\",\"Cookie\":\"--\",\"CsUsername\":\"\",\"Referer\":\"\",\"ComputerName\":\"RD501AC5BF5D04\"}", "category": "AppServiceHTTPLogs", "EventStampType": "Stamp", "EventPrimaryStampName": "waws-prod-am2-325", "EventStampName": "waws-prod-am2-325d", "Host": "RD501AC5BF5D04", "EventIpAddress": "1*.*.*.*"}
{ "time": "2022-07-20T06:09:42.2283150Z", "EventTime": "2022-07-20T06:09:42.2283150Z", "resourceId": "***", "properties": "{\"CsHost\":"***",\"CIp\":\"195.235.205.153\",\"SPort\":\"80\",\"CsUriStem\":\"\\/mensajes\",\"CsUriQuery\":\"desde=20220705T125027\",\"CsMethod\":\"GET\",\"TimeTaken\":279,\"ScStatus\":\"200\",\"Result\":\"Success\",\"CsBytes\":\"976\",\"ScBytes\":\"302\",\"UserAgent\":\"RestSharp 104.2.0.0\",\"Cookie\":\"--\",\"CsUsername\":\"\",\"Referer\":\"\",\"ComputerName\":\"RD501AC5BF5D04\"}", "category": "AppServiceHTTPLogs", "EventStampType": "Stamp", "EventPrimaryStampName": "waws-prod-am2-325", "EventStampName": "waws-prod-am2-325d", "Host": "RD501AC5BF5D04", "EventIpAddress": "*.*.*.*"}
{ "time": "2022-07-20T06:10:15.0636460Z", "EventTime": "2022-07-20T06:10:15.0636460Z", "resourceId": "***", "properties": "{\"CsHost\":"***",\"CIp\":\"195.235.205.153\",\"SPort\":\"80\",\"CsUriStem\":\"\\/mensajes\",\"CsUriQuery\":\"desde=20220705T125027\",\"CsMethod\":\"GET\",\"TimeTaken\":2629,\"ScStatus\":\"200\",\"Result\":\"Success\",\"CsBytes\":\"976\",\"ScBytes\":\"302\",\"UserAgent\":\"RestSharp 104.2.0.0\",\"Cookie\":\"--\",\"CsUsername\":\"\",\"Referer\":\"\",\"ComputerName\":\"RD501AC5BF5D04\"}", "category": "AppServiceHTTPLogs", "EventStampType": "Stamp", "EventPrimaryStampName": "waws-prod-am2-325", "EventStampName": "waws-prod-am2-325d", "Host": "RD501AC5BF5D04", "EventIpAddress": "*.*.*.*"}
The three requests are identical and therefore trigger the same process on the server side (same endpoint, same origin). The result can be seen in the 'TimeTaken' field: 1426 ms / 279 ms / 2629 ms. Any suggestion is appreciated.
According to the Microsoft documentation:
By default, web apps are unloaded if they remain idle for a predetermined amount of time. You can activate the Always On feature in the Basic and Standard service tiers to keep the app constantly loaded. This eliminates longer load times after the app has been idle.
To enable it: open your web app => Configuration => General settings => switch Always On to On.
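If you manage the app through ARM/Bicep templates rather than the portal, the same switch is the alwaysOn property under siteConfig. A minimal sketch of the relevant fragment (property names per the Microsoft.Web/sites schema; resource name and other properties omitted):
{
  "properties": {
    "siteConfig": {
      "alwaysOn": true
    }
  }
}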
References:
Smart detection - performance anomalies - Azure Monitor | Microsoft Docs
Exploring performance issues with Azure Application Insights | by Thomas Weiss | Medium
I have tried so many things and nothing works: when I run my Azure Function I get all these logs, and it is far too much noise.
What is the correct way to get rid of them once and for all?
I have this in my host.json:
"logging": {
"logLevel": {
"default": "Information",
"Microsoft": "Warning",
"System": "Warning",
"Host": "Error",
"Function": "Error",
"Host.Aggregator": "Information"
},
"Serilog": {
"MinimumLevel": "Information",
"WriteTo": [
{
"Name": "Console",
"Args": {
"outputTemplate": "{Timestamp:HH:mm:ss} {Level} | {RequestId} - {Message}{NewLine}{Exception}"
}
}
]
}
},
and this is in my local.settings.json:
"logging": {
"logLevel": {
"Microsoft.Azure.WebJobs.Script.WebHost.Middleware.SystemTraceMiddleware": "Error",
"Worker.rpcWorkerProcess": "Error"
}
},
You have to modify host.json to configure the minimum log level, both locally and in Azure.
I can see you have set the log-level attributes to Information and Warning. Information logs the general flow of the application from start to end, such as host-level and application-level flow logs.
Host.Aggregator emits more than trace-level metrics when it is set to Information.
Remove or disable the modules whose logs you don't need, or set their log level to None if they are not required in your current situation, and raise the minimum level for the rest; this also reduces your log consumption when deployed to Azure. A trimmed example is shown below.
I found a similar SO issue (70690850) showing that the minimum log levels should be modified in host.json, which reduces the number of logs both locally and in Application Insights, and this MS doc has more information on log-level configuration.
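For example, a logging section that keeps only warnings and errors might look like this (a sketch; the category names follow the Azure Functions logging documentation, so adjust them to whichever categories are noisy in your output):
{
  "logging": {
    "logLevel": {
      "default": "Warning",
      "Host.Results": "Error",
      "Host.Aggregator": "Error",
      "Function": "Warning"
    }
  }
}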
I have an Azure Web App service with the Health check feature enabled and the following autoscale configuration:
{
"name": "Auto created scale condition",
"capacity": {
"minimum": 2,
"maximum": 10,
"default": 2
},
"rules": [
{
"metricTrigger": {
"metricName": "CpuPercentage",
"metricNamespace": "",
"metricResourceUri": "[resourceId('Microsoft.Web/serverfarms', parameters('ServicePlanName'))]",
"timeGrain": "PT1M",
"statistic": "Average",
"timeWindow": "PT10M",
"timeAggregation": "Average",
"operator": "GreaterThan",
"threshold": 70
},
"scaleAction": {
"direction": "Increase",
"type": "ChangeCount",
"value": "1",
"cooldown": "PT5M"
}
}
]
}
The question: does Azure count unhealthy instances towards CPU load when scaling out and in? I don't see this in the official documentation, and the tests I did by making one instance unhealthy gave unclear results.
For example, imagine an unhealthy instance has 0% CPU usage and the healthy one has 90%, so on average we have (0% + 90%) / 2 = 45%. Will the scale-out rule fire in this case?
Thanks
The scale-out logic does not look at whether your instances are healthy. For example, if an instance becomes unhealthy because of something like a deadlock in your code and reaches 100% CPU usage, your scale-out logic would trigger and add an additional instance, even though the increased CPU usage isn't caused by additional users.
To mitigate unhealthy instances quickly and automatically until you are able to resolve the issue in your code, we typically recommend customers turn on auto-heal and set it to restart the site process based on the conditions you expect to occur, such as HTTP errors; a sketch of such a rule is shown below. For more information on auto-heal, please see here.
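For illustration, a rough sketch of an auto-heal rule in the site configuration (property names follow the Microsoft.Web/sites siteConfig schema; the thresholds and time intervals are placeholders, so verify them against the current documentation):
{
  "properties": {
    "siteConfig": {
      "autoHealEnabled": true,
      "autoHealRules": {
        "triggers": {
          "statusCodes": [
            { "status": 500, "count": 20, "timeInterval": "00:05:00" }
          ]
        },
        "actions": {
          "actionType": "Recycle",
          "minProcessExecutionTime": "00:01:00"
        }
      }
    }
  }
}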
I'm trying to consume data from Hono, following the Starting a consumer guide in the Hono documentation.
I'm currently trying to subscribe to all tenants by adding --tenant.id=* at the end of the mvn command, which results in the following command:
mvn spring-boot:run -Drun.arguments=--hono.client.host=localhost,--hono.client.username=consumer#HONO,--hono.client.password=verysecret,--destination.TopicTemplate=gw/\!{tenant}/\!{device}/alp,--destination.Host=localhost,--destination.Port=11883,--tenant.id=*
I'm not getting any messages when I subscribe like this. When I subscribe using the example command (only for DEFAULT_TENANT), I'm consuming the messages.
The current user permission looks like this:
"consumer#HONO": {
"mechanism": "PLAIN",
"password": "verysecret",
"authorities": [ "application" ]
}
The current application role looks like this:
"application": [
{
"resource": "telemetry/*",
"activities": [ "READ" ]
},
{
"resource": "event/*",
"activities": [ "READ" ]
},
{
"resource": "control/*",
"activities": [ "READ", "WRITE" ]
}
]
Both of them are still the original ones from the Hono GitHub repository.
EDIT: The consumer also subscribes to event/<tenant>, which in my case is event/*. Events published on the topics event/DEFAULT_TENANT and event/MY_TENANT are consumed. However, the consumer for telemetry/* does not seem to be registered.
I've finally found out what was going on.
It seems the messages are blocked in the Qpid Dispatch Router because of the following error: "Parse tree match not found".
This can be resolved by changing the Qpid configuration, in which you should be able to find the following records:
["linkRoute", {
"prefix": "event/",
"direction": "in",
"connection": "broker"
}],
["linkRoute", {
"prefix": "event/",
"direction": "out",
"connection": "broker"
}],
["address", {
"prefix": "telemetry/",
"distribution": "balanced"
}],
This creates link routes (in and out) for the event address but not for the telemetry address. Adding the corresponding records for the telemetry address resolves the problem:
["linkRoute", {
"prefix": "event/",
"direction": "in",
"connection": "broker"
}],
["linkRoute", {
"prefix": "event/",
"direction": "out",
"connection": "broker"
}],
["linkRoute", {
"prefix": "telemetry/",
"direction": "in",
"connection": "broker"
}],
["linkRoute", {
"prefix": "telemetry/",
"direction": "out",
"connection": "broker"
}],
["address", {
"prefix": "telemetry/",
"distribution": "balanced"
}],
Hono does not (as of now) support consuming messages of all tenants. The consumer is always scoped to a single tenant only. This is also reflected in the (northbound) Telemetry and Event API specifications.
Using wildcard characters in order to receive data for multiple/all tenants is not supported. The change you have made to the Dispatch Router configuration may have led you to believe that it does work. However, defining the telemetry address to use link routing instead of the default message routing has some consequences you should be aware of:
All telemetry messages will be routed to the message broker (Artemis) instead of being routed directly to consumers attached to the Dispatch Router. This means that all messages will be written to a queue/topic in Artemis. Depending on the Artemis configuration, this might also mean that (telemetry) messages get persisted, which will have quite a negative impact on throughput.
Your clients/consumers will now explicitly depend on the (Artemis) broker's support for wildcards in AMQP 1.0 link source addresses in order to receive messages from multiple addresses. While this might be what you want to achieve in the first place, beware that it ties your application to the specific implementation of the AMQP Messaging Network (in this case Artemis), which is not part of Hono.
I have just started using ThingsBoard and came across this guide: https://thingsboard.io/docs/iot-gateway/getting-started/. I have implemented it, but the problems I'm facing are:
1. I can transmit only one key-value pair. How can I transmit multiple key-value sensor readings?
2. Is there any other way to access the Cassandra database so that I can retrieve all my data from ThingsBoard?
Please help. Thank you.
You are asking two very different things.
1) You can transmit more key-value pairs at once by correctly mapping the gateway's incoming messages. I suppose you are working with the MQTT protocol. The default mapping for this protocol is specified in /etc/tb-gateway/conf/mqtt-config.json. This file specifies how to translate the incoming MQTT messages from the broker into the ThingsBoard key-value format before sending them to the ThingsBoard server instance.
To map more than one reading from a sensor, you can do something like this:
{
"brokers": [
{
"host": "localhost",
"port": 1883,
"ssl": false,
"retryInterval": 5000,
"credentials": {
"type": "anonymous"
},
"mapping": [
{
"topicFilter": "WeatherSensors",
"converter": {
"type": "json",
"filterExpression": "",
"deviceNameJsonExpression": "${$.WeatherStationName}",
"timeout": 120000,
"timeseries": [
{
"type": "double",
"key": "temperature",
"value": "${$.temperature}"
},
{
"type": "double",
"key": "humidity",
"value": "${$.humidity}"
}
]
}
}
]
}
]
}
This way, if you send a message like {"WeatherStationName":"test", "temperature":25, "humidity":40} to the topic WeatherSensors, you will see the two key-value pairs in the ThingsBoard server, in a device named "test".
2) The best way to access data stored in the internal ThingsBoard server is via the REST API, so that you can query any ThingsBoard instance with the same piece of code regardless of the technology used for the database (Cassandra, PostgreSQL, etc.). You can find a Python example in this repo.
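For illustration, a minimal sketch of that approach using the requests library (the host, credentials and device ID are placeholders, and the endpoint paths should be checked against the REST API docs of your ThingsBoard version):
import requests

BASE = "https://demo.thingsboard.io"   # placeholder ThingsBoard host
DEVICE_ID = "YOUR-DEVICE-UUID"         # placeholder device id

# Log in to obtain a JWT token for the REST API.
resp = requests.post(BASE + "/api/auth/login",
                     json={"username": "tenant@thingsboard.org", "password": "tenant"})
resp.raise_for_status()
headers = {"X-Authorization": "Bearer " + resp.json()["token"]}

# List the available time-series keys for the device ...
keys = requests.get(BASE + "/api/plugins/telemetry/DEVICE/" + DEVICE_ID + "/keys/timeseries",
                    headers=headers).json()

# ... and fetch the latest values for those keys.
values = requests.get(BASE + "/api/plugins/telemetry/DEVICE/" + DEVICE_ID + "/values/timeseries",
                      params={"keys": ",".join(keys)},
                      headers=headers).json()
print(values)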
The alternative is to use a specific query language for the database, such as SQL for PostgreSQL or CQL for Cassandra.
Suppose your device reports several telemetry keys, for example humidity, temperature and gas.
In this case you use one access token (a single MQTT session) and send the data in a single JSON payload like this:
{"humidity":42.2, "temperature":23.3, "gas":45}
If you have multiple sensors attached to a single device, send them like this:
{"sensorA.humidity":42.2, "sensorB.temperature":23.3, "sensorC.gas":45}
Available topics are static and listed here:
https://thingsboard.io/docs/reference/mqtt-api/#telemetry-upload-api
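As an illustration of the single-session approach above, a minimal sketch using paho-mqtt (the host and access token are placeholders; v1/devices/me/telemetry is the telemetry upload topic from the docs linked above):
import json
import paho.mqtt.publish as publish

ACCESS_TOKEN = "YOUR_DEVICE_ACCESS_TOKEN"  # placeholder device access token
HOST = "demo.thingsboard.io"               # placeholder ThingsBoard host

# All key-value pairs travel in one payload over a single MQTT session.
payload = {"humidity": 42.2, "temperature": 23.3, "gas": 45}

publish.single(
    topic="v1/devices/me/telemetry",   # telemetry upload topic from the MQTT API docs
    payload=json.dumps(payload),
    hostname=HOST,
    port=1883,
    auth={"username": ACCESS_TOKEN},   # ThingsBoard uses the access token as the MQTT username
)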
When I add a replication to CouchDB, it doesn't start; i.e. I get the following doc after saving:
{
"_id": "xxx",
"_rev": "yyy",
"target": "https://user:pswd.domain/db",
"source": "db",
"create_target": true,
"continuous": true,
"user_ctx": {
"name": "admin",
"roles": [
"_admin"
]
},
"owner": "admin"
}
Usually after creating a replication, the replication is triggered and the doc updated to include:
"_replication_state": "triggered" or "error",
"_replication_state_time": "some time",
"_replication_id": "some ID"
I am using CouchDB 1.6.0 on Ubuntu 16.04. What could cause this to happen? Replication was working fine until about an hour ago, when 80 of the 140 or so replications failed at once.
There are 60 replications that are seen as 'triggered' in CouchDB, but the _active_tasks endpoint only shows 46.
As it turned out, our server was experiencing high traffic from very dodgy origins. This was causing replications to time out and preventing them from restarting. The Nginx access logs showed repeated attempts at fishing for insecure PHP settings and open MySQL databases.
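If you need to cross-check which replications CouchDB thinks are triggered against what is actually running (as in the 60-vs-46 discrepancy above), here is a rough sketch of the comparison using the requests library (the host, credentials and the use of the _replicator database are assumptions to adapt to your setup):
import requests

COUCH = "http://admin:password@localhost:5984"  # placeholder host and credentials

# Replication documents that are marked as triggered in the _replicator database.
docs = requests.get(COUCH + "/_replicator/_all_docs", params={"include_docs": "true"}).json()
triggered = [row["doc"]["_id"] for row in docs["rows"]
             if row["doc"].get("_replication_state") == "triggered"]

# Replication jobs that are actually running right now.
tasks = requests.get(COUCH + "/_active_tasks").json()
running = [t for t in tasks if t.get("type") == "replication"]

print(len(triggered), "triggered docs vs", len(running), "active replication tasks")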