I have set up an external metrics server in AKS (Azure Kubernetes Service). I can see the metric when querying the external metrics API server:
kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1/namespaces/default/queuemessages" | jq .
{
  "kind": "ExternalMetricValueList",
  "apiVersion": "external.metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/external.metrics.k8s.io/v1beta1/namespaces/default/queuemessages"
  },
  "items": [
    {
      "metricName": "queuemessages",
      "metricLabels": null,
      "timestamp": "2020-04-09T14:04:08Z",
      "value": "0"
    }
  ]
}
How can I delete this metric from the external metrics server?
It looks like you are interested in the Service Bus queue metrics.
I found this still-open issue describing a long delay before the queue messages metric gets populated:
https://github.com/Azure/azure-k8s-metrics-adapter/issues/63
The way custom metrics adapters work is that they query metrics from external services and make them available through a custom API on the Kubernetes API server, registered via an APIService resource.
https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-metrics-apis
https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-metrics-not-related-to-kubernetes-objects
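To make that APIService registration concrete, here is a minimal sketch; the service name and namespace below are placeholders for wherever your adapter happens to be deployed, not your actual values:
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta1.external.metrics.k8s.io
spec:
  # Points the Kubernetes API server at the adapter's Service; name/namespace are illustrative
  service:
    name: external-metrics-apiserver
    namespace: custom-metrics
  group: external.metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100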
The adapter implements a query against the external service (Service Bus in your case). Based on the spec, the get-metric call should never fail, so receiving a null could mean either that you don't have a valid connection or that no metrics are available yet.
https://github.com/kubernetes-sigs/custom-metrics-apiserver/blob/master/docs/getting-started.md#writing-a-provider
First, there's a method for listing all metrics available at any point in time. It's used to populate the discovery information in the API, so that clients can know what metrics are available. It's not allowed to fail (it doesn't return any error), and it should return quickly, so it's suggested that you update it asynchronously in real-world code.
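You can inspect that discovery list yourself with the same kind of raw call you already used, just against the API group root:
kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1" | jq .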
Could you explain why you are looking to delete the metric? In the end, I don't think it is possible, since the adapter is only there to fetch and report.
While answering Retrieve quota for Microsoft Azure App Service Storage, I stumbled upon the FileSystemUsage metric for the Microsoft.Web/sites resource type. As per the documentation, this metric should return the "Percentage of filesystem quota consumed by the app".
However, when I execute the Metrics - List REST API operation (and also check the Metrics blade in the Azure Portal) for my web app, the value is always returned as zero. I checked a number of web apps across my Azure subscriptions and the result was zero for all of them. I am curious to know the reason for that.
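For reference, the call I'm making looks roughly like this (subscription, resource group and site name are placeholders):
GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}/providers/Microsoft.Web/sites/{siteName}/providers/microsoft.insights/metrics?metricnames=FileSystemUsage&api-version=2018-01-01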
In contrast, if I execute the App Service Plans - List Usages REST API operation, it returns the correct value. For example, if my App Service Plan is S2, I get the following response back:
{
  "unit": "Bytes",
  "nextResetTime": "9999-12-31T23:59:59.9999999Z",
  "currentValue": 815899648,
  "limit": 536870912000, // 500 GB (50 GB/instance x max 10 instances)
  "name": {
    "value": "FileSystemStorage",
    "localizedValue": "File System Storage"
  }
},
Did I misunderstand FileSystemUsage for Web Apps? I would appreciate it if someone could explain the purpose of this metric. If it is indeed what is documented, why is the API returning a zero value?
This is the expected behavior; please check this doc, Understand metrics:
Note: File System Usage is a new metric being rolled out globally, no data is expected unless your app is hosted in an App Service Environment.
So currently the File System Usage metric only works for apps hosted in an App Service Environment (ASE).
Is there a way to learn how many RUs were consumed when executing a query using the Cassandra API on Cosmos DB?
(My understanding is that the regular API returns this in an additional HTTP header, but that obviously does not work with CQL as the wire protocol.)
The only way I know of to get the request charge for specific CQL queries in Cosmos DB is to turn on diagnostic logging. Each query you run will then result in a diagnostic log entry like this:
{ "time": "2020-03-30T23:55:10.9579593Z", "resourceId": "/SUBSCRIPTIONS/<your_subscription_ID>/RESOURCEGROUPS/<your_resource_group>/PROVIDERS/MICROSOFT.DOCUMENTDB/DATABASEACCOUNTS/<your_database_account>", "category": "CassandraRequests", "operationName": "QuerySelect", "properties": {"activityId": "6b33771c-baec-408a-b305-3127c17465b6","opCode": "<empty>","errorCode": "-1","duration": "0.311900","requestCharge": "1.589237","databaseName": "system","collectionName": "local","retryCount": "<empty>","authorizationTokenType": "PrimaryMasterKey","address": "104.42.195.92","piiCommandText": "{"request":"SELECT key from system.local"}","userAgent": """"}}
For details on how to configure diagnostic logging in Cosmos DB, see Monitor Azure Cosmos DB data by using diagnostic settings in Azure.
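If you route these diagnostic logs to a Log Analytics workspace instead of (or in addition to) a storage account, a query along these lines should surface the charge per query. This is only a sketch; the _s-suffixed column names assume the usual AzureDiagnostics flattening, so they may differ slightly in your workspace:
AzureDiagnostics
| where Category == "CassandraRequests"
| project TimeGenerated, operationName_s, requestCharge_s, duration_s, piiCommandText_s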
Hope this is helpful.
I am attempting to query Event Hub firewall IP rules using Azure Policy's Resource Graph. I have currently provisioned an Event Hub with the following firewall IP rule:
{
  "type": "Microsoft.EventHub/namespaces/ipfilterrules",
  "apiVersion": "2018-01-01-preview",
  "name": "[concat(parameters('namespaces_myeventhub_name'), '/e51110a0-c074-43b3-85b7-b43e2eab4d9b')]",
  "location": "West US 2",
  "dependsOn": [
    "[resourceId('Microsoft.EventHub/namespaces', parameters('namespaces_myeventhub_name'))]"
  ],
  "properties": {
    "ipMask": "47.xxx.xxx.xxx",
    "action": "Accept",
    "filterName": "e51110a0-c074-43b3-85b7-b43e2eax4d9b"
  }
}
A query for
"where type =~ 'Microsoft.EventHub/namespaces'"
will reveal my Event Hub namespace without any information about firewall IP rules. Furthermore, a query for
where type =~ 'Microsoft.EventHub/namespaces/ipfilterrules'
returns nothing. I would like to be able to query this information using Resource Graph and eventually write an Azure Policy against these properties. I searched for possible aliases containing this information using the following:
"where type =~ 'Microsoft.EventHub/namespaces' | limit 1 | project aliases"
but the list it returns includes no information about firewall IP rules for Event Hubs. This seems like basic information that should be available in Resource Graph. What am I missing?
After testing, unfortunately only the Event Hub namespace level can be queried via the Azure Resource Graph APIs; you cannot query ipfilterrules via Azure Resource Graph directly.
Please refer to the solution below as a workaround:
1: Query all Event Hub namespaces under the subscription.
For example:
https://management.azure.com/subscriptions//providers/Microsoft.EventHub/namespaces?api-version=2018-01-01-preview
2: Query all ipfilterrules under each Event Hub namespace and filter the ipfilterrules one by one in your program.
For example:
https://management.azure.com/subscriptions//resourceGroups/ericm/providers/Microsoft.EventHub/namespaces//ipfilterrules?api-version=2018-01-01-preview
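If you are scripting step 2, something like this az rest call should do it (the subscription ID, resource group and namespace are placeholders):
az rest --method get --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.EventHub/namespaces/<namespace>/ipfilterrules?api-version=2018-01-01-preview"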
Reference:
https://github.com/Azure/azure-rest-api-specs/blob/master/specification/eventhub/resource-manager/Microsoft.EventHub/preview/2018-01-01-preview/examples/NameSpaces/IPFilterRule/EHNameSpaceIPFilterRuleListAll.json
Hopefully this helps with your concern.
How can you fetch data from an HTTP REST endpoint as an input for a Data Factory?
My use case is to fetch new data hourly via a REST HTTP GET and update/insert it into a DocumentDB database in Azure.
Can you just create a linked service like this and point it at the REST endpoint?
{
  "name": "OnPremisesFileServerLinkedService",
  "properties": {
    "type": "OnPremisesFileServer",
    "description": "",
    "typeProperties": {
      "host": "<host name which can be either UNC name e.g. \\\\server or localhost for the same machine hosting the gateway>",
      "gatewayName": "<name of the gateway that will be used to connect to the shared folder or localhost>",
      "userId": "<domain user name e.g. domain\\user>",
      "password": "<domain password>"
    }
  }
}
And what kind of component do I add to create the data transformation job? I see that there are a bunch of options like HDInsight, Data Lake, and Batch, but I am not sure what the differences are or which service would be appropriate to simply upsert the new data set into Azure DocumentDB.
I think the simplest way would be to use Azure Logic Apps.
You can make a call to any RESTful service using the HTTP connector in the Azure Logic Apps connectors.
So you can do GET and POST/PUT etc. in a flow based on a schedule or on some other trigger:
Here is the documentation for it:
https://azure.microsoft.com/en-us/documentation/articles/app-service-logic-connector-http/
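As a rough sketch of such a flow (the endpoint URI is a placeholder; a Recurrence trigger fires an HTTP GET every hour), the Logic App workflow definition would look something like this:
{
  "definition": {
    "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
    "triggers": {
      "Hourly": {
        "type": "Recurrence",
        "recurrence": { "frequency": "Hour", "interval": 1 }
      }
    },
    "actions": {
      "Fetch_data": {
        "type": "Http",
        "inputs": { "method": "GET", "uri": "https://<your-rest-endpoint>/api/data" }
      }
    },
    "outputs": {}
  }
}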
To do this with Azure Data Factory you will need to utilize Custom Activities.
Similar question here:
Using Azure Data Factory to get data from a REST API
If Azure Data Factory is not an absolute requirement, Aram's suggestion of using Logic Apps might serve you better.
Hope that helps.
This can be achieved with Data Factory. This is especially good if you want to run batches on a schedule and have a single place for monitoring and management. There is sample code in our GitHub repo for an HTTP loader to blob here: https://github.com/Azure/Azure-DataFactory. Then, the act of moving data from the blob to DocumentDB will do the insert for you using our DocDB connector. There is a sample on how to use this connector here: https://azure.microsoft.com/en-us/documentation/articles/data-factory-azure-documentdb-connector/. Here are the brief steps you will take to fulfill your scenario:
Create a custom .NET activity to get your data to blob.
Create a linked service of type DocumentDb (a rough sketch follows after these steps).
Create linked service of type AzureStorage.
Use input dataset of type AzureBlob.
Use output dataset of type DocumentDbCollection.
Create and schedule a pipeline that includes your custom activity and a Copy Activity that uses BlobSource and DocumentDbCollectionSink; schedule the activities to the required frequency and availability of the datasets.
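As a rough sketch of the DocumentDB linked service from step 2 (endpoint, key and database name are placeholders; see the connector article linked above for the exact shape), it would look something like this:
{
  "name": "DocumentDbLinkedService",
  "properties": {
    "type": "DocumentDb",
    "typeProperties": {
      "connectionString": "AccountEndpoint=<endpoint-url>;AccountKey=<access-key>;Database=<database-name>"
    }
  }
}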
Aside from that, choosing where to run your transforms (HDInsight, Data Lake, Batch) will depend on your I/O and performance requirements. You can choose to run your custom activity on Azure Batch or HDInsight in this case.
We have a load-balanced set in Azure for our web application, which load balances ports 80 and 443 between two VMs. We have used the default TCP probe. Is there a way to get the current status of the load balancer's probe from Azure?
I know I could just check each individual machine and probe it myself, but I want to know whether we can see what Azure sees for each machine.
Well, as of 2018-06-05 this feature is not available in the Azure Portal. Today you have to configure "Diagnostic Logs" for the Load Balancer. If you choose the "Storage Account" option, a JSON file is created with records like the ones below:
{
  "time": "2018-06-05T08:50:04.2266987Z",
  "systemId": "XXXXXXXX-XXXX-XXXX-XXXX-d81b04ac33df",
  "category": "LoadBalancerProbeHealthStatus",
  "resourceId": "/SUBSCRIPTIONS/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXX/RESOURCEGROUPS/TEST-INT/PROVIDERS/MICROSOFT.NETWORK/LOADBALANCERS/TEST-LB",
  "operationName": "LoadBalancerProbeHealthStatus",
  "properties": { "publicIpAddress": "XXX.XXX.XXX.XXX", "port": 8080, "totalDipCount": 2, "dipDownCount": 0, "healthPercentage": 100.000000 }
},
{
  "time": "2018-06-05T08:50:09.2415410Z",
  "systemId": "XXXXXXXX-XXXX-XXXX-XXXX-d81b04ac33df",
  "category": "LoadBalancerProbeHealthStatus",
  "resourceId": "/SUBSCRIPTIONS/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXX/RESOURCEGROUPS/TEST-INT/PROVIDERS/MICROSOFT.NETWORK/LOADBALANCERS/TEST-LB",
  "operationName": "LoadBalancerProbeHealthStatus",
  "properties": { "publicIpAddress": "XXX.XXX.XXX.XXX", "port": 8080, "totalDipCount": 2, "dipDownCount": 1, "healthPercentage": 50.000000 }
}
"Log Analytics" suggested by Eric is not mandatory but can be used to analyze these LB logs.
There's an easy solution for this now; not sure when it was added to Azure, but here you go:
Click on the Load Balancer from within the Azure portal.
Under Monitoring, click on Insights.
You should see something like this:
Hopefully your health checks will look healthier than the ones in this image!
You could use 'Log Analytics' to see the current status of the health probe. The link below has more details and step-by-step instructions:
https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-monitor-log
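Once the logs land in a Log Analytics workspace, a query along these lines should show probe health over time. This is only a sketch; the _d-suffixed column names assume the usual AzureDiagnostics flattening, so check the schema in your own workspace:
AzureDiagnostics
| where Category == "LoadBalancerProbeHealthStatus"
| project TimeGenerated, port_d, totalDipCount_d, dipDownCount_d, healthPercentage_d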
You can check https://learn.microsoft.com/en-us/rest/api/load-balancer/loadbalancerprobes, click on Get, and then Try it.
It will ask you to log in with your Azure credentials and enter the LB name, resource group, and the probe on the LB which you want to check.
Fill in the details and it will give you a response indicating whether the probes are healthy or not.
Similarly, you can use https://learn.microsoft.com/en-us/rest/api/load-balancer/loadbalancers/get to get all the details of a particular LB.
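If you prefer the CLI over the Try it pane, the same probe details (its configuration, not its live health) can be pulled with something like this (names are placeholders):
az network lb probe show --resource-group <resource-group> --lb-name <lb-name> --name <probe-name>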