In Azure Application Insights Logs, I can save custom queries into the "query explorer", but they are saved per Application Insights instance. I want to use the same query in the Logs blade of a different Application Insights instance.
Note that I don't want to query data across Application Insights instances, just save and reuse the queries themselves without duplicating them.
Unfortunately, a saved query can currently only be used within the Application Insights instance it was saved in. A user feedback item has already been raised here.
You can consider using a workbook (note that workbooks are not designed for this and have some limitations, but you can use one to save a query and reuse it against other Application Insights instances).
The steps are as below:
1. Navigate to the Azure portal -> one of your Application Insights instances -> click Workbooks -> create an empty template:
2. Click Add -> then click "Add query":
3. On the new page, select one of your Application Insights instances from the Resource dropdown -> write your query code (an example query is shown after these steps) -> click the "Run Query" button to check the results -> click the Save button to save the workbook:
4. Next time you want to re-use the query written in step 3, just open the saved workbook -> click the Edit button to enter edit mode:
5. Then click the Edit button for the query step while in edit mode:
6. On the new page, click the Change button to select another Application Insights instance. The query will then run against the newly selected Application Insights instance:
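For reference, the "query code" in step 3 is an ordinary Kusto (Log Analytics) query against whichever Application Insights resource is selected in the workbook step. A minimal sketch, assuming the standard requests table of the Application Insights schema (substitute whatever query you actually want to reuse):
requests
| where timestamp > ago(24h)
// hourly request volume and average duration per result code
| summarize Count = sum(itemCount), AvgDurationMs = avg(duration) by resultCode, bin(timestamp, 1h)
| order by timestamp asc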
Hi, I'm using Azure SQL Database and I need to create a notification/alert once the daily growth of the database exceeds a pre-defined number. As an example, I need to send an email to the DB admins once the database has grown by more than 1 GB within the last 24 hours. I was looking for solutions but couldn't find a straightforward way to implement this using Azure. Any help will be appreciated.
You can create alerts for SQL DB using metric alerts and action groups in Azure. Below are the steps you can follow to create an alert on SQL DB data space used over a period of time.
Create a logic app as shown below:
In the Send email action, configure the recipients' mail addresses that should be notified.
Next, create an action group and configure the created logic app in the Actions tab.
Creating the action group:
Once the logic app is selected, click on Review + create.
Now you can create an alert for the SQL DB and select the created action group.
In the Condition tab, select the signal Data space used.
As per your requirement, configure the details as shown below:
In the Actions tab, select the already created action group.
Once that is done, click on Review + create.
This flow will execute whenever the data space used is more than 1 GB for the selected time period.
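If you specifically need to alert on growth over the last 24 hours rather than on an absolute threshold, another option is a log alert driven by a query. This is only a rough sketch and makes two assumptions to verify: that the database's metrics are exported to a Log Analytics workspace via diagnostic settings (which populates the AzureMetrics table), and that the Data space used metric is emitted under the name "storage":
AzureMetrics
| where ResourceProvider == "MICROSOFT.SQL" and MetricName == "storage"  // Data space used, in bytes (assumed metric name)
| where TimeGenerated > ago(24h)
| summarize GrowthBytes = max(Maximum) - min(Minimum) by Resource
| where GrowthBytes > 1073741824  // fire when the database grew by more than 1 GB in the last 24 hours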
Essentially, I'm looking for a way to pull analytic data on how frequently any static resource is accessed/downloaded. (I'm thinking Word documents, PDFs, audio files, video files)
Right now, the files are on a VM behind nginx, so the team can programmatically analyze the access logs.
We'd like to migrate this website to an Azure Python Web App, and it seems smart to put the static files into blob storage. I just can't find a way to get the information we need.
On top of that, it seems Azure doesn't have Application Insights for Linux Web Apps, which is the recommended way of hosting Python.
Anyone know of a way we can achieve this?
Update: There are 2 ways to get the data:
1. Navigate to the Azure portal -> your storage account -> Monitoring -> Metrics. Then click "Add metric" -> for "Metric Namespace", select "Blob"; for "Metric", select "Transactions"; for "Aggregation", select "Sum". Screenshot as below:
Then you need to add a filter. Click the "Add filter" button -> for "Property", select "API name"; for "Values", select "GetBlob". Then you can see the total number of requests to blobs. A screenshot as below:
2. Alternatively, you can log all the requests and then check the logs.
Navigate to the Azure portal -> Monitoring (classic) -> Diagnostic settings (classic) -> then under "blob properties", select the logging options you need, then click the Save button. Screenshot as below:
Note that all the logs are stored in the $logs container in blob storage, but you should use Azure Storage Explorer to see this $logs container (it's not displayed in the Azure portal). There you can see all the requests to the blobs. Screenshot as below:
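If you would rather query the logs than download and parse them, a hedged alternative is to use the newer Diagnostic settings (not the classic ones) and send the blob logs to a Log Analytics workspace; assuming the StorageBlobLogs table is populated there, a query along these lines gives per-file download counts:
StorageBlobLogs
| where TimeGenerated > ago(7d)
| where OperationName == "GetBlob"  // blob read/download operations
| summarize Downloads = count() by Uri
| order by Downloads desc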
Of course you can. Check this article https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blob-static-website-how-to?tabs=azure-portal#metrics for guidance.
The metrics section can help you with that. You can also include Application Insights telemetry from the JavaScript side of your website. Check the documentation here: https://learn.microsoft.com/en-us/azure/azure-monitor/app/javascript
I'm trying to create an alert for an App Insights custom metric on Azure.
e.g. alert if the "My Metric" metric is greater than 40 for 5 minutes.
According to Custom metrics in Azure Monitor this should be possible.
After they're published to Azure Monitor, you can browse, query, and alert on custom metrics for your Azure resources and applications side by side with the standard metrics emitted by Azure.
I created the metric with this code using the App Insights Python SDK (see Usage).
from applicationinsights import TelemetryClient

# send a single custom metric value to Application Insights
tc = TelemetryClient('<YOUR INSTRUMENTATION KEY GOES HERE>')
tc.track_metric('My Metric', 42)
tc.flush()  # make sure the telemetry is sent before the process exits
I can view the custom metric I created. It's the lone blue bar in the screenshot of the Metrics screen in the Azure portal.
However, when I click on the New rule alert button on that screen, I'm taken to the Create rule screen but it displays the following error.
Alerts configuration via Metrics not supported if selection includes multiple resources or more than two metric signals. Please modify your selection and try again or create the rule below. Please click to see the list of supported resources.
AFAIK, I'm only using one resource (the App Insights "Dev" resource) and one metric signal (the "My Metric" metric) as you can see from the screenshot.
Any ideas on what I've done wrong or what I'm missing and how I can correct it?
I'm pretty new to Azure so I'm also open to suggestions on others way of alerting on a custom metric.
Please follow the steps below:
Navigate to the Azure portal -> Monitor -> Metrics -> Add metric. Note that in the "METRIC NAMESPACE" dropdown box, you should select azure.applicationinsights under CUSTOM:
Then in the "METRIC" dropdown box, select your custom metric, such as "My Metric", then click "New alert rule":
On the "Create rule" page, under the CONDITION section, click the link in the screenshot below, then fill in the necessary info and click the Done button:
Another approach to this is to create alerts based on an Analytics query using Custom log search (see also Create, view, and manage log alerts using Azure Monitor), but I prefer the answer I accepted as it's significantly simpler.
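For completeness, the Custom log search route would be driven by a query roughly like the sketch below; it assumes the metric was tracked under the name "My Metric" as in the Python snippet above, and uses the customMetrics table where the Application Insights SDK stores tracked metrics:
customMetrics
| where timestamp > ago(5m)
| where name == "My Metric"
| summarize AvgValue = avg(value)
| where AvgValue > 40  // log alert condition: 5-minute average above 40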
I'm trying to create jobs in Azure SQL Database but I don't know how to do that. Is it possible to create them inside SQL Server Management Studio?
You need to use Azure Automation to schedule the execution of a stored procedure. For instance, you can use Azure Automation to schedule index maintenance tasks.
Below are the steps:
Provision an Automation Account if you don't have one, by going to https://portal.azure.com and selecting New > Management > Automation Account.
After creating the Automation Account, open its details and click on Runbooks > Browse Gallery.
Type the word "indexes" in the search box and the runbook "Indexes tables in an Azure database if they have a high fragmentation" appears:
Note that the author of the runbook is the SC Automation Product Team at Microsoft. Click on Import:
After importing the runbook, let's add the database credentials to the assets. Click on Assets > Credentials and then on the "Add a credential…" button.
Set a credential name (which will be used later in the runbook), along with the database user name and password:
Now click on Runbooks again, select "Update-SQLIndexRunbook" from the list, and click on the "Edit…" button. You will be able to see the PowerShell script that will be executed:
If you want to test the script, just click on the "Test Pane" button, and the test window opens. Enter the required parameters and click Start to execute the index rebuild. If any error occurs, it is logged in the results window. Note that depending on the database and the other parameters, this can take a long time to complete:
Now go back to the editor and click on the "Publish" button to enable the runbook. If we click on "Start", a window appears asking for the parameters. But as we want to schedule this task, we will click on the "Schedule" button instead:
Click on the Schedule link to create a new schedule for the runbook. I have specified once a week, but that will depend on your workload and how quickly your indexes become fragmented over time. You will need to tweak the schedule based on your needs and by running the initial queries between executions:
Now enter the parameters and run settings:
NOTE: you can experiment with different schedules and different settings, e.g. a specific schedule for a specific table.
With that, you have finished. Remember to change the Logging settings as desired:
You can use Microsoft Flow (https://flow.microsoft.com) to create a scheduled flow with the SQL Server connector. Then in the connector you set the Azure SQL server, database name, username, and password.
SQL Server connector
There are many options, but the ones that you can use to run a T-SQL query daily are these:
SQL Connector options
Execute a SQL Query
Execute stored procedure
You can also edit your connection info in the Data --> Connections menu.
Is it possible to show the pod stats of AKS on a shared dashboard?
Why not? You would just need to pull that data from the OMS workspace and create a custom dashboard from those queries.
If you click on an individual entry, it will navigate to the OMS instance and show you the query needed to get that data.
Danny, unfortunately there is no simple "click to pin this chart" functionality currently available in Container Insights (the more modern name for it is Azure Monitor for Containers). We're looking to add it within a couple of months.
The chart runs a query against the Log Analytics store and does a bit of custom processing on the received data to render the chart. You can go to your cluster, open "Metrics" on the menu on the left, and chart the same thing there. You can pin charts from Metrics. Let me know if you need help with that; I can provide more detailed instructions...
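As a starting point for the custom dashboard route mentioned above, a Log Analytics query along these lines can surface basic pod stats; this is only a sketch, assuming Container Insights is enabled and writing to the KubePodInventory table (verify the column names against your workspace):
KubePodInventory
| where TimeGenerated > ago(1h)
// count distinct pods per namespace, broken down by pod status
| summarize Pods = dcount(Name) by ClusterName, Namespace, PodStatus
| order by ClusterName asc, Namespace asc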