I am trying to create a change monitor using Terraform: a monitor that checks that a count stays at 0 over time, evaluated for example every day (the value will occasionally go up to one and then come back to 0).
I found that the UI has the ability to create a change alert.
I can't seem to find a way to define the configuration for this type. Does Terraform only support a subset of the monitor types, or does the query need to be changed in some specific way that I can't find documentation for?
I've stumbled upon this as well. I just figured out that you have to manually create the monitor using "change alerts", then go to the "manage monitors" page, open the one you just created, and you'll see the query, which starts with change(...). Copy the whole query into the query field in your Terraform config.
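To make that concrete, here is a minimal sketch of what the resulting resource can look like; the metric name, scope, and message are placeholders, and the query string itself is whatever you copied out of the UI:

resource "datadog_monitor" "count_change" {
  name    = "Count changed away from 0"
  type    = "query alert" # change alerts are "query alert" monitors under the hood
  message = "The count moved off 0 over the last day. @your-team"

  # Shape: change(aggregation(time_aggr),time_window):metric{scope} op threshold
  query = "change(sum(last_1d),last_1d):sum:my.count.metric{env:prod} > 0"

  monitor_thresholds {
    critical = 0
  }
}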
Related
I have ~20 services that I want to monitor differently; for example, I want the monitor to alert me if ServiceA is over 1 second but ServiceB is over 3 seconds. I currently have a text file listing the services that is set up like
ServiceName,Threshold
For example:
ServiceA,1
ServiceB,3
(For context: eventually I want other tools to access this list of services, so I want a single central list to maintain for all the tools.)
I use a for_each loop in Terraform to access each string ("ServiceA,1"),
then use ${tolist(split(",", "${each.key}"))[0]} -> name (ServiceA)
or ${tolist(split(",", "${each.key}"))[1]} -> threshold (1).
In my Datadog dashboard it creates the SLO and separates the name from the threshold fine. But when I want to create a monitor for this SLO I use:
query = "error_budget("${datadog_service_level_objective.latency_slo["${tolist(split(",", "${each.key}"))[0]}"].id}").over("7d") > 100"
But I am getting an error on this query.
The ".id" reference worked before and currently works for the availability monitor, which uses a text file containing just the service names, with no threshold column.
So I want to be able to loop through this list and have it create custom monitors based on the metadata I put in the text file. My end goal is to have multiple data points per service so I can get really granular across 100+ services eventually; I do not want to do this manually.
I have tried creating a variable for the list of services, but I need to loop through the list inside the resource along with the metadata, and I really do not see how keeping a separate list for just the metadata would even work (a sketch of the parsing pattern follows below). I would love and appreciate any feedback or advice. Thank you in advance.
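For what it's worth, here is a sketch of the pattern described above that parses the file once in locals instead of repeating the split() calls; the file name and monitor arguments are illustrative. Note also that the inner double quotes inside the query string must be escaped with a backslash, which is a likely cause of the parse error on the query shown above:

locals {
  # One "Name,Threshold" pair per line, e.g. "ServiceA,1"
  service_lines = compact(split("\n", trimspace(file("${path.module}/services.txt"))))
  services = {
    for line in local.service_lines :
    split(",", line)[0] => tonumber(trimspace(split(",", line)[1]))
  }
}

resource "datadog_monitor" "latency" {
  for_each = local.services

  name    = "${each.key} latency error budget"
  type    = "slo alert"
  message = "${each.key} is burning its error budget. @your-team"

  # each.key is the service name; each.value (the threshold) would feed the
  # SLO definition itself rather than this error-budget query.
  query = "error_budget(\"${datadog_service_level_objective.latency_slo[each.key].id}\").over(\"7d\") > 100"

  monitor_thresholds {
    critical = 100
  }
}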
I am loading data via pipelines into an ADLS Gen2 container.
Now I want to create a table that records when each pipeline run started and when it completed, with fields like the ones below,
where
startts - start time of the job
endts - end time of the job
and extractts is the extraction time, which is what I want to capture. Is there any approach I can use to create this table? Help will be really appreciated.
It might not be exactly what you need, but you can add columns inside a Copy activity with the expression @pipeline().TriggerTime to get your "startts" field. Go to the bottom option on the Source tab, select "New", and for the value choose "Add dynamic content".
For your "extractts" you could use the activity property called "executionDuration", which gives you the time it took ADF to run the activity, in seconds. Again, use dynamic content and get this: @activity('ReadFile').output.executionDuration, then store the value wherever you need.
In this example I'm storing the value in a variable, but you could use it anywhere.
I don't understand your need for an "endts" field; I would simply compute startts + extractts to get it.
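If you do want endts materialized anyway, here is a sketch of a dynamic-content expression that derives it, assuming the Copy activity is named 'ReadFile' as above and that executionDuration is in seconds. It has to run in an activity after the copy completes (e.g. a Set variable activity):

@addSeconds(pipeline().TriggerTime, activity('ReadFile').output.executionDuration)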
Longtime member; it's been a while since I last posted. I'm building out an extranet and am running into a stupidly frustrating issue. It's my first time using SharePoint Online as a document repository for external (anonymous) users. Using Azure permissioning, I have the documents split across SharePoint repositories based on access level. On top of that I am displaying them in a Highlighted Content web part, but I am not able to filter them by location AND type. I have a custom column in each repository that defines what type the documents are, but when I try to add the AND portion to the KQL it doesn't work. Additionally, the internet seems to be massively void of actual KQL documentation.
(
path:https://domain.sharepoint.com/sites/example/Level%201%20Resources/
OR
path:https://domain.sharepoint.com/sites/example/Level%202%20Resources/
OR
path:https://domain.sharepoint.com/sites/example/Level%203%20Resources/
OR
path:https://domain.sharepoint.com/sites/example/Level%204%20Resources/
OR
path:https://domain.sharepoint.com/sites/example/Level%205%20Resources/
OR
path:https://domain.sharepoint.com/sites/example/Level%206%20Resources/
AND
DocType:"Articles"
)
The above simply pulls all documents from those locations and ignores the AND clause. (In KQL, AND binds more tightly than OR, so the query above is parsed as path1 OR ... OR (path6 AND DocType:"Articles"), which is why the DocType filter appears to be ignored.) I have tried renaming it to use the custom column identifier pulled from the source, and that doesn't work either.
The only real documentation I could find doesn't appear to address filtering on custom column tags.
EDIT: Reformatted the query to pull all docs from the multiple locations using the version below, but the AND portion still isn't working:
path:(
"https://domain.sharepoint.com/sites/example/Level%201%20Resources/"
OR
"https://domain.sharepoint.com/sites/example/Level%202%20Resources/"
OR
"https://domain.sharepoint.com/sites/example/Level%203%20Resources/"
OR
"https://domain.sharepoint.com/sites/example/Level%204%20Resources/"
OR
"https://domain.sharepoint.com/sites/example/Level%205%20Resources/"
OR
"https://domain.sharepoint.com/sites/example/Level%206%20Resources/"
)
The additional issue I was running into was creating a column to separate documents by the category of file (not the literal file type). Apparently SPO doesn't like it when you create a list and then reference that list as a KQL filter. From what I found this morning, the best way to do this is to create a custom "Choice" column, allow some time for it to be crawled and indexed, and then you can reference it via KQL.
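For reference, this is the shape of the combined query once the choice column is searchable (only two paths shown for brevity). Note that the managed property generated for a custom choice column may not match the column's display name; it is often an auto-generated name like DocTypeOWSCHCS, so check the search schema if DocType alone doesn't match:

path:("https://domain.sharepoint.com/sites/example/Level%201%20Resources/" OR "https://domain.sharepoint.com/sites/example/Level%202%20Resources/") AND DocType:"Articles"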
Is there a way to find the list of Azure DevOps work items that were changed in a given period of time?
Something like "the list of test case work items that were changed in the last 60 days". The change can include changes to any of the fields configured for the work item.
Use case: today we have manual test cases that are being automated. If a manual test case changes, we need to update the automated test as well.
So I'm looking for a way to find the list of work items that were changed in any way in a given time period.
In Azure DevOps, go over to your project and under Boards you have a Queries tab.
You can create a query there using Work Item Type = [Any] and Changed Date > @StartOfDay("-60d").
There is also a REST API available for this, so you can automate it however you prefer.
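For the API route, here is a sketch of the equivalent WIQL, which you can POST as {"query": "..."} to https://dev.azure.com/{organization}/{project}/_apis/wit/wiql?api-version=6.0 (organization and project are placeholders, and the api-version may differ in your setup):

SELECT [System.Id], [System.Title], [System.ChangedDate]
FROM WorkItems
WHERE [System.WorkItemType] = 'Test Case'
AND [System.ChangedDate] >= @Today - 60
ORDER BY [System.ChangedDate] DESC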
I have a new Windows application that I am adding Application Insights to. When adding a new chart, there is the ability to group on specific custom properties using a drop-down. This drop-down has 65 properties that AI must have added at some point; they were not explicitly added.
We have a main Application Insights resource that takes all events. We've also created an Application Insights resource for development. The list of custom properties in the drop-down is different between these two, even though the source code is the same.
This makes me suspect that there is some process that builds the drop-down contents from the incoming data.
The problem is that the code has changed and some properties are no longer available. We want to eliminate those values from the drop-down and add the new ones.
I am perfectly happy just deleting the entire list. Is there a way to do this?
The items available in the group-by are properties that have ever been received by the back end in data you've sent, and they aren't editable.
For custom properties/metrics, there's a limit on how many properties the back end will allow before it stops collecting newly named custom properties. Conceptually, think of it as the back end storing an array of 200 elements for each telemetry item type you send, mapping each custom property name to an index, and that mapping lasts forever. (I believe the current limit is 200 each, but we're working on expanding that.)
So if developers did things in your dev portal, even sending one item with a custom property "foo", then that property will be there forever and takes up one of those 200 slots. They can't be deleted or cleared at the moment.
Also, the contents of the group-by box are limited to fields that have sent fewer than some threshold of distinct values. (I'm not sure of the exact value, but I believe it is fewer than 100 distinct values.) So fields like IDs, GUIDs, etc., will eventually stop showing up as group-by options, because the group-by would create N distinct buckets of 1 item each.
It seems like this should already be mentioned on the App Insights UserVoice site, or documented in the Azure documentation for group-by, but I'm not seeing it.
The only real workaround at this time is to create a new Application Insights resource in Azure and start submitting data to that new resource instead of your old one. And then you have to be proactive about never submitting custom properties that you're never going to use, or mixing case, as "Property1" and "property1" will be distinct properties.
If this is a big issue for you, I'd suggest submitting it to Microsoft Connect as a bug, or entering a UserVoice suggestion as mentioned above. I'll also pass this along as something that really needs to be documented for group-by in the Azure docs.