I have a use case I am working on.
An alert is fired in Azure when some conditions are met:
Condition: Whenever the total task complete events is greater than 0
The alert rule has some basic information:
Subscription
Resource Type
Resource Group
There is a Custom properties section and I want to use this to enrich the alert rule. The information I am hoping to include is in a table called AzureDiagnostics. The field values and conditions I need are as follows (I included some custom fields):
OperationName == "TaskCompleteEvent"
jobId_s
id_s == "analyse"
ElapsedTime = datetime_diff('second', executionInfo_endTime_t, executionInfo_startTime_t)
ElapsedTime_in_Hours_Minutes_Seconds = ElapsedTime * 1s
TimeGenerated
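Put together, these come from a query that looks roughly like this (just my rough sketch of the AzureDiagnostics query):
AzureDiagnostics
| where OperationName == "TaskCompleteEvent"
| where id_s == "analyse"
| extend ElapsedTime = datetime_diff('second', executionInfo_endTime_t, executionInfo_startTime_t)
| extend ElapsedTime_in_Hours_Minutes_Seconds = ElapsedTime * 1s
| project TimeGenerated, jobId_s, id_s, ElapsedTime, ElapsedTime_in_Hours_Minutes_Seconds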
Can someone guide me on the best way to include these as custom properties? It would add value to the alert instead of having to go back into Azure to find more information. Any help is appreciated.
I don't really have enough information to go on to answer this properly. However, if you are referring to a default rule that you cannot edit, you can easily create an automation rule to auto-close the alert and use custom KQL that references the SecurityAlert table to enrich it, such as:
SecurityAlert | where DisplayName contains "previousRuleName"
or
If it is already custom KQL, you can simply add to it, whether that is within the same table (by removing the already specified column or otherwise) or by using the join operator.
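For example, here is a rough sketch of enriching the alert results with fields from another table via join; the join key below is only a placeholder, since it depends on what actually links the records in your workspace:
SecurityAlert
| where DisplayName contains "previousRuleName"
| join kind=leftouter (
    AzureDiagnostics
    | where OperationName == "TaskCompleteEvent"
    | project jobId_s, id_s, TimeGenerated
) on $left.SystemAlertId == $right.jobId_s // placeholder join key: use whatever actually links the records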
Related
I have KQL giving me counts of my alerts by severity. The only issue is that when the user closes them (i.e. updates the user response), no column in the alerts table is updated.
So here is the Azure triggered alerts view, but the alerts table shows nothing.
This strikes me as a fairly normal ask.
I am making the assumption that you have a custom KQL query in Azure Resource Graph Explorer to identify Azure Monitor alerts.
Properties such as alertState and monitorCondition are not standalone columns, but nested properties within the dynamically typed "properties" column. As this queries Azure Resource Graph, the records are updated in place rather than a new log entry being added (as it would be in Log Analytics).
Below is a query that extracts the two relevant properties.
alertsmanagementresources
| extend alertState = tostring(parse_json(properties.essentials.alertState))
| extend monitorCondition = tostring(parse_json(properties.essentials.monitorCondition))
| project name, alertState, monitorCondition
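If the goal is the count of alerts by severity and state, a variation along these lines should work (severity also lives under properties.essentials; this is a sketch):
alertsmanagementresources
| extend alertState = tostring(properties.essentials.alertState)
| extend severity = tostring(properties.essentials.severity)
| summarize alertCount = count() by severity, alertState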
If you need further help, please share your query and the information you are looking to retrieve.
Alistair
I am new to SailPoint IdentityIQ.
How can I find the connectors that filter out read-only entitlements during aggregation and certification, please?
Thanks for your help!
During group aggregation, you can use a rule to modify the entries found, including making them requestable or not, modifying their names, or excluding them from IdentityIQ. This rule is attached to the group aggregation task.
You can refer to this article in SailPoint Community:
https://community.sailpoint.com/t5/Technical-White-Papers/Group-Aggregation-Data-Flow/ta-p/79070
Basically, in your group aggregation task there is a dropdown to select or create a rule. You can create a new rule to implement the logic you want. IdentityIQ will invoke your rule once per group object found; if you return null, the group will be ignored, or you can modify the object (change its name or description, for example) and return it.
You can see the parameters IdentityIQ provides in the rule editor interface. The groups you do return from your rule become "Entitlement" objects in IdentityIQ.
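As a rough illustration only (the obj parameter name, the ResourceObject type, and the readOnly attribute below are assumptions; check the rule editor for the exact signature and return type in your IdentityIQ version), the body of such a rule might look like:
// BeanShell sketch, not a drop-in rule.
import sailpoint.object.ResourceObject;

// Assumption: the incoming group is passed to the rule as "obj".
ResourceObject group = (ResourceObject) obj;

// Hypothetical attribute name: skip groups the connector marks as read-only.
Object readOnly = group.getAttribute("readOnly");
if (readOnly != null && "true".equalsIgnoreCase(readOnly.toString())) {
    return null;   // returning null tells IdentityIQ to ignore this group
}

// Otherwise keep it (optionally after changing its name or description),
// so it becomes an Entitlement object.
return group;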
For certification, you can assign a rule to select what you want to certify. In the campaign settings, IdentityIQ certifies entitlement objects only. When it finds a group that is not an entitlement, that group is called an "Additional Entitlement", and there is a checkbox to include or exclude it in the certification.
So if you already took care of the groups you don't want in your group aggregation rule, for certification you can simply set it to exclude additional entitlements.
I am trying to create a change monitor using Terraform: a monitor that checks that, over time, a count stays at 0, evaluated for example every day (the value will occasionally go up to one and then come back to 0).
I found in the UI the ability to create a change alert.
I can't seem to find a way to define the configuration for this type. Does Terraform only support a subset of the monitor types, or does the query need to be changed in some specific way that I can't find documentation for?
I've stumbled upon this as well. I just figured out that you have to manually create the monitor using "change alerts", then go to the "manage monitors" page and open the one you just created, and you'll see the query, which starts with change(...). Copy the whole query into the query field in your Terraform config.
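Assuming this is the Datadog provider, the result ends up in a datadog_monitor block roughly like the sketch below; the resource name, metric, time windows, and threshold are placeholders, and the query string should be the exact change(...) query copied from the UI:
# Sketch only: replace the query with the one copied from "manage monitors".
resource "datadog_monitor" "count_returns_to_zero" {
  name    = "Count did not return to 0"
  type    = "query alert"
  message = "Count stayed above 0 over the last day. Notify @your-team"

  # Placeholder change() query: metric, windows, and threshold are made up.
  query = "change(sum(last_1d),last_1d):sum:my_app.pending_count{*} > 0"

  monitor_thresholds {
    critical = 0
  }
}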
A useful feature of application monitoring services is sending alerts (e.g. emails) each time a new, unique error/problem/exception occurs (i.e. not for each occurrence), either only the very first time or at most once per X time (a day or a week or such). This is, for example, possible with Visual Studio App Center. Unfortunately, I haven't been able to find any such feature in Application Insights.
For clarification, a "new, unique error/problem/exception" can be thought of as a specific log statement in the code. I'm using Serilog, so all logged traces/exceptions have a MessageTemplate property which may help. But ideally the "problem ID" would be based on the code location, too (since multiple log statements may use the same message template).
The best lead I have found is the ability to send alerts based on a custom analytics query, but I'm not sure if it's possible to write a query that gives behaviour similar to (if not exactly like) what I describe above.
Is something similar to the behaviour I describe above possible to achieve with Application Insights? If it's possible through a custom query, how might such a query look?
Through the Azure portal UI alone, it's hard or impossible to achieve your first requirement: alerting only the very first time. But you can try to use the Application Insights REST API to fetch the data, then implement your logic in code.
There is a similar solution (not exactly what you describe) for alerting at most once per X time. The steps are as follows:
1. Navigate to the Azure portal -> your Application Insights resource -> Alerts -> New alert rule -> in the Condition section, click the Add button -> then select "Custom Log Search".
2. In the "Search query" textbox, write your query along these lines:
exceptions
| where xxxx
Note that in the where clause, you should use properties that identify the unique error (see the sketch at the end of this answer).
3. Then in the "Alert logic" section, use the following settings:
Based on: Number of results, Operator: Greater than, Threshold value: 0
4.In the "Evaluated based on", set proper value for Period(max value is 2880 minutes) / Frequency(max value is 1440 minutes).
So if you want to trigger alert 1 time per day, you can set Period to 1440 minutes, set Frequency to 1440 minutes. But you also need to note that, if in the next day, there is no such specified error, it will not trigger in the next day.
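As a sketch of what the query itself might look like: the exceptions table has a problemId column that groups occurrences by exception type and throwing location, and reading the Serilog MessageTemplate from customDimensions is an assumption about how your Serilog sink stores it:
exceptions
| where timestamp > ago(1d)
| extend messageTemplate = tostring(customDimensions.MessageTemplate) // assumption: written by the Serilog sink
| summarize occurrences = count() by problemId, messageTemplate
With the "Number of results greater than 0" logic above, this fires whenever any matching exception occurred in the period; it still cannot distinguish brand-new problems from ones already alerted on, which is why the REST API approach is needed for the first requirement.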
I have a new Windows application that I am adding Application Insights to. Adding a new chart gives the ability to group on specific custom properties using a drop-down. This drop-down has 65 properties that Application Insights must have added at some point; they were not explicitly added by us.
We have a main Application Insights resource that takes all events, and we've also created one for development. The list of custom properties in the drop-down differs between the two, even though the source code is the same.
It makes me suspect that there is some process that creates the drop-down contents based on the incoming data.
The problem here is that the code has changed and some properties are no longer available. We want to eliminate these values from the drop-down and add the new ones.
I am perfectly happy just deleting the entire list. Is there a way to do this?
The items available in the group-by are properties that have ever been received by the back end in data you've sent, and they aren't editable.
For custom properties/metrics, there's a limit on how many named custom properties the back end will allow before it stops collecting new ones. Conceptually, think of the back end as storing an array of 200 elements for each telemetry item you send and mapping each custom property name to an index, and that mapping lasts forever. (I believe the current limit is 200 each, but we're working on expanding that.)
So if developers did things in your dev portal, even sending one item with the custom property "foo", that property will be there forever and takes up one of those 200 slots. These mappings can't be deleted or cleared at the moment.
Also, the contents of the group-by box are limited to properties that have fewer than some threshold of distinct values (I'm not sure of the exact value, but I believe it is under 100 distinct values). So fields like IDs, GUIDs, etc. will eventually stop showing up as group-by options, because the group-by would create N distinct buckets of 1 item each.
It seems like this would already be mentioned on the App Insights UserVoice site, or documented in the Azure documentation for group-by, but I'm not seeing it.
The only real workaround at this time is to create a new Application Insights resource in Azure and start submitting data to that new resource instead of your old one. Then you have to be proactive about never submitting custom properties you're never going to use, and about not mixing case, as "Property1" and "property1" will be distinct properties.
If this is a big issue for you, I'd suggest submitting it to Microsoft Connect as a bug, or entering a UserVoice suggestion as mentioned above. I'll also pass this on as something that really needs to be documented for the group-by feature in the Azure docs.