Just landed a new position where I will be in charge of system integration and automation work related to security. I have never done any integrations or automation, so this is my first rodeo. I have the following tools at my disposal:
Zscaler
Azure Sentinel
Microsoft Cloud App Security
Microsoft Power Suite
McAfee ePO
I have been given a list of action items to complete. A lot of them require responding to an incident as soon as it occurs, which is where I am lost. For example, say Zscaler detects an infection and we want X and Y actions to happen once it is detected. How do I ensure our systems are alerted immediately after the incident occurs? I am guessing this is a matter of querying an API, but what is the proper way of setting this up with the tools I have?
Normally you would send the logs from those security tools to Log Analytics and then construct KQL queries on top of them.
For example, after setting up a custom log source for McAfee ePO, you could create a recurring query such as:
McafeeEPO
| where EventType == "ThreatEventLog"
| extend HostCustomEntity = hostname_s, AccountCustomEntity = username_s, IPCustomEntity = ipv4_s
I used https://github.com/Azure/Azure-Sentinel/blob/master/Detections/EsetSMC/eset-threats.yaml as an example; you can check the other detections in that repository as well.
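For the Zscaler scenario in the question, a rough sketch of a similar detection might look like the query below. This assumes the Zscaler logs are forwarded through the CEF/Syslog connector into the CommonSecurityLog table; the filter values and the entity columns are placeholders you would adapt to your own log schema. Once saved as a scheduled analytics rule in Sentinel, the resulting incident can trigger a playbook that performs the follow-up actions automatically.

CommonSecurityLog
| where DeviceVendor == "Zscaler"
| where Activity has "Malware" or Activity has "Virus"
| extend HostCustomEntity = SourceHostName, AccountCustomEntity = SourceUserName, IPCustomEntity = SourceIP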
We are using Azure Application Insights. At the moment I have to manually check the exceptions after each deployment to see whether a new one has appeared. Has anyone figured out a way to get notified (via an Azure alert) once a new exception appears? For example, other error trackers like Sentry support this.
Example:
We did a deployment at 15:15
A previously unknown exception appears at 15:17
An email is sent to me with content "New exception X appeared in project Y"
Smart detections are being replaced by alerts. The only way to get this kind of notification is to write a query that surfaces your new exceptions, and configure the evaluation period so the alert can fire.
Navigate to the Application Insights resource in the Azure Portal.
Select Logs under the Monitoring blade.
Construct your log query and check the results.
Click on + New alert rule.
Configure your alert as follows:
The above alert fires whenever the result count of the custom log search query over the last 1 day is greater than 0, and it is evaluated every 6 hours. You can customize the period and frequency as needed.
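As a sketch of what such a query could look like, here is one way to flag exception types that were not seen before, assuming the default Application Insights schema (the exceptions table and its type column) and time windows you would tune yourself:

// exception types already seen in the 7 days before the most recent day
let knownTypes = exceptions
    | where timestamp between (ago(8d) .. ago(1d))
    | distinct type;
// exception types from the last day that never appeared in that baseline
exceptions
| where timestamp > ago(1d)
| where type !in (knownTypes)
| summarize NewExceptionCount = count() by type

With the alert condition set to fire when the number of results is greater than 0, this matches the behaviour described above.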
You can also run through the detailed guide for troubleshooting problems with Azure Monitor alerts if you hit issues.
You can try Smart Detection, specifically the alert for abnormal rise in exception volume.
When would I get this type of smart detection notification?
You get this type of notification if your app is showing an abnormal rise in the number of exceptions of a specific type, during a day. This number is compared to a baseline calculated over the previous seven days. Machine learning algorithms are used for detecting the rise in exception count, while taking into account a natural growth in your application usage.
If you never got a specific exception before a release, I would consider that a rise in exceptions of that type, and you should get an alert. However, the alert won't fire if very few exceptions are occurring, and it won't be as detailed as what you described in your question.
PROBLEM
We want to track changes in user calendars, but are concerned with how often we'd need to check 2000+ user calendars (Outlook).
Would the process of monitoring over 2000 user calendars present a problem for our network?
WORKFLOW
Trigger (Check for calendar change) -> ACTION (Http: update a DB)
The trigger we are using checks a calendar every 2 seconds. Is there a trigger that behaves like a "subscription" object, where it simply handles a change notification?
Regarding how often to check the calendar events, it depends on your requirements. In my opinion, if you set the trigger to check for events every 2 seconds (which is quite frequent), you should verify whether your Logic App trigger is set to run in parallel: click the ... button on the trigger, select "Settings", and review the concurrency configuration.
As for your question about whether there is a trigger that behaves like a "subscription": I'm afraid no Logic Apps trigger implements this requirement. We can also check whether any backend API can implement it; see the Microsoft Graph API documentation.
The example in the Graph documentation is for mailFolders, but it is the same with events. It is necessary to specify a user (such as /me or /users/{id}) or a group before /events, so I don't think you can monitor events for all users through a single subscription. You can raise a post on the Azure feedback page to suggest that the developers add this feature.
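If a push model is a hard requirement, one option outside Logic Apps is a Microsoft Graph change-notification subscription created per user. The sketch below is only an illustration of that idea, not a Logic Apps feature: the access token, user id and notification URL are placeholders, you would still need one subscription per mailbox for your 2000+ users, and each subscription has to be renewed before it expires.

import requests

GRAPH = 'https://graph.microsoft.com/v1.0'
token = 'ACCESS_TOKEN_HERE'        # app token with calendar read permission (placeholder)
user_id = 'USER_ID_OR_UPN_HERE'    # one subscription per user (placeholder)

subscription = {
    'changeType': 'created,updated,deleted',
    'notificationUrl': 'https://example.com/api/calendar-webhook',  # your HTTPS listener (placeholder)
    'resource': f'/users/{user_id}/events',
    'expirationDateTime': '2019-11-01T00:00:00Z',  # must be renewed before it expires
    'clientState': 'some-secret-value'
}

r = requests.post(f'{GRAPH}/subscriptions',
                  headers={'Authorization': f'Bearer {token}'},
                  json=subscription)
print(r.status_code, r.json())

Microsoft Graph then POSTs a notification to notificationUrl whenever that user's calendar changes, which an HTTP-triggered Logic App or your database-update action could consume.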
There is a one-year-old question, "How can I retrieve through an API Live Metrics of Microsoft Application Insights", about whether it is possible to pull the Live Metrics data that Application Insights generates for an application through some API.
Right now I don't see anything Live Metrics related in the official documentation (https://dev.applicationinsights.io/reference), and the answer to the old question was also that there is no way to get it.
But maybe someone knows whether the Application Insights team's plans have changed this year and they are working on such an API?
It would be really useful to pull that data in real time through an API into our own alerting/metrics system, to process data from different microservices/applications and display it in an aggregated way in real time.
For example, we could build something like Opserver has, but based on different applications and their Application Insights data.
As it stands right now, there is no way to get it.
Note: I work on the Application Insights team at Microsoft.
Live Metrics data is not persistently stored anywhere, and there is no API to retrieve it. The data is collected only when someone is actively on the Live Metrics portal page; the moment the browser window is closed, the data is gone as well.
If your goal is to get metrics or other telemetry in real time, you can do that by implementing your own ITelemetryProcessor. Most people use an ITelemetryProcessor to "filter" out unwanted telemetry, but that is not a rule: all telemetry passes through the telemetry processors, and you can choose to filter the data or do something else with it. In your case you would send it instantly to some real-time service. In fact, Live Metrics (internally known as QuickPulse) is implemented as a telemetry processor (https://github.com/microsoft/ApplicationInsights-dotnet/blob/develop/WEB/Src/PerformanceCollector/Perf.Shared/QuickPulseTelemetryProcessor.cs#L158).
General doc about TelemetryProcessor:
https://learn.microsoft.com/en-us/azure/azure-monitor/app/api-filtering-sampling#create-a-telemetry-processor-c
I am currently trying to pull data from the Eventbrite API in JupyterLab. Sporadically, I receive a 406 Not Acceptable error when I make the request; however, if I make the same request again a few minutes later, it invariably pulls the data fine.
I've checked the usual things, i.e. that I have not gone over my request limits.
Here is the request I am currently making:
import requests

url = 'https://www.eventbriteapi.com/v3/events/search/?token=MY_TOKEN_HERE&location.latitude=42.34631505453378&location.longitude=-71.04174243961083&location.within=3km&start_date.range_start=2019-10-30T00:00:00Z&start_date.range_end=2019-11-30T00:00:00Z&expand=venue'
x = requests.get(url)
x
And the response:
<Response [406]>
Any thoughts on what the problem might be?
Yeah, I got an email from them a little over a week ago saying:
Hello,
We're reaching out today to follow up regarding the events/search/ endpoint. Thank you for your patience while we worked to reach a conclusion to the issue.
Access to the Eventbrite Event Search API (GET /v3/events/search/) will be shut down at 11:59 pm PT on Thursday, December 12, 2019.
We strongly encourage you to find and remove any code that makes requests to this Event Search API from your applications in advance.
Why is this happening?
We’re removing the Event Search API to further improve the Eventbrite platform and allow us to support more Creators and their events. Allowing public access to this particular API was impacting our platform and the high level of service we strive to provide to our creators and their attendees. We are able to provide alternative access to retrieve your event data through our Event APIs (see below).
What is the replacement API?
To get Events via our API, please see:
• Retrieve an Event by ID — GET /v3/events/:event_id/
• List Events by Venue — GET /v3/venues/:venue_id/events/
• List Events by Organization — GET /v3/organizations/:organization_id/events/
If you’re retrieving private events on behalf of another user, you can complete the app authorization flow. If you’re interested in retrieving public events on behalf of many Eventbrite creators, you can apply to our distribution partner program.
We apologize for the delay in communication regarding this decision, as well as for the inconvenience and frustration this change has caused.
Regards,
Eventbrite Developer Support
So it looks like that's finally confirmed as dead at least.
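If you already have your organization id, a minimal sketch of the "List Events by Organization" replacement mentioned in that email might look like this; the token and organization id are placeholders, and the response shape and supported parameters should be verified against the current Eventbrite docs:

import requests

TOKEN = 'MY_TOKEN_HERE'            # placeholder
ORG_ID = 'MY_ORGANIZATION_ID'      # placeholder

url = f'https://www.eventbriteapi.com/v3/organizations/{ORG_ID}/events/'
r = requests.get(url, params={'token': TOKEN, 'expand': 'venue'})
r.raise_for_status()
events = r.json().get('events', [])
print(len(events), 'events returned')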
FYI... this is an ongoing Eventbrite API issue that is causing problems for many. See: https://groups.google.com/forum/#!topic/eventbrite-api/-E0MG7THMsc
Spring Integration really helps us a lot with application integration; it lets us focus more on flow design.
However, we want to log all file-processing steps and use log analytics tools to check how one specific file (message) has been processed.
The question is: how do we log a grouping id for each message, so that the steps can be grouped for checking in another log analytics tool?
Thanks
Consider turning on Message History for your application. This way, at the end of the flow you can extract the MessageHistory.HEADER_NAME header with all of the traveling information for the message.
Otherwise you really don't have a choice other than to add some business header at the beginning of the flow and parse the logs for such a common key.