In "Log Analytics for network security groups", Microsoft describes how to enable "Counter logs" that keep track of how many times the security rules for NSGs are invoked.
I've followed the instructions in the article, enabling the NetworkSecurityGroupRuleCounter for my NSG, but I don't get any events. I am sure that my Inbound and Outbound rules are being invoked; I can successfully use them to block incoming and outgoing traffic for VMs in the group.
The setting is enabled exactly as shown in the article. Is there something else that's needed to make the Counter logs show up?
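For reference, the CLI equivalent of what I enabled looks roughly like the sketch below; the resource IDs and the setting name are placeholders:

    # Sketch: enable the rule counter logs on an NSG via the Azure CLI.
    # <nsg-resource-id>, <storage-account-id> and the setting name are placeholders.
    az monitor diagnostic-settings create \
      --name "nsg-rule-counters" \
      --resource "<nsg-resource-id>" \
      --storage-account "<storage-account-id>" \
      --logs '[{"category": "NetworkSecurityGroupRuleCounter", "enabled": true}]'

    # Verify that the setting actually exists on the NSG.
    az monitor diagnostic-settings list --resource "<nsg-resource-id>"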
This turned out to be a software fault and not a configuration issue. I finally got an engineer at Microsoft to look at this problem. They restarted an agent on a host machine, which fixed the issue.
Have you tried choosing a different storage account to see if the logs are recorded?
How exactly are you analyzing the logs?
Is the Storage account created in Azure Resource Manager?
Check and make sure that the Storage account that you have chosen for the logs is created in Azure Resource Manager; classic storage accounts are not supported for these diagnostic logs.
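A quick way to check is to look at the account's resource ID (a sketch; the account name is a placeholder):

    # Sketch: inspect the storage account's resource ID. ARM accounts show an
    # ID under the Microsoft.Storage provider; classic accounts live under
    # Microsoft.ClassicStorage and are not returned by this command at all.
    az storage account show --name "<storage-account-name>" --query id --output tsv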
This question may sound a little odd, but here goes: a customer of ours would like access to certain metrics of their environment of our product, which we host on Azure for them. It's a pretty complicated deployment, but in the end it consists of an Application Gateway, some virtual machines and a dedicated Azure SQL database.
The customer would now like to have select metrics from this deployment forwarded to their own DataDog subscription, e.g. VM CPU metrics, database statistics and the like. DataDog obviously supports all this information (which is good), but by default it would slurp in information from all resources within our subscription (which is not OK).
Is there a way to define, at a fine-grained level, which data is forwarded to DataDog, i.e. which resources and which types of metrics to forward for each resource? What are my options here? Is it enough to create a service principal with limited read permissions, or can I configure this somewhere else? I am unfortunately not familiar with DataDog.
The main thing that must be prevented is the customer gaining access, via the metrics forwarding, to other metrics in our subscription - we need to control the exact scope of the metrics.
The pretty straightforward solution to this issue is to create a service principal via the command line, and then to assign the Monitoring Reader role to this service principal for exactly the resources you need. This even works down to the level of specific databases, for example.
Kicker: this is not possible at such a granularity from the UI, but the az command line accepts assigning the Monitoring Reader permission at a deep resource ID level, even though the UI for this is not there. By finding the resource ID in the UI and then using it from the command line, it's possible to achieve exactly this behaviour.
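A minimal sketch of the two steps with the az CLI; the service principal name, IDs and resource path below are all placeholders:

    # Sketch: create a service principal ("datadog-reader" is a placeholder name).
    az ad sp create-for-rbac --name "datadog-reader"

    # Assign Monitoring Reader scoped to one specific resource, e.g. a single
    # database (resource ID copied from the portal); repeat per resource.
    az role assignment create \
      --assignee "<appId-from-previous-step>" \
      --role "Monitoring Reader" \
      --scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Sql/servers/<server>/databases/<db>"

When the DataDog integration authenticates with this service principal, it should only be able to read the metrics covered by these scoped role assignments.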
I want to monitor external systems using Azure Monitor. Is it possible?
For example, I have an on-prem Linux server with a MySQL DB; can I monitor the server and its DB for things like availability, errors, etc.?
Firstly, there is the “Azure Monitor agent”, which is explained here, but I would recommend using the Azure Monitor Log Analytics agent as instructed here. The reason is that, for the “Azure Monitor agent”, as noted in this section, currently only Azure VMs are supported; on-premises VMs, virtual machine scale sets, Arc for Servers, Azure Kubernetes Service, and other compute resource types are not yet supported.
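For the on-prem Linux server in your example, onboarding the Log Analytics agent is a short script install (a sketch; the workspace ID and key are placeholders you copy from the workspace's agent settings):

    # Sketch: install the Log Analytics (OMS) agent on an on-prem Linux server
    # and connect it to a workspace. Replace the placeholders with the workspace
    # ID and primary key from the Log Analytics workspace.
    wget https://raw.githubusercontent.com/Microsoft/OMS-Agent-for-Linux/master/installer/scripts/onboard_agent.sh
    sh onboard_agent.sh -w "<workspace-id>" -s "<workspace-key>"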
Next, if you have the Azure Monitor Log Analytics agent on a Windows machine, then you may have to check the following things:
As explained here, Change Tracking and Inventory requires linking a Log Analytics workspace to your Automation account, so I recommend you double-check that. For a definitive list of supported regions, see Azure Workspace mappings. The region mappings don't affect the ability to manage VMs in a separate region from your Automation account.
Follow these troubleshooting steps in your case (i.e., if you don't see any Change Tracking and Inventory results for Windows machines that have been enabled for the feature).
As mentioned here, note that Change Tracking and Inventory is currently experiencing the following issue in Windows environments: hotfix updates aren't collected on Windows Server 2016 Core RS3 machines.
I am struggling to get my Azure Batch nodes to start within a pool that is configured to use a virtual network. The virtual network has been configured with a service endpoint policy that has a "Microsoft.Storage" policy definition pointing at a single storage account. Without the service endpoint policy on the virtual network the Azure Batch pool works as expected, but with it the following error occurs and the nodes never start.
I have tried creating the Batch account in both pool allocation modes. This did not seem to make a difference; the pool resizes successfully and then the nodes are stuck in "Starting" state. In "User Subscription" mode I found the start-up error, because I can see the VM instance in my account:
VM has reported a failure when processing extension 'batchNodeExtension'. Error message: "Enable failed: processing file downloads failed: failed to download file[0]: failed to download file: unexpected status code: actual=403 expected=200" More information on troubleshooting is available at https://aka.ms/VMExtensionCSELinuxTroubleshoot
From what I can determine, this is an Azure VM extension that runs to configure the VM for Azure Batch. My base image is Canonical, ubuntuserver, 18.04-lts (batch.node.ubuntu 18.04). I can see that the extension is attempting to download from:
https://a52a7f3c745c443e8c2cac69.blob.core.windows.net/nodeagentpackage-version9-22-0-2/Ubuntu-18.04/batch_init-ubuntu-18.04-1.8.7.tar.gz (note I removed the SAS token from this URL for posting here)
There are 8 further files that are downloaded, and it looks like this is configuring the Batch agent on the node.
The 403 error indicates that the node cannot connect to this storage account, which makes sense given the service endpoint policy: the policy does not include this storage account, and the account is external to my Azure subscription. I thought that I might be able to add it to the service endpoint policy, but I have no way of determining which Azure subscription it is part of. If I knew this, I thought I could add it like:
Endpoint policy allows you to add specific Azure Storage accounts to allow list, using the resourceID format. You can restrict access to all storage accounts in a subscription
E.g. /subscriptions/subscriptionId (from https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-service-endpoint-policies-overview)
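In CLI terms, that would be something like the sketch below (all names are placeholders) - except that I can't fill in the external account's resource ID:

    # Sketch: add an allowed storage account to a service endpoint policy.
    # All names and the resource ID are placeholders; the problem is that the
    # Batch-owned account's subscription is unknown, so this can't be filled in.
    az network service-endpoint policy-definition create \
      --resource-group "<rg>" \
      --policy-name "<policy-name>" \
      --name "allow-batch-storage" \
      --service "Microsoft.Storage" \
      --service-resources "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>"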
I tried adding security group rules using service tags for Azure Storage, but this did not help. The node still cannot connect, and this makes sense given the description of service endpoint policies.
The reason for my interest in this is the following post:
https://github.com/Azure/Batch/issues/66
I am trying to minimise the bandwidth charges from my storage account by using service endpoints.
I have also tried to create my own VM, but I am not sure whether the "batchNodeExtension" script is run automatically for VMs that you're using with Batch.
I would really appreciate any pointers because I am running out of ideas to try!
Batch requires a generic rule for all of Storage (it can be the regional variant), as specified at https://learn.microsoft.com/en-us/azure/batch/batch-virtual-network#network-security-groups-specifying-subnet-level-rules. Currently it is mainly used to download our agent and maintain state/get information needed to run tasks.
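As an NSG rule, that would look roughly like this sketch (names and priority are placeholders; "Storage" can be narrowed to a regional tag such as "Storage.WestEurope"):

    # Sketch: allow outbound HTTPS traffic to Azure Storage from the Batch subnet.
    # Names and priority are placeholders; "Storage" can be replaced by the
    # regional service tag for your pool's region.
    az network nsg rule create \
      --resource-group "<rg>" \
      --nsg-name "<nsg-name>" \
      --name "AllowStorageOutbound" \
      --priority 200 \
      --direction Outbound \
      --access Allow \
      --protocol Tcp \
      --destination-address-prefixes "Storage" \
      --destination-port-ranges 443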
I am facing the same problem with Azure Machine Learning. We are trying to fight data exfiltration by using the service endpoint policies (SP policies) in order to prevent sending data to any non-subscription storage accounts.
Since Azure ML compute depends on the Batch service, we were unable to run any ML compute when the SP policy was associated with the compute subnet.
Microsoft stated the following:
Filtering traffic on Azure services deployed into Virtual Networks: At this time, Azure Service Endpoint Policies are not supported for any managed Azure services that are deployed into your virtual network.
https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-service-endpoint-policies-overview#scenarios
I understand from this kind of restriction that any service that uses Azure Batch (which is almost every service in Azure?) cannot use the SP policy, which makes it a useless feature...
In the end we removed the SP policy completely from our network architecture and will consider it only for scenarios where you want to restrict customers to accessing specific storage accounts.
I have several virtual machines and virtual machine scale sets in Azure for which I want to collect Windows Security event logs. I attempted to add these events to the Log Analytics workspace used by Sentinel through the portal.
This produces the following error message.
'Security' event log cannot be collected by this intelligence pack because Audit Success and Audit Failure event types are not currently supported.
It's a hard requirement for me that Sentinel has access to these Security logs. I've been trying to figure out what my options are, and I haven't found a good one yet.
The prescribed approach appears to be setting up a Data Connector in Sentinel for the Security Events. I hit a couple of interesting things attempting this.
Virtual machine scale sets support is limited. No actions are available at this moment.
It looks like I can't connect virtual machine scale sets, which is a big problem. Additionally, I can't even select the tier of the security events (see below) from this context.
So it looks like I have to use Azure Security Center. From within Azure Security Center the only way I can add these Security Events is to turn on Auto-Provisioning and install the Microsoft Monitoring Agent (MMA) on every VM, something I don't want to do. I'm also concerned about the costs of using ASC.
Are there any other options? Am I going about this the wrong way?
The Security event log is automatically added behind the scenes when the monitoring agent is added to the VM.
Regarding the VMSS, I am not sure what your options are there.
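One way to confirm the agent actually landed on a given VM is to list its extensions (a sketch; resource group and VM name are placeholders):

    # Sketch: list VM extensions to confirm the monitoring agent is installed.
    # Look for "MicrosoftMonitoringAgent" (Windows) or "OmsAgentForLinux".
    az vm extension list --resource-group "<rg>" --vm-name "<vm-name>" --query "[].name" --output tsv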
Can someone explain to me the difference between NSG flow logs and NSG diagnostics?
Should I enable both, or just one, for security investigation purposes?
Regards,
Kelly
I searched online but couldn't find much information.
NSG flow logs, as the name suggests, allow you to collect and build analytics on top of the ingress/egress IP packets that flow through your NSG (the primary objective is to analyze network traffic). Note that flow logs can only be integrated with a storage account, i.e. the Blob service (or ADLS), and no additional integration is available by default with Event Hub for now.
Diagnostic logs, as the name suggests, are a higher-level log abstraction, i.e. they provide log details at the tenant/resource group (or resource) scope. Note that diagnostic logs can be streamed to Event Hub and can also be integrated with services such as Azure Monitor and a storage account (Blob store or ADLS).
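To make the distinction concrete, here is a sketch of enabling each from the az CLI (all names and IDs are placeholders):

    # Sketch: enable NSG flow logs via Network Watcher; they land in a
    # storage account (Blob service). All names are placeholders.
    az network watcher flow-log create \
      --location "<region>" \
      --resource-group "<rg>" \
      --name "<flow-log-name>" \
      --nsg "<nsg-name>" \
      --storage-account "<storage-account>"

    # Sketch: enable NSG diagnostic logs; these can also be streamed to
    # Event Hub or sent to a Log Analytics workspace.
    az monitor diagnostic-settings create \
      --name "nsg-diagnostics" \
      --resource "<nsg-resource-id>" \
      --storage-account "<storage-account-id>" \
      --logs '[{"category":"NetworkSecurityGroupEvent","enabled":true},{"category":"NetworkSecurityGroupRuleCounter","enabled":true}]'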