I want to monitor their on-premises AD infrastructure with Azure Monitor and generate reports on these metrics:
a. Details of users currently disabled in AD
b. Users with "password not required" set
c. Users with "password never expires" set
d. Users with “Kerberos pre-authentication disabled”
e. Users not logged on for last 90 days
f. Stale computers/Computers inactive for last 90 days
g. Objects trusted to authenticate for delegation
h. Uptime of AD infra (average uptime of all domain controllers)
Does the Active Directory Health Check solution in Log Analytics meet all these expectations?
I don't think the AD Health check solution (which is more of an AD assessment) can help with all of your above tasks.
From what I know, this solution consists of "several health check assessments which run on regular intervals in domain controllers to identify issues and risks and provide recommendations to improve the existing infrastructure up to Microsoft best practices", as described in this community article.
Possible solution:
If you know where to get all this information from Event Viewer or another log file on the system, you can use the Azure Monitor Agent (AMA) or the Log Analytics agent (MMA) to monitor specific events or specific records/lines from custom logs on the server (see the PowerShell sketch below the links for one way to produce such a log).
AMA Agent = https://learn.microsoft.com/en-us/azure/azure-monitor/agents/azure-monitor-agent-overview?tabs=PowerShellWindows
Data collection for specific events = https://learn.microsoft.com/en-us/azure/azure-monitor/agents/data-collection-rule-azure-monitor-agent
If you prefer the classic MMA agent (which will be replaced by the AMA agent above in about 2 years), you can look here:
MMA Agent = https://learn.microsoft.com/en-us/azure/azure-monitor/agents/log-analytics-agent
Custom Log collection = https://learn.microsoft.com/en-us/azure/azure-monitor/agents/data-sources-custom-logs
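For reference, items a through g can be pulled with the ActiveDirectory PowerShell module and written to files that the agent then collects as a custom log; item h (uptime) is better derived from heartbeat/event data once the domain controllers report to the workspace. A minimal sketch, assuming the RSAT ActiveDirectory module is installed and that C:\ADReports is a hypothetical folder the agent is configured to collect:

```powershell
# Minimal sketch - run on a machine with the RSAT ActiveDirectory module.
# C:\ADReports is a placeholder folder that the agent collects as a custom log.
Import-Module ActiveDirectory

$reportDir = 'C:\ADReports'
New-Item -ItemType Directory -Path $reportDir -Force | Out-Null

# a. Disabled users
Search-ADAccount -AccountDisabled -UsersOnly |
    Select-Object Name, SamAccountName, DistinguishedName |
    Export-Csv "$reportDir\disabled-users.csv" -NoTypeInformation

# b. Password not required (userAccountControl bit 0x0020)
Get-ADUser -LDAPFilter '(userAccountControl:1.2.840.113556.1.4.803:=32)' |
    Export-Csv "$reportDir\password-not-required.csv" -NoTypeInformation

# c. Password never expires
Search-ADAccount -PasswordNeverExpires -UsersOnly |
    Export-Csv "$reportDir\password-never-expires.csv" -NoTypeInformation

# d. Kerberos pre-authentication disabled (userAccountControl bit 0x400000)
Get-ADUser -LDAPFilter '(userAccountControl:1.2.840.113556.1.4.803:=4194304)' |
    Export-Csv "$reportDir\no-kerberos-preauth.csv" -NoTypeInformation

# e. / f. Users and computers inactive for the last 90 days
Search-ADAccount -AccountInactive -TimeSpan 90.00:00:00 -UsersOnly |
    Export-Csv "$reportDir\stale-users.csv" -NoTypeInformation
Search-ADAccount -AccountInactive -TimeSpan 90.00:00:00 -ComputersOnly |
    Export-Csv "$reportDir\stale-computers.csv" -NoTypeInformation

# g. Objects trusted to authenticate for delegation (userAccountControl bit 0x1000000)
Get-ADObject -LDAPFilter '(userAccountControl:1.2.840.113556.1.4.803:=16777216)' |
    Export-Csv "$reportDir\trusted-to-auth-for-delegation.csv" -NoTypeInformation
```

Run this on a schedule (e.g., a daily scheduled task) so the agent keeps picking up fresh snapshots that you can then query and report on from the workspace.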
Related
There's an awful lot of disjointed documentation on monitoring network/resources in Azure. What I'm looking for is which pieces are needed to get information from VMs, NVA firewalls, Azure load balancers, and other network resources, plus network connectivity, into a single pane of glass in Azure. I'm only concerned about Azure, not on-prem, for now.
I've come across Azure Monitor, Log Analytics workspaces, Event Hubs, VM extensions, Network Watcher, Insights, etc., but I'm not sure which are required and which are not. One doc leads to the next and I end up with 30 tabs open. I'll also need to be able to push logs to other security devices such as a SIEM.
Does anyone know of a deployment guide that wraps this all up in a more logical fashion? Does anyone have feedback on which pieces from Azure (not third parties) are required at a minimum to accomplish a single pane of glass to view my Azure environment holistically?
General overview of observability in Azure
Likely, the thing you're looking for is Azure Monitor. It's an umbrella term for everything observability related inside Azure.
To store metrics and logs you need a Log Analytics workspace: you can query the data with the Kusto Query Language (KQL), visualize the results, and define alerts on queries.
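For example, a minimal sketch of querying a workspace from PowerShell with the Az.OperationalInsights module (the workspace ID is a placeholder, and the Perf table assumes you are already collecting performance counters):

```powershell
# Minimal sketch: run a KQL query against a Log Analytics workspace.
# Requires the Az.OperationalInsights module and a prior Connect-AzAccount.
$query = @"
Perf
| where ObjectName == 'Processor' and CounterName == '% Processor Time'
| summarize avg(CounterValue) by Computer, bin(TimeGenerated, 1h)
| order by TimeGenerated desc
"@

$result = Invoke-AzOperationalInsightsQuery -WorkspaceId '<workspace-id>' -Query $query
$result.Results | Format-Table Computer, TimeGenerated, avg_CounterValue
```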
Alerting is quite a complex beast, as it is spread across the entire cloud. The two types I use the most:
Log Analytics (log query) alerts, which I mentioned above
the Alerts tab, which is available on every Azure component view; for example, open a resource group and scroll down to the Monitoring section
Each component also has a subset of built-in metrics. You have likely noticed that many Azure components display some charts on the Overview view. For example, an Azure Storage Account displays Total egress, Total ingress, and other line charts. When you click on these charts you can customize them. These metrics and charts are free to use.
Microsoft also has an all-in-one observability solution for Azure Functions and Web Apps: Application Insights.
Dashboards allow you to combine multiple charts into a single view and share it with others.
If you care about security, Azure offers Azure Security Center.
Deployment/management strategy
I suggest starting with:
Create a Log Analytics workspace, which is the storage for metrics and logs. The Azure docs explain how to design it: how many instances to use, how to rate-limit ingestion (it can get expensive if it goes out of control), how to control access to it, and so on.
To get logs from Azure components, look for the Diagnostic settings tab on a component's page in the Azure portal; note that not all components have it. I suggest (see the PowerShell sketch after this list for one way to wire this up):
sending the most critical data to the Log Analytics workspace to keep it in a queryable format for 30 days (included in the free tier); this is what you need for investigating current issues with your infrastructure
if you might need logs older than 30 days, sending them to a Storage Account
for the SIEM integration you mentioned, routing the required events to an Event Hub and then processing the stream according to your requirements
So, if you need long-term storage, create an Azure Storage Account.
If you need real-time analysis, build a pipeline based on Azure Event Hubs.
If you have Azure Functions or Web Apps, add Application Insights. In my experience, it is best to start with a separate instance per Azure Function resource or App Service.
Create alerts for each component separately. If you do it through the UI, open the component's page in the portal and look for the Alerts tab there. If you are automating the process (please do so as soon as possible), do not expect an easy trip: I have used both ARM templates and Terraform, and in both cases there are dozens of barely documented features.
Combine the core metrics of related components into dashboards and share them with the team. This guide is a good starting point. Note that when you share a dashboard, it is also persisted as an Azure resource in the subscription.
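As a rough sketch of the workspace creation and diagnostic-settings routing described above (assuming the Az.OperationalInsights and Az.Monitor modules; the classic Set-AzDiagnosticSetting cmdlet is used here, which newer Az.Monitor versions replace with New-AzDiagnosticSetting, and all names and resource IDs are placeholders):

```powershell
# Minimal sketch - resource names and IDs below are placeholders.
# Requires the Az.OperationalInsights and Az.Monitor modules.

# 1. Create the Log Analytics workspace (the storage for metrics and logs)
$ws = New-AzOperationalInsightsWorkspace `
        -ResourceGroupName 'rg-monitoring' `
        -Name 'law-central' `
        -Location 'westeurope' `
        -Sku 'PerGB2018'

# 2. Route a component's platform logs/metrics via its diagnostic settings:
#    a queryable copy to the workspace, a long-term copy to a storage account,
#    and a stream to Event Hub for the SIEM.
$resourceId = '/subscriptions/<sub-id>/resourceGroups/rg-app/providers/Microsoft.Network/loadBalancers/lb-frontend'

Set-AzDiagnosticSetting -ResourceId $resourceId `
    -Name 'central-diagnostics' `
    -WorkspaceId $ws.ResourceId `
    -StorageAccountId '/subscriptions/<sub-id>/resourceGroups/rg-monitoring/providers/Microsoft.Storage/storageAccounts/stmonarchive' `
    -EventHubAuthorizationRuleId '/subscriptions/<sub-id>/resourceGroups/rg-monitoring/providers/Microsoft.EventHub/namespaces/ehns-siem/authorizationRules/RootManageSharedAccessKey' `
    -Enabled $true
```

Repeat the diagnostic-setting step per resource (or template it) so every component you care about lands in the same workspace.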
I have the below two questions; can someone help with them?
1. Is there a script or a way to create a custom alert format for Azure alerts?
2. Is there a way to pin the status of all Azure VMs to a dashboard?
Regarding #1, the feature to customize or configure the alert email format is currently not supported. If interested, I suggest you raise your feedback / feature request in the UserVoice / feedback forum; the responsible product / feature team would triage it, check feasibility, and prioritize the feedback.
Regarding #2, if by 'status' you mean 'PowerState' (whether the VM is running, deallocated, etc.), 'StatusCode' (ok, etc.), or 'ProvisioningState' (succeeded, etc.), then I don't think there is a straightforward way to ingest that particular data directly into a dashboard. That said, you can leverage the 'Heartbeat' Log Analytics (Kusto) table and create a custom view as a dashboard using View Designer; however, since views in Azure Monitor are being phased out and replaced with workbooks, I suggest leveraging workbooks instead.
If not, you may leverage a feature called Azure Monitor for VMs, which helps you analyze the performance and health of your Windows and Linux VMs and monitor their processes and dependencies on other resources and external processes. Here again, you can create interactive reports with VM insights workbooks.
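For the Heartbeat approach, here is a sketch of the kind of query you could pin from the Logs blade or drop into a workbook (shown here via the Az.OperationalInsights module; the workspace ID is a placeholder and the 10-minute threshold is just an example):

```powershell
# Minimal sketch: latest heartbeat per computer as a rough availability signal.
$query = @"
Heartbeat
| summarize LastHeartbeat = max(TimeGenerated) by Computer
| extend Status = iff(LastHeartbeat > ago(10m), 'Reporting', 'Not reporting')
| order by LastHeartbeat desc
"@

(Invoke-AzOperationalInsightsQuery -WorkspaceId '<workspace-id>' -Query $query).Results |
    Format-Table Computer, LastHeartbeat, Status
```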
Hope these inputs help!
We run a software application on Azure for one of our customers. The customer wants to see the performance of the systems. This consists of two parts: the metric information of the servers, and some information I want to provide through custom logging.
My plan is to give the customer access to the portal and only allow him access to the metric information and the custom tables.
It seems to me that by assigning a role to the customer I should be able to block all the other possibilities.
Can someone tell me which actions I have to allow/forbid to achieve this, or where I can find the information on this?
Solution #1
Instead of giving read access to the virtual machine, which may break your security policy, I'd recommend going with an Azure Log Analytics workspace (ref: https://learn.microsoft.com/en-us/azure/log-analytics/log-analytics-overview). That is, you will need to create a workspace that collects and stores server metrics (ref: https://learn.microsoft.com/en-us/azure/log-analytics/log-analytics-quick-collect-windows-computer) and other custom metrics.
Your customer will be given access to the workspace only, where he can see all the metrics in a dashboard. If there is a need for log filtering, you can use the Log Analytics query language (ref: https://learn.microsoft.com/en-us/azure/log-analytics/log-analytics-log-search-transition).
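One way to scope that access is to grant the customer the built-in Log Analytics Reader role on just the workspace; a sketch, where the resource group, workspace name, and object ID are placeholders:

```powershell
# Minimal sketch - grant read-only access to the workspace only.
# Requires the Az.OperationalInsights and Az.Resources modules.
$ws = Get-AzOperationalInsightsWorkspace -ResourceGroupName 'rg-monitoring' -Name 'law-customer'

$params = @{
    # The customer's Azure AD user or group object ID (placeholder)
    ObjectId           = '00000000-0000-0000-0000-000000000000'
    RoleDefinitionName = 'Log Analytics Reader'
    Scope              = $ws.ResourceId
}
New-AzRoleAssignment @params
```

Because the role assignment is scoped to the workspace resource, the customer sees nothing else in the subscription.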
Log Analytics is a paid service; you get up to 10 free-tier workspaces per subscription. The workspace is an Azure resource, so its limits follow the general subscription limits, which means you can create up to 800 workspaces per resource group, multiplied by the number of resource groups the subscription allows (for reference, if you would like to do capacity planning for your workspace-based solution). For Log Analytics pricing, read here: https://azure.microsoft.com/en-us/pricing/details/log-analytics/.
Log Analytics is a good choice, as its value proposition is to offer your customer an intuitive dashboard to monitor their virtual machine performance along with near-real-time monitoring, and the solution is cloud native.
There is also a management solution that bundles VM capacity and performance monitoring, which you can try now: https://learn.microsoft.com/en-us/azure/log-analytics/log-analytics-capacity
Solution #2
Log Analytics might not be your choice because it adds more Azure service and operational cost. If you need a cheaper option, you can collect data from your virtual machines with Performance Counters, a built-in feature of the Windows OS. With Performance Counters you can export to an Excel file, or visualize the data in Power BI or a custom chart.
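As a rough sketch of that approach (the counter names and the output path are just examples), you can sample a few counters with the built-in Get-Counter cmdlet and export them to CSV for Excel or Power BI:

```powershell
# Minimal sketch: sample CPU, memory, and disk counters locally and export to CSV.
$counters = '\Processor(_Total)\% Processor Time',
            '\Memory\Available MBytes',
            '\LogicalDisk(_Total)\% Free Space'

Get-Counter -Counter $counters -SampleInterval 15 -MaxSamples 20 |
    ForEach-Object {
        foreach ($sample in $_.CounterSamples) {
            [pscustomobject]@{
                Timestamp = $_.Timestamp
                Counter   = $sample.Path
                Value     = [math]::Round($sample.CookedValue, 2)
            }
        }
    } |
    Export-Csv 'C:\PerfReports\vm-perf.csv' -NoTypeInformation   # placeholder path (folder must exist)
```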
Other Solutions
You can utilize Azure Monitor and its API to get data; for example, this API: https://learn.microsoft.com/en-us/rest/api/monitor/metricdefinitions/list. You would then need to visualize or format the data in some intuitive way to satisfy your customer; it could be a custom front-end web app, Power BI, or even Excel with charts.
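If you prefer PowerShell over calling the REST API directly, the Az.Monitor module wraps the same endpoints; a sketch, where the VM resource ID is a placeholder:

```powershell
# Minimal sketch: list available metrics for a VM and pull CPU for the last day.
$vmId = '/subscriptions/<sub-id>/resourceGroups/rg-app/providers/Microsoft.Compute/virtualMachines/vm-app01'

# Equivalent of the Metric Definitions - List REST API
Get-AzMetricDefinition -ResourceId $vmId | Select-Object Name, Unit

# Pull a specific metric, e.g. for charting in Power BI or Excel
$cpu = Get-AzMetric -ResourceId $vmId -MetricName 'Percentage CPU' `
        -StartTime (Get-Date).AddDays(-1) -EndTime (Get-Date) `
        -TimeGrain 01:00:00 -AggregationType Average
$cpu.Data | Format-Table TimeStamp, Average
```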
You can also query data in Azure Blob Storage and use Stream Analytics combined with Power BI to visualize it (https://thuansoldier.net/?p=7187).
There is no single solution; it really depends on your existing resource capacity, budget, and so on.
In my company we have one Azure subscription, and two or three users are added to that subscription with the right to create any resource in Azure.
Since these users work on the same subscription and create resources independently, I want to keep track of which user created which resource in the subscription.
Please let me know whether there is any way to see these tracking/activity details per user.
Currently all users have the administrator role/permission.
You are looking for the Activity Log:
The Azure Activity Log is a log that provides insight into the operations that were performed on resources in your subscription
The Activity Log provides customers with a portal and REST API experience to see who performed which management operations (PUT/DELETE/POST) through Azure Resource Manager (ARM) for the past 90 days.
For anything older than 90 days, you have the option to archive the data to a storage account or stream it to an Event Hub if you would like to ingest this data into your own system.
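For a quick look without building anything, here is a sketch using the Az.Monitor module (the 14-day window is just an example; the portal and REST API expose the same data):

```powershell
# Minimal sketch: pull the last 14 days of Activity Log entries and see
# which caller touched which resource. Requires the Az.Monitor module.
$entries = Get-AzActivityLog -StartTime (Get-Date).AddDays(-14) -EndTime (Get-Date)

# Raw trail: who did what, where, and when
$entries |
    Select-Object EventTimestamp, Caller, OperationName, ResourceId |
    Sort-Object EventTimestamp -Descending |
    Format-Table -AutoSize

# Quick summary of operations per user
$entries | Group-Object Caller | Select-Object Name, Count
```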
The Activity Log data is also available through the Operations Management Suite.
http://www.deployazure.com/management/operations-management-suite/azure-activity-log-analytics-alerts-with-operations-management-suite/
A customer has asked we start tracking user and administration actions on our website for security purposes.
e.g. AdminUser {id: 3} impersonated user {id: 5} on 2015-08-04T12:00:00
The thought was we would publish that data using the Application Insights SDK. Our customer would then be able to monitor these events through the Azure portal.
Given the data retention policies of Application Insights, we would enable Continuous Export of data to table storage if a forensic analysis needed to be conducted past the 30 day cut-off.
Are there any obvious red flags to using Application Insights to provide visibility into these security details?
No, that should just work.
Here is how you create events: https://azure.microsoft.com/en-us/documentation/articles/app-insights-api-custom-events-metrics/
Note that if you also want to add custom properties to events, there is a limit on the number of unique property names per application. Currently it is 200, but that may decrease in the future.
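For reference, here is a rough PowerShell sketch of sending such an event. It posts the telemetry envelope straight to the public ingestion endpoint instead of using the SDK (which does this for you), and the instrumentation key, event name, and property names are all placeholders:

```powershell
# Rough sketch - post a custom event to the Application Insights ingestion
# endpoint directly. Instrumentation key and property names are placeholders.
$iKey = '00000000-0000-0000-0000-000000000000'

$body = @{
    name = 'Microsoft.ApplicationInsights.Event'
    time = (Get-Date).ToUniversalTime().ToString('o')
    iKey = $iKey
    data = @{
        baseType = 'EventData'
        baseData = @{
            ver        = 2
            name       = 'UserImpersonated'
            properties = @{
                adminUserId  = '3'
                targetUserId = '5'
            }
        }
    }
} | ConvertTo-Json -Depth 5

Invoke-RestMethod -Method Post -Uri 'https://dc.services.visualstudio.com/v2/track' `
    -ContentType 'application/json' -Body $body
```

The custom properties ('adminUserId', 'targetUserId') are what you would query or filter on in the portal, so keep the set of property names small and stable given the per-application limit mentioned above.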