Azure Security Center missing updates

Using Azure Security Center and I have most of my VMs showing an informational warning regarding their System Updates. When I go into them, they don't have any recent data. There is recent data for the OS Vulnerability column, so I know the connection is working, but this data isn't showing up.
What is the mechanism used to scan these for updates? Do I need Windows Update service to be started and Automatic, or anything like that? All my VMs are Windows 2012 or 2012R2, including the few that do appear to be working correctly.

What is the mechanism used to scan these for updates? Do I need
Windows Update service to be started and Automatic, or anything like
that?
An Azure VM is updated the same way your local machine is: either by enabling Windows Update inside the VM or by installing updates manually.
Azure Security Center provides a quick view into the security posture of your Azure and non-Azure workloads, enabling you to discover and assess the security of your workloads and to identify and mitigate risk. It does not update your VMs for you.
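As a quick sanity check inside one of the affected VMs, you can verify that the Windows Update service is present, set to start automatically, and running. This is a sketch, not a confirmed fix for the missing Security Center data; wuauserv is the standard Windows Update service name on Server 2012/2012 R2.

```powershell
# Run from an elevated PowerShell prompt inside the guest VM.
# Inspect the current state of the Windows Update service.
Get-Service -Name wuauserv

# Set it to start automatically and start it if it is stopped.
Set-Service -Name wuauserv -StartupType Automatic
Start-Service -Name wuauserv
```

If the service was stopped on the VMs that show no recent data but running on the few that work, that would support the theory that the scan depends on the guest's update agent.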

Related

Azure window server in-place upgrade but plan is still the same

To avoid some overhead work, I decided to use a workaround to upgrade my VM from Server 2016 to 2019. The workaround was successful and everything is running fine. One hiccup, though, is that I still see the plan set to "2016-Datacenter".
(Correct me if I am wrong.) From some digging, I see that this is set at VM creation time; it corresponds to the SKU of the image used to build the VM.
My question is: are there any gotchas if the VM is running Server 2019 but the plan is set to "2016-Datacenter"?
Plan information is metadata Microsoft uses to track Marketplace offers. If you want to create an image in a shared gallery from a source that was originally created from an Azure Marketplace image like this, you may need to keep track of the purchase plan information. You may face issues when creating a VM from the Azure Marketplace image if the plan information is wrong. Read here for more details.
It is possible to do an Azure VM in-place upgrade to Windows Server 2019; here is a step-by-step process to upgrade an IaaS Windows Server VM to Windows Server 2019 for your reference.
However, it's not recommended, because Microsoft does not support an in-place upgrade of the operating system of an Azure VM and prefers a clean installation instead. To work around this, create an Azure VM that's running a supported version of the operating system, and then migrate the workload.
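As a sketch of how to check what plan metadata is actually recorded on the upgraded VM, the Azure CLI can surface it alongside the image reference. The resource group and VM names below are placeholders.

```shell
# Hypothetical example: show the plan block and the image reference
# the VM was created from, side by side.
az vm show \
  --resource-group myResourceGroup \
  --name myUpgradedVm \
  --query "{plan: plan, image: storageProfile.imageReference}" \
  --output json
```

If the upgrade was done in-place, expect the plan and image reference to still name the 2016-Datacenter SKU even though the guest OS reports Server 2019.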

Azure WebApps - Are they patched by Microsoft?

If you have a virtual machine, you are required to apply patches every Patch Tuesday and ensure the OS is up to date to prevent security issues.
If you use a PaaS Azure WebApp, does Microsoft take care of patching the underlying OS?
If so, would you see downtime when this happens? Or are all the apps on that host OS moved to another host in some way?
For the first question: that is kind of the point of PaaS. Azure takes care of the OS patches for you.
As for an answer to your other questions, this GitHub issue is quite good: https://github.com/Azure/app-service-announcements/issues/63.
Most updates can be performed without affecting your services running on the platform’s infrastructure. For this update, you’ll notice a restart of your web apps, the same that takes place during our regular monthly OS update. Our goal is to avoid service interruptions and, as with every upgrade to the service, we will be monitoring the health of the platform during the rollout.
Your apps are moved to another update domain transparently while the patch is applied to the update domain that hosted your app. It does cause an app restart of course.
Take a look a the blog we just published describing what goes on behind App Service updates - https://blogs.msdn.microsoft.com/appserviceteam/2018/01/18/demystifying-the-magic-behind-app-service-os-updates/

Azure security - Hardening of O/S builds, security standards?

This is a question for Azure experts, in particular around the Windows VM's available in Azure.
Do they make any changes to the base build? Hardening and security standards? Or are they standard builds fresh out the box?
Any information on this would be greatly appreciated.
Yes. Public and up-to-date information about security measures such as compliance, along with some technical details, can be found in the Azure Trust Center.
I don't think Microsoft reveals all of its internal implementation details, but a lot of work goes into isolation of the hypervisor, the root OS, and guest VMs. The Azure Fabric Controller is the "brain" that secures and isolates customer deployments and manages the commands sent to the Host OS/hypervisor, and the Host OS itself is a configuration-hardened version of Windows Server.
Some basic information can be found here:
https://technet.microsoft.com/en-us/cloud/gg663906.aspx
Azure Fabric Controller: https://azure.microsoft.com/en-us/documentation/videos/fabric-controller-internals-building-and-updating-high-availability-apps/
I also recommend following Mark Russinovich, Azure's CTO, as his videos are among the most revealing about Azure internals that I have seen.
You might wanna check out the CIS hardened Images in the Azure Marketplace: https://www.cisecurity.org/cis-hardened-images-now-in-microsoft-azure-marketplace/
There you can choose between two levels of hardening depending on your workload, and there are multiple Windows Server versions and even some Linux distributions available. If you want to harden the VMs yourself, I would check out the dev-sec project on GitHub: https://github.com/dev-sec
There you can customize the hardening to your needs if you have an automation tool in place, such as Chef or Puppet.
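As an illustrative sketch, a dev-sec baseline can also be run as a one-off audit with Chef InSpec rather than through a full Chef/Puppet setup; the target hostname and credentials below are placeholders, and the baseline URL is the dev-sec Windows baseline repository.

```shell
# Hypothetical example: audit a Windows VM against the dev-sec
# windows-baseline profile over WinRM, without changing anything.
inspec exec https://github.com/dev-sec/windows-baseline \
  -t winrm://Administrator@myvm.example.com \
  --password '<password>'
```

Running the profile in audit mode first shows which controls would fail before you commit to enforcing the hardening with an automation tool.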

Setting "Security Enabled Access" on SQL Database to "Required" in Azure Management Portal breaks Automated Export

I've been running an instance of SQL Azure for a while now and making use of the Automated Export feature to backup directly into Azure Storage.
I've recently switched over to use the Security Enabled Connection String-
{server}.database.secure.windows.net
-so I could make use of the auditing features in Azure too. I set my Security Enabled Access settings to Required to enforce that, as I don't want to miss out on the auditing.
However, I've had no new backups in Azure Storage since I switched over. I've investigated the issue but can't come to a solid conclusion about what's going wrong.
I'm still able to connect to the server and view the database in SQL Management Studio using the non-secure connection string-
{server}.database.windows.net
-but I can't see any tables in the database, which is good as that indicates that the secure connection is indeed required.
My gut feel is that the automated backup in Azure uses the non-secure connection string by default and hasn't picked up the Required Security Enabled Access setting.
The automated backup feature is still in preview mode so the setting may not be supported yet.
So the question is:
Does anyone have any links to official resources detailing this limitation and/or has also experienced the same problem and has a workaround?
Update: the issue below has been fixed. With SECURITY ENABLED ACCESS set to Required, Import/Export now works.
There is a known issue, which is getting fixed, when using SECURITY ENABLED ACCESS Required and Import/Export. There are a couple of workarounds you can use to get around this.
You can use client tools, like SSMS, log in to your database as the server principal, and use the built-in Export Data-tier Application functionality. To do this, log in with the secure connection, right-click the database you want to export, select Tasks, and select Export Data-tier Application. Make sure you update SSMS with the update that supports V12 servers. You can find it here.
If you don't need to audit the import/export, you could also set SECURITY ENABLED ACCESS to Optional.
I suggest using the first method because the operation will still be audited.
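The SSMS export described above can also be scripted with the SqlPackage utility; pointing it at the security-enabled endpoint keeps the export on the audited connection. The server, database, and credential values below are placeholders.

```shell
# Hypothetical example: export a data-tier application (.bacpac)
# over the security-enabled endpoint using SqlPackage.
sqlpackage /Action:Export \
  /SourceServerName:myserver.database.secure.windows.net \
  /SourceDatabaseName:MyDatabase \
  /SourceUser:serveradmin \
  /SourcePassword:'<password>' \
  /TargetFile:MyDatabase.bacpac
```

The resulting .bacpac file can then be uploaded to Azure Storage yourself, which approximates what the Automated Export feature does.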
The issue is now fixed. There should be no problem running automated / manual import / export while having auditing enabled.

Memory metrics missing from Azure dashboard

We've recently started using Azure to host some virtual machines, but I've got problems getting the grips on the available resource monitoring metrics.
When I go to the dashboard for the virtual machine, I have the option to add metrics for several things, but Memory Available is missing:
When reading about how to monitor cloud services, it seems clear that you should have the option to add a metric for Memory Available. Reading other posts here on Stack Overflow, I see other tools such as MetricsHub mentioned, but I don't think this is what we want, as we don't need any monitoring endpoint; we only want to see memory usage in the Azure dashboard (and apps from the Azure store aren't available to us, since we're on an Enterprise Agreement).
Am I missing something obvious here? What must be done to add memory monitoring to the dashboard?
Cloud Services is not the same as Virtual Machines. When you use cloud services, Azure will provision VMs for you and Azure is able to install monitoring tools that see the amount of available memory. When you create your own VMs Azure can't and shouldn't do that. In other words, with VMs you are on your own. The metrics you do see in the portal are the ones that can be measured from outside the VM.
If you do deploy as a Cloud Service then initially you will only have the same metrics as for the VM. There are several ways you can change this.
The easiest is to go to the configuration for your cloud service in the Management Portal and change the logging level from Minimal to Verbose; that will enable a lot more metrics. Alternatively, you can specify which metrics you want collected in the cloud configuration in your project in Visual Studio. It is also possible to do this in code, though that is not the currently recommended practice; instead, use the configuration tool in the cloud project in Visual Studio.
The key thing to understand about metrics in Cloud Services is that, whichever way you elect to configure them, they are stored in a standard way in Table Storage and Blob Storage. That means whether you use the Azure Management Portal, the tool in Visual Studio, or code, the outcome is the same. It also means that a variety of tools, including Cerebrata, Visual Studio and, indeed, the Management Portal, can all read this data.
It is also worth noting that because of the way this works, the configuration can be changed at runtime, usually through the portal but there are other tools and approaches in code.
In my experience, you normally only want to sample your performance metrics every two minutes, but do the log shipping every minute. Also note that you can configure trace logs, IIS logs, etc. to be available to tools like Visual Studio and Cerebrata. For Cloud Services, it is quite rich functionality, but it takes some working with it before you start to "get" it all. Enjoy!
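As an illustrative sketch, the two-minute sampling with one-minute shipping described above maps to a diagnostics configuration (classic diagnostics.wadcfg schema) roughly like the following; the specific counter, quota values, and intervals are assumptions, not a recommendation.

```xml
<DiagnosticMonitorConfiguration configurationChangePollInterval="PT1M"
                                overallQuotaInMB="4096">
  <!-- sampleRate: collect every two minutes;
       scheduledTransferPeriod: ship to Table Storage every minute -->
  <PerformanceCounters bufferQuotaInMB="512" scheduledTransferPeriod="PT1M">
    <PerformanceCounterConfiguration
        counterSpecifier="\Memory\Available MBytes"
        sampleRate="PT2M" />
  </PerformanceCounters>
</DiagnosticMonitorConfiguration>
```

Because the configuration lives with the deployment rather than in code, it can be changed at runtime without redeploying, as noted above.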
You can monitor memory and other "Guest" level metrics in Azure, here's how:
In Azure, go to your virtual machine and scroll down the settings to Monitoring > Diagnostics settings.
Click to enable guest-level monitoring; it can take a few minutes.
Then you can go into Metrics for the VM, or Monitor at the top level:
choose the resource (the VM)
choose Guest in the metric namespace, it will load all the new metrics
choose Memory\Committed bytes or whatever ones you want.
You can then pin to dashboard etc as you would normally
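The portal steps above can also be scripted. A rough Azure CLI equivalent (resource group, VM, and storage account names are placeholders) fetches the default diagnostics configuration, fills in its placeholders, and applies it to the VM:

```shell
# Hypothetical sketch: enable guest-level diagnostics from the CLI.
# Look up the VM's resource ID first.
vm_resource_id=$(az vm show \
  --resource-group myResourceGroup --name myVm \
  --query id --output tsv)

# The default config template contains placeholder tokens that must be
# substituted with the real storage account and VM resource ID.
default_config=$(az vm diagnostics get-default-config \
  | sed "s#__DIAGNOSTIC_STORAGE_ACCOUNT__#mystorageacct#g" \
  | sed "s#__VM_OR_VMSS_RESOURCE_ID__#${vm_resource_id}#g")

az vm diagnostics set \
  --resource-group myResourceGroup \
  --vm-name myVm \
  --settings "$default_config"
```

Once the extension is provisioned, the Guest metric namespace should appear in the Metrics blade as described in the steps above.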
It should be possible to install Azure Diagnostics on the VM using the PowerShell cmdlet Set-AzureVMDiagnosticsExtension:
http://azure.microsoft.com/en-us/documentation/articles/cloud-services-dotnet-diagnostics/#virtual-machine
or using the new management portal:
http://feedback.azure.com/forums/231545-diagnostics-and-monitoring/suggestions/5535368-provide-azure-diagnostics-runtime-for-vm-iaas
I've tried to configure it using the new portal; I can see that the IaaSDiagnostics extension is installed on the VM, but no luck yet with getting the data.
