Backing up and restoring all resources in several resource groups - Azure

Some time ago we had a vendor build a proof of concept in Azure for us. We don't have time at this point to analyze it, and it is incurring costs even though things are turned off. Is there a way for us to export and delete everything so that we could at some point easily import it and spin it back up?
We've got 6 resource groups and 76 resources, which include app services, app service plans, application insights, disks, DNS zones, function apps, key vaults, Kubernetes services, load balancers, network interfaces, network security groups, network watchers, public IP addresses, route tables, service bus namespaces, shared dashboards, SQL databases, SQL servers, storage accounts, virtual machines, and virtual networks.
Some of those services incur costs even when they are turned off, so we want to decommission them entirely but still be able to rebuild everything easily if necessary.
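Something like the following sketch with the Az PowerShell module is what we're hoping is possible (resource group names are placeholders, and we understand exported ARM templates capture configuration only, not the data inside databases or storage accounts):

    # Sketch only: export each resource group as an ARM template, then delete the group.
    # Resource group names are placeholders.
    $groups = @("rg-poc-1", "rg-poc-2", "rg-poc-3")

    foreach ($rg in $groups) {
        # Writes <group>.json into the current directory.
        Export-AzResourceGroup -ResourceGroupName $rg -Path ".\$rg.json"
        Remove-AzResourceGroup -Name $rg -Force
    }

    # Later, to rebuild one of the groups from its exported template:
    New-AzResourceGroup -Name "rg-poc-1" -Location "westeurope"
    New-AzResourceGroupDeployment -ResourceGroupName "rg-poc-1" -TemplateFile ".\rg-poc-1.json"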

Related

Does the Azure Isolated App Service Plan provide dedicated (not shared) hardware?

I know that the Azure Isolated App Service Plan provides several benefits, like:
Network level isolation
More worker units
Better performance
but I wonder if the isolation is also at the hardware level or not.
With Azure App Service Environment version 3 (ASEv3), you can deploy it on a dedicated host group. Host group deployments are not zone redundant for ASEv3.
Azure Dedicated Host is a service that provides physical servers - able to host one or more virtual machines - dedicated to one Azure subscription. Dedicated hosts are the same physical servers used in our data centers, provided as a resource. You can provision dedicated hosts within a region, availability zone, and fault domain. Then, you can place VMs directly into your provisioned hosts, in whatever configuration best meets your needs.
This is NOT available with ASEv2.
Reference:
https://learn.microsoft.com/en-us/azure/app-service/environment/overview
https://learn.microsoft.com/en-us/azure/virtual-machines/dedicated-hosts
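As a rough illustration of the dedicated host concept described above (physical servers dedicated to one subscription, onto which VMs can be placed), here is a sketch with the Az PowerShell module; the resource group, names, region, zone, and SKU are all assumptions:

    # Sketch only: create a dedicated host group and one physical host inside it.
    # Resource group, names, region, zone and SKU are placeholder assumptions.
    New-AzHostGroup -ResourceGroupName "my-rg" -Name "my-host-group" `
        -Location "westeurope" -Zone "1" -PlatformFaultDomain 1

    New-AzHost -ResourceGroupName "my-rg" -HostGroupName "my-host-group" `
        -Name "my-host-1" -Location "westeurope" -Sku "DSv3-Type1"

    # VMs can then be created with a reference to the host or host group,
    # placing them on this dedicated hardware.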

What is the Azure Resource Manager equivalent of VIP Swap?

Azure classic Cloud Services come with a built-in load balancer that allows a fast VIP swap from production to staging, and vice versa. What equivalent is provided by Azure Resource Manager? I can use DNS, but then I have the TTL delay.
I want the fast swap because my back-end servers are stateful and cannot process the same data in both staging and production without overwriting each other. In my current system, out-of-date connections (e.g. because of HTTP keep-alive) are rejected and a reload is forced, so clients establish fresh connections.
I guess I might be able to do it using Azure Application Gateway, but it is not listed as one of its features.
You can do a VIP swap in ARM with two Azure load balancers by disassociating the public IPs and then reassigning them. It's not a fast deployment-slot swap like you can do with cloud services, however, as it can take a minute to disassociate both IP addresses (you could speed this up by doing it in parallel). Based on your question you've already looked at this approach, but I'm documenting it here as an option. There are some notes on this approach here: https://msftstack.wordpress.com/2017/02/24/vip-swap-blue-green-deployment-in-azure-resource-manager/
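A rough sketch of that disassociate-and-reassign swap with the Az PowerShell module might look like the following; the load balancer, frontend, and public IP names are placeholders, and any load-balancing or NAT rules referencing the frontends would need to be handled as well:

    # Sketch only: swap the public IPs between two load balancers' frontends.
    # All names are placeholders.
    $rg       = "my-rg"
    $prod     = Get-AzLoadBalancer -ResourceGroupName $rg -Name "lb-prod"
    $stage    = Get-AzLoadBalancer -ResourceGroupName $rg -Name "lb-stage"
    $pipProd  = Get-AzPublicIpAddress -ResourceGroupName $rg -Name "pip-prod"
    $pipStage = Get-AzPublicIpAddress -ResourceGroupName $rg -Name "pip-stage"

    # Phase 1: disassociate both public IPs by removing the frontend configs
    # (this is the slow part; running the two in parallel would shorten it).
    Remove-AzLoadBalancerFrontendIpConfig -LoadBalancer $prod -Name "frontend" | Set-AzLoadBalancer
    Remove-AzLoadBalancerFrontendIpConfig -LoadBalancer $stage -Name "frontend" | Set-AzLoadBalancer

    # Phase 2: re-add the frontends with the public IPs swapped.
    Add-AzLoadBalancerFrontendIpConfig -LoadBalancer $prod -Name "frontend" -PublicIpAddress $pipStage | Set-AzLoadBalancer
    Add-AzLoadBalancerFrontendIpConfig -LoadBalancer $stage -Name "frontend" -PublicIpAddress $pipProd | Set-AzLoadBalancer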
In Azure Resource Manager there are three options: Azure Load Balancer (layer 4), Application Gateway (layer 7), and Traffic Manager (DNS level). I think you can use Load Balancer in your scenario.
A comparison table of the differences between Load Balancer and Application Gateway is available in the Azure documentation.

Adding existing Azure VMs (classic) to a virtual network

On Azure, I have a two-VM set (both classic), whereby my web application resides on one VM and my database on another. Both map to the same DNS and belong to the same Resource Group, but both are acting as standalone cloud services at the moment. Let me explain: currently the web application communicates with the database over the public DNS. However, I need them to be on the same LAN, so I can reduce security risk and improve latency.
I know for a fact that they're not part of a virtual network because when I try to affix a static private IP to my database VM, I'm shown the following prompt in the portal:
This virtual machine can't be configured with a static private IP address because it's not deployed in a virtual network.
How should I proceed to fix this misconfiguration, and what should my next concrete step be? The website is live, and I don't want to risk service interruption. Ideally, both VMs should be in the same virtual network and should communicate with each other via a static internal IP. Please forgive my ignorance. Any help would be greatly appreciated.
I guess I'll be the bearer of bad news. You have to delete both VMs while keeping the VHDs in the storage account, then recreate the VMs (reattaching the disks) in the Virtual Network.
Since these are Classic VMs, you can use the old Portal when re-creating them. You'll find the VHDs under "My Disks" in the VM creation workflow.
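A rough sketch of that delete-and-recreate flow with the classic (Azure Service Management) cmdlets; all names and the IP address are placeholders, and the cloud service must have no remaining deployment before it can be redeployed into the VNet:

    # Sketch only: remove the VM but keep its OS disk, then recreate it inside the VNet.
    # Service, VM, disk, subnet and VNet names and the IP are placeholders.
    Remove-AzureVM -ServiceName "my-db-service" -Name "my-db-vm"   # the VHD/disk is kept

    New-AzureVMConfig -Name "my-db-vm" -InstanceSize Medium -DiskName "my-db-vm-osdisk" |
        Set-AzureSubnet -SubnetNames "Subnet-1" |
        Set-AzureStaticVNetIP -IPAddress "10.0.0.5" |
        New-AzureVM -ServiceName "my-db-service" -VNetName "my-vnet"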
Alternatively, just restrict inbound access with an ACL on the database Endpoint. Allow the VIP of the first VM and deny everything else. This is good enough for almost any scenario, since if your Web Server gets compromised it's game over anyway; it makes no difference whether they exfiltrate data from your database over a VNet or over the VIP.
Here's the relevant documentation page for setting up Endpoint ACLs:
https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-windows-classic-setup-endpoints/
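A sketch of that ACL approach with the classic cmdlets; the service, VM, endpoint name, port, and VIP are placeholders:

    # Sketch only: permit the web VM's VIP on the database endpoint and deny everything else
    # (once a permit rule exists on the ACL, all other sources are blocked).
    $acl = New-AzureAclConfig
    Set-AzureAclConfig -AddRule -Action Permit -RemoteSubnet "203.0.113.10/32" -Order 100 `
        -ACL $acl -Description "Allow web VM VIP only"

    Get-AzureVM -ServiceName "my-db-service" -Name "my-db-vm" |
        Set-AzureEndpoint -Name "SQL" -Protocol tcp -LocalPort 1433 -PublicPort 1433 -ACL $acl |
        Update-AzureVM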

Domain Controller in Azure VM slow to respond

I have set up a simple domain in my Azure subscription by creating a domain controller in an Azure VM, with all of the associated DNS setup, following documented best practices. This is a cloud-only domain on a cloud-only vnet; there is no on-premises connectivity. I have provisioned and joined a handful of VMs to the domain. Now, when I provision new VMs, they have trouble joining the domain (often failing to join at all), and DNS lookups from these machines often time out, especially to internet addresses. How can I fix this?
Details
I have set up a domain controller on an Azure VM following the practices and steps in "Install a new Active Directory forest on an Azure virtual network" and "Guidelines for Deploying Windows Server Active Directory on Azure Virtual Machines", with the exception that I did not put the AD database on a separate data disk. In addition, I have added 168.63.129.16 as a second DNS address (the first address is the internal vnet address of the DC, which I have made static using Set-AzureStaticVNetIP) in the Virtual Network settings so that the machines on the domain can reach the internet.
I use the PowerShell cmdlets to provision new machines and have them automatically joined to the domain using the -WindowsDomain switch and associated parameters of Add-AzureProvisioningConfig when creating the VMs. I have provisioned the DC in one cloud service, and all other machines in another cloud service. All are on the same vnet subnet, and all of this is in one affinity group. I have provisioned and joined about 15 machines, about ten of which are still running (others deleted).
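Roughly, the provisioning pipeline looks like this sketch (image, names, credentials, and IP addresses are placeholders):

    # Sketch only: provision a classic VM, auto-join it to the domain, and pin a static VNet IP.
    $imageName = "<windows-server-image-name>"   # placeholder
    $localPwd  = "<local-admin-password>"        # placeholder
    $domainPwd = "<domain-join-password>"        # placeholder

    New-AzureVMConfig -Name "member01" -InstanceSize Small -ImageName $imageName |
        Add-AzureProvisioningConfig -WindowsDomain `
            -AdminUsername "localadmin" -Password $localPwd `
            -JoinDomain "corp.contoso.com" -Domain "CORP" `
            -DomainUserName "domainadmin" -DomainPassword $domainPwd |
        Set-AzureSubnet -SubnetNames "Subnet-1" |
        Set-AzureStaticVNetIP -IPAddress "10.0.0.20" |
        New-AzureVM -ServiceName "my-member-svc" -VNetName "my-vnet"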
Usually provisioning a new VM takes about 11-12 minutes. Now I'm seeing that it takes upwards of 30-35, and upon completion the machine has failed to join the domain. DNS lookups across the board are slow and often time out (especially for internet addresses), and on the new machines that were not able to join the domain they often fail completely. Pinging the DC from these machines fails, while it succeeds from machines that joined the domain earlier.
I am not sure if the number of machines on the domain/vnet/cloud service/subscription are the cause of this problem, but I didn't see this problem until I had been using the domain for a while and spun up a number of machines.
One of the more common causes could be that your AD DNS is returning an IP address that cannot be reached internally when joining the domain. When you do an nslookup on yourdomain.local, does it respond with only IPs that are reachable on the internal, private network?
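For example, from one of the machines that is failing to join (yourdomain.local and the DC's internal address are placeholders):

    # Check what the currently configured DNS servers return for the domain:
    nslookup yourdomain.local

    # Check the domain controller SRV record:
    nslookup -type=SRV _ldap._tcp.dc._msdcs.yourdomain.local

    # For comparison, query the DC's internal VNet address directly:
    nslookup yourdomain.local 10.0.0.4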

In Windows Azure, is it possible to have a load balanced endpoint that's only accessible by traffic from a connected virtual network?

I have a Cloud Service that is connected to a LAN through a virtual network. I have a web role that machines on the LAN will be hitting for tasks like telling the cloud service that data needs to be refreshed. Is it possible to have an endpoint that's load-balanced, but that only accepts traffic through the virtual network?
Well... you have a few things to think about.
You could set up your own load balancer in a separate role, which then does the load balancing. You'd probably want two instances to deal with high availability, and if there was any stateful/sticky-session data you'd need to sync it between your two load balancers. OR...
Now: if your code needing load balancing lived in a Virtual Machine, rather than in a web/worker role, you could take advantage of the brand-new IP-level endpoint ACL feature introduced at TechEd. With this feature, you can have an endpoint that allows or blocks traffic based on source IP address. So you could have a load-balanced endpoint balancing traffic between a few virtual machines and limit access to, say, your LAN machines. You could even add your existing Cloud Service (web/worker) VIP so that your web and worker role instances could access the service through the endpoint without going through the VPN. This way, you'd get to take advantage of Azure's built-in load balancer while still providing secure access to your app's services.
You can see more details of endpoint ACLs here.
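A sketch of what that could look like with the classic cmdlets; the service, VM, endpoint, and load-balanced set names, as well as the LAN subnet, are placeholders:

    # Sketch only: add a load-balanced endpoint to two VMs and ACL it to the LAN subnet.
    $acl = New-AzureAclConfig
    Set-AzureAclConfig -AddRule -Action Permit -RemoteSubnet "192.168.0.0/24" -Order 100 `
        -ACL $acl -Description "On-premises LAN only"

    foreach ($vmName in @("apivm1", "apivm2")) {
        Get-AzureVM -ServiceName "my-api-service" -Name $vmName |
            Add-AzureEndpoint -Name "api" -Protocol tcp -LocalPort 80 -PublicPort 80 `
                -LBSetName "api-lb" -ProbePort 80 -ProbeProtocol tcp -ACL $acl |
            Update-AzureVM
    }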
No. The load balancer for a cloud service is public only. You can't predict the IP addresses of the individual instances on the virtual network, so you can't even hook them into your own load balancer. Yes, you can do it with VMs (as David recommends), but then you're running old-school IIS, not a cloud service. I went through this in November 2012 and was unable to find a decent solution.
