I know that with the Premium tier I can have up to 50 instances running my web app in Azure. If I needed to go beyond this, say 75 instances, what would be the most appropriate way to do it?
Maybe two different app service plans, different web app endpoints load balanced by Traffic Manager?
Thanks!
A Hosting Plan is simply a geographical collection of web servers. Within that hosting plan you can have 'x' number of servers (depending on the SKU).
The machines in a Hosting Plan will be split across fault and update domains, so that a server rack dying, or an upgrade rollout, won't take out all of the servers in the hosting plan.
However, what this doesn't protect you against is geographically scoped issues. If you have a hosting plan in West Europe and the West Europe region suffers an outage, you could lose your entire deployment.
This is where the 'geographical collection of servers' characteristic becomes important. If you create a number of hosting plans across a number of regions, not only will you have local redundancy against fault and update outages, but you will also gain redundancy against regional outages.
Obviously, if you need 500 servers, there is nothing stopping you from creating 10 Premium SKU hosting plans, deploying them all to the West Europe region, and putting some sort of round-robin DNS load-balancing solution in front of them.
But the better solution is to spread them across regions, creating a hierarchy of Traffic Manager profiles to share the load amongst them. With the right automation you can even have regions coming online and offline as your load increases and decreases.
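To make the profile hierarchy concrete, here is a minimal sketch using Python and the azure-mgmt-trafficmanager SDK: a weighted child profile round-robins across several hosting plans in one region, and a performance-routed parent profile references one such child per region as a nested endpoint. All resource names, IDs, and the /health probe path are hypothetical placeholders:

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.trafficmanager import TrafficManagerManagementClient

    client = TrafficManagerManagementClient(DefaultAzureCredential(), "<subscription-id>")

    # Child profile: weighted round robin across five hosting plans in West Europe.
    client.profiles.create_or_update(
        "my-rg", "tm-westeurope",
        {
            "location": "global",
            "traffic_routing_method": "Weighted",
            "dns_config": {"relative_name": "myapp-weu", "ttl": 30},
            "monitor_config": {"protocol": "HTTPS", "port": 443, "path": "/health"},
            "endpoints": [
                {
                    "name": f"plan-{i}",
                    "type": "Microsoft.Network/trafficManagerProfiles/azureEndpoints",
                    "target_resource_id": f"/subscriptions/<sub-id>/resourceGroups/my-rg"
                                          f"/providers/Microsoft.Web/sites/myapp-weu-{i}",
                    "weight": 1,
                }
                for i in range(1, 6)
            ],
        },
    )

    # Parent profile: routes each user to the best-performing region; each
    # endpoint is a nested reference to a regional child profile like the above.
    client.profiles.create_or_update(
        "my-rg", "tm-global",
        {
            "location": "global",
            "traffic_routing_method": "Performance",
            "dns_config": {"relative_name": "myapp", "ttl": 30},
            "monitor_config": {"protocol": "HTTPS", "port": 443, "path": "/health"},
            "endpoints": [
                {
                    "name": "westeurope",
                    "type": "Microsoft.Network/trafficManagerProfiles/nestedEndpoints",
                    "target_resource_id": "/subscriptions/<sub-id>/resourceGroups/my-rg"
                                          "/providers/Microsoft.Network/trafficManagerProfiles/tm-westeurope",
                    "endpoint_location": "West Europe",
                    "min_child_endpoints": 1,
                },
                # ...repeat with one nested endpoint per additional region
            ],
        },
    )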
Personally, unless I have specifically required Premium features (BizTalk etc.), my preference has always been to simply deploy more service plans. It is far more cost effective.
Related
I want to know if there is a commonly used approach for production environments where, once I reach the 125-instance limit of the Application Gateway v2 SKU, I can keep scaling, or at least keep the same performance. I tried looking at the Azure docs, but they don't seem to address this problem at all.
You could set up multiple App Gateways to expand your number of instances and load balance across them using Azure Traffic Manager. The reference architecture shows multiple regions, but Traffic Manager can also be used for resources in the same region. I don't know your backend architecture, so this may or may not work for you.
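As a rough sketch of that idea (Python, azure-mgmt-trafficmanager; all names and DNS labels are hypothetical), each App Gateway's public DNS name is registered as an external endpoint in a single weighted Traffic Manager profile:

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.trafficmanager import TrafficManagerManagementClient

    client = TrafficManagerManagementClient(DefaultAzureCredential(), "<subscription-id>")

    # One weighted profile spreading traffic evenly across two App Gateways.
    client.profiles.create_or_update(
        "my-rg", "tm-appgw",
        {
            "location": "global",
            "traffic_routing_method": "Weighted",
            "dns_config": {"relative_name": "myapp-gw", "ttl": 30},
            "monitor_config": {"protocol": "HTTPS", "port": 443, "path": "/health"},
            "endpoints": [
                {
                    "name": f"appgw-{i}",
                    "type": "Microsoft.Network/trafficManagerProfiles/externalEndpoints",
                    # DNS label of each gateway's public IP (hypothetical)
                    "target": f"myapp-gw{i}.eastus.cloudapp.azure.com",
                    "weight": 1,
                }
                for i in (1, 2)
            ],
        },
    )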
Need some pointers on how one could achieve a "true" multi-region setup for AADDS.
As per Microsoft's documentation, AADDS is "designed" to be "single-region". Although it provides some (arguable) redundancy by spinning up essentially two managed domain controllers, it does not take performance into account.
Microsoft recommends (and there isn't really any other way to do this) setting up VPNs or VNet peering in order to reach your AADDS from other regions, but this has a huge impact on performance, and also on actual redundancy (HA designs should be multi-region, IMO, and AADDS should be HA).
We're deploying Windows VMs in (at the time of writing this question) 10 regions, with AADDS in West Europe. We're seeing huge penalties for our apps that require/rely on LDAP (>10 s in some regions) for even the most basic LDAP queries with quite small return payloads.
Was hoping someone had figured out a way to mirror/cache AADDS in another region, like maybe adding a new worker DC or some black magic, so that VMs and services would connect more locally?
Cheers!
Azure AADDS multi-region support is already a requested feature and is currently in the works. However, there is no ETA to share at the moment. You can follow What's new in Azure Active Directory? for updates.
The only option for achieving geo-redundancy today is to deploy ADDS across multiple regions yourself, via IaaS VMs, VNet peering, and VPN gateways.
Also, for high availability, each Azure AD Domain Services managed domain includes two domain controllers. You don't manage or connect to these domain controllers; they're part of the managed service. If you deploy Azure AD Domain Services into a region that supports Availability Zones, the domain controllers are distributed across zones. In regions that don't support Availability Zones, they are distributed across Availability Sets. You have no configuration options or management control over this distribution.
According to the Azure AADDS FAQ documentation, AADDS does support failover to another geographic location.
You can follow this tutorial to create a replica set for your AADDS deployment.
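For illustration only, here is a rough Python sketch of what adding a replica set looks like at the ARM level, assuming the Microsoft.AAD/domainServices resource carries a replicaSets array of location/subnetId pairs. The names, subnet ID, and API version are placeholders, and the portal/PowerShell flow in the tutorial is the supported path:

    import requests
    from azure.identity import DefaultAzureCredential

    # Acquire an ARM token and address the managed domain resource directly.
    token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
    headers = {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}
    url = ("https://management.azure.com/subscriptions/<sub-id>/resourceGroups/<rg>"
           "/providers/Microsoft.AAD/domainServices/<domain-name>?api-version=2021-05-01")

    # Read the current replica sets and append one in the new region; the
    # subnet must be in a VNet reachable from the managed domain.
    domain = requests.get(url, headers=headers).json()
    replica_sets = domain["properties"].get("replicaSets", [])
    replica_sets.append({
        "location": "australiaeast",
        "subnetId": "/subscriptions/<sub-id>/resourceGroups/<rg>"
                    "/providers/Microsoft.Network/virtualNetworks/aadds-vnet-aue"
                    "/subnets/aadds-subnet",
    })
    requests.patch(url, headers=headers,
                   json={"properties": {"replicaSets": replica_sets}})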
We have a standard 3-tier web application that needs to be migrated to the cloud (more of a VM-based lift-and-shift than cloud native at this point).
I'm wondering which factors I should consider when deciding whether Azure Scale Sets or Azure Availability Sets should be used for the web and application tiers.
Answers to questions like these might help me decide:
Can an availability set autoscale like a scale set?
Is there any overhead to either option for a simple web application?
Will both need a load balancer in front of them?
Any suggestions please?
You can refer to the N-tier architecture on virtual machines. Each tier consists of two or more VMs placed in an availability set or VM scale set. A load balancer is used to distribute requests across the VMs in a tier. Each tier is also placed inside its own subnet, with NSG rules to restrict access to each tier and route tables applied to individual tiers.
For your questions:
No. The main difference is that a scale set has identical VMs, which makes it easy to add or remove VMs from the set, whereas an availability set does not require them to be identical. An availability set is spread across fault domains, each of which shares a set of hardware components, which means that having more than one VM across different fault domains reduces the chance of losing all your VMs in the event of a hardware failure in the host or rack. A regional (non-zonal) scale set uses placement groups, which act as an implicit availability set with five fault domains and five update domains. Refer to this question.
It's recommended to use VM Scale Sets for autoscaling. VMSS can automatically create VMs and integrate with an Azure Load Balancer or Application Gateway (see the autoscale sketch after these answers).
Yes, both need an Azure Load Balancer in front of them.
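To make the autoscale point concrete, here is a minimal, hypothetical sketch (Python, azure-mgmt-monitor) of a CPU-based autoscale setting for a scale set: scale out by one VM above 70% average CPU, scale back in below 30%. All names, IDs, and thresholds are placeholders:

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.monitor import MonitorManagementClient

    monitor = MonitorManagementClient(DefaultAzureCredential(), "<subscription-id>")
    vmss_id = ("/subscriptions/<sub-id>/resourceGroups/my-rg/providers"
               "/Microsoft.Compute/virtualMachineScaleSets/web-vmss")

    monitor.autoscale_settings.create_or_update(
        "my-rg", "web-autoscale",
        {
            "location": "westeurope",
            "target_resource_uri": vmss_id,
            "profiles": [{
                "name": "cpu-based",
                "capacity": {"minimum": "2", "maximum": "10", "default": "2"},
                "rules": [
                    {   # scale out: average CPU > 70% over 5 minutes
                        "metric_trigger": {
                            "metric_name": "Percentage CPU",
                            "metric_resource_uri": vmss_id,
                            "time_grain": "PT1M", "statistic": "Average",
                            "time_window": "PT5M", "time_aggregation": "Average",
                            "operator": "GreaterThan", "threshold": 70,
                        },
                        "scale_action": {"direction": "Increase", "type": "ChangeCount",
                                         "value": "1", "cooldown": "PT5M"},
                    },
                    {   # scale in: average CPU < 30% over 5 minutes
                        "metric_trigger": {
                            "metric_name": "Percentage CPU",
                            "metric_resource_uri": vmss_id,
                            "time_grain": "PT1M", "statistic": "Average",
                            "time_window": "PT5M", "time_aggregation": "Average",
                            "operator": "LessThan", "threshold": 30,
                        },
                        "scale_action": {"direction": "Decrease", "type": "ChangeCount",
                                         "value": "1", "cooldown": "PT5M"},
                    },
                ],
            }],
        },
    )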
Generally speaking, neither scenario offers a way to magically make this happen, so you are kinda forced to use Web Apps if you want minimum overhead.
Yes it can, but you need to pre-stage the VMs.
Yes, you need to configure the VMs, and for VMSS you need automation so that scaling can happen automatically.
Yes, both will need a load balancer (Web Apps will not).
But your app might not work on Web Apps, so you are kinda forced to use VMs or VMSS.
I have a website hosted in Azure, which is globally load balanced across 3 different Azure data centres.
We can see from the following DNS check that requests coming from the US resolve to my West US data centre, and requests in and around Europe go to my European DC. South East Asia goes to East Asia fine, but the whole of Australia gets routed to the US.
https://www.whatsmydns.net/#A/www.whatsonglobal.com
Being an Australian resident, and I'm sure for our Australian customers, this isn't great, especially since it's currently adding an extra 3 seconds of load time to the homepage.
How do I fix this without having to choose a different load balancer? I like the simplicity of Azure Traffic Manager, but only if it's up to scratch.
Patrick, first off, the whatsmydns.net URL shows that not all of Australia is going to the US: 2 locations are going to Europe, and 1 location is going to the US.
Azure constantly probes the LDNS servers around the world from all datacenters and regularly updates its performance tables in order to route users to the 'closest/fastest' datacenter. The fastest datacenter is usually determined by the routing and peering relationships between ISPs, so it may not always be the geographically closest datacenter.
Most likely your users are getting faster performance from the website selected by the WATM endpoint than they would from any other, but you can validate this by browsing directly to the website URLs. If you find that WATM is not sending users to the fastest datacenter, you can open a support incident to have the Azure team investigate the routing and latency tables.
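A quick, unscientific way to run that check from an affected client is to time a direct request to each regional deployment; the hostnames below are hypothetical stand-ins for the three sites:

    import time
    import requests

    # Hypothetical direct URLs for the three regional deployments.
    endpoints = {
        "West US":   "https://myapp-westus.azurewebsites.net/",
        "Europe":    "https://myapp-europe.azurewebsites.net/",
        "East Asia": "https://myapp-eastasia.azurewebsites.net/",
    }

    for region, url in endpoints.items():
        start = time.perf_counter()
        requests.get(url, timeout=10)  # first hit includes DNS + TLS setup
        print(f"{region}: {time.perf_counter() - start:.3f}s")

Run it a few times, since the first request to each host also pays DNS and TLS setup costs; if the region WATM picked isn't the fastest one here, that's the evidence to attach to a support incident.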
Patrick - the behaviour is most likely due to there being no local Azure footprint (yet) in Australia. There are CDN endpoints located in Australia, and that's about it right now.
If you've got a site hosted in Azure then it will be in Singapore, West US, or one of the other existing regions anyway, so the latency for users in Australia wouldn't be affected by hitting Traffic Manager in the US.
I am reading the explanation of Availability Sets on Microsoft's website but can't 100% understand the concept.
http://www.windowsazure.com/en-us/documentation/articles/manage-availability-virtual-machines/
There are many questions people ask in the comments, but no technical support from Microsoft is there to answer them.
As I understand it, with availability sets you duplicate your VM with the IIS application and your VM with SQL, which means you have to use (and pay for) 4 VMs instead of 2. This means that whenever the IIS1 virtual machine is down, the website will still be online thanks to the IIS2 virtual machine, and vice versa? The same goes for the SQL1 and SQL2 virtual machines?
Am I heading in the right direction? If so, how do I keep the data synchronized between SQL1 and SQL2, and the code between IIS1 and IIS2, so that the website stays up with the latest data and code if one VM is down for updates?
An availability set combines two concepts from the Windows Azure PaaS world - upgrade domains and fault domains - that help make a service more robust. When several VMs are deployed into an availability set, the Windows Azure fabric controller distributes them among several upgrade domains and fault domains.
A fault domain represents a grouping of VMs with a single point of failure - a convenient (although not precisely accurate) way to think of it is a rack with a single top-of-rack router. By deploying the VMs into different fault domains, the fabric controller ensures that a single failure will not take the entire service offline.
The fabric controller uses upgrade domains to control the manner in which host OS upgrades (i.e., of the underlying physical servers) are performed. It performs these upgrades one upgrade domain at a time, moving on to the next upgrade domain only when the upgrade of the preceding one has completed. This ensures that the service remains available, although at reduced capacity, during a host OS upgrade. These upgrades appear to happen every month or two, and services in which all VMs are deployed into availability sets receive no warning, since they are supposedly resilient to the upgrade. Microsoft does provide warning of upgrades to subscriptions containing VMs deployed outside availability sets.
Furthermore, there is no SLA for services which have VMs deployed outside availability sets.
As regards SQL Server, you may want to look into SQL Server Availability Groups, which sit on top of Windows Server Failover Clustering and use synchronous replication of the data. For IIS, you may want to look at deploying your application into a PaaS cloud service, since that provides significant advantages over an IaaS cloud service. You can create a service topology integrating PaaS and IaaS cloud services through the use of a VNET.
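To make the mechanics concrete, here is a minimal sketch (Python, azure-mgmt-compute, with hypothetical names and region) that creates one availability set per tier for the IIS1/IIS2 and SQL1/SQL2 pairs from your question; the fabric controller then spreads each pair across the fault and update domains described above:

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.compute import ComputeManagementClient

    compute = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

    # One availability set per tier: IIS1/IIS2 share one, SQL1/SQL2 the other.
    for name in ("iis-avset", "sql-avset"):
        compute.availability_sets.create_or_update(
            "my-rg", name,
            {
                "location": "westeurope",
                "platform_fault_domain_count": 2,   # separate racks/power
                "platform_update_domain_count": 5,  # separate host-OS upgrade batches
                "sku": {"name": "Aligned"},         # required for VMs with managed disks
            },
        )

    # Each VM must reference its availability set when it is created; a VM
    # cannot be moved into an availability set afterwards.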
An availability set is a combination of these two features:
Fault Domains (you can select a maximum of 3 when creating a new availability set)
Update Domains (you can select a maximum of 20 when creating a new availability set)
A fault domain is a physical grouping (such as a rack or power supply). Say you selected 2 fault domains in your availability set: the machines in that set will be assigned the values 1 and 2, so at least one can stay available in the event of a power failure affecting either physical group.
An update domain is a set of machines that the Azure platform will update at the same time.
If you select 4 update domains and your 2 VMs have the values 2 and 3, that means they will not be updated together during any planned maintenance.
For high availability, duplicate VMs should not be in the same fault domain or the same update domain.
Note that you cannot change the availability set after a VM is created; it must be set at creation time.