Does the Azure Isolated App Service plan provide dedicated (not shared) hardware?

I know that the Azure Isolated App Service plan provides several benefits, like:
Network-level isolation
More worker units
Better performance
but I wonder whether the isolation is also at the hardware level or not.

With Azure App Service Environment version 3 (ASEv3), you can deploy on a dedicated host group. Note that host group deployments are not zone redundant for ASEv3.
Azure Dedicated Host is a service that provides physical servers - able to host one or more virtual machines - dedicated to one Azure subscription. Dedicated hosts are the same physical servers used in our data centers, provided as a resource. You can provision dedicated hosts within a region, availability zone, and fault domain. Then, you can place VMs directly into your provisioned hosts, in whatever configuration best meets your needs.
This is NOT available with ASEv2.
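To make the dedicated-host option concrete, here is a minimal sketch of an ASEv3 deployment onto a dedicated host group through the ARM REST API. The subscription ID, resource group, VNet, and the api-version are placeholder assumptions, and the `dedicatedHostCount` property should be verified against the current Microsoft.Web/hostingEnvironments schema before use.

```python
# Hedged sketch: create an ASEv3 on dedicated hosts via the ARM REST API.
# All names below are placeholders; verify the schema and api-version.
import requests
from azure.identity import DefaultAzureCredential

SUB, RG, ASE_NAME = "<subscription-id>", "my-rg", "my-asev3"

token = DefaultAzureCredential().get_token(
    "https://management.azure.com/.default").token
url = (f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
       f"/providers/Microsoft.Web/hostingEnvironments/{ASE_NAME}"
       "?api-version=2022-03-01")  # assumed api-version; check for the latest
body = {
    "kind": "ASEV3",
    "location": "westeurope",
    "properties": {
        # A non-zero dedicatedHostCount requests deployment onto a dedicated
        # host group. Remember: dedicated host deployments are not zone redundant.
        "dedicatedHostCount": 2,
        "virtualNetwork": {
            "id": f"/subscriptions/{SUB}/resourceGroups/{RG}"
                  "/providers/Microsoft.Network/virtualNetworks/my-vnet"
                  "/subnets/ase-subnet"
        },
    },
}
resp = requests.put(url, json=body,
                    headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
```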
Reference:
https://learn.microsoft.com/en-us/azure/app-service/environment/overview
https://learn.microsoft.com/en-us/azure/virtual-machines/dedicated-hosts


When to choose Azure VM Scale Sets and Azure Virtual Desktop host pools

I am basically looking for a brief explanation of when to choose Azure VM Scale Sets versus Azure Virtual Desktop host pools.
You create Azure VM Scale Sets when you want to build large-scale services that are highly available. Using VM scale sets lets you create and manage a group of load-balanced VMs and maintain a consistent configuration across all instances.
Scale sets can automatically increase the number of VM instances as application demand increases, then reduce the number of VM instances as demand decreases. This helps reduce costs and creates Azure resources efficiently, matching your customers' demand.
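As a sketch of that autoscale behavior, the rule below scales a VM scale set out by one instance when average CPU exceeds 70%. It uses the Microsoft.Insights/autoscaleSettings ARM resource; all names and the api-version are placeholder assumptions.

```python
# Hedged sketch: attach a CPU-based autoscale rule to a VM scale set.
import requests
from azure.identity import DefaultAzureCredential

SUB, RG = "<subscription-id>", "my-rg"
VMSS_ID = (f"/subscriptions/{SUB}/resourceGroups/{RG}"
           "/providers/Microsoft.Compute/virtualMachineScaleSets/my-vmss")

token = DefaultAzureCredential().get_token(
    "https://management.azure.com/.default").token
url = (f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
       "/providers/Microsoft.Insights/autoscaleSettings/my-autoscale"
       "?api-version=2022-10-01")  # assumed api-version
body = {
    "location": "westeurope",
    "properties": {
        "enabled": True,
        "targetResourceUri": VMSS_ID,
        "profiles": [{
            "name": "cpu-based",
            "capacity": {"minimum": "2", "maximum": "10", "default": "2"},
            "rules": [{
                # Scale out by one instance when average CPU > 70% over 5 minutes.
                "metricTrigger": {
                    "metricName": "Percentage CPU",
                    "metricResourceUri": VMSS_ID,
                    "timeGrain": "PT1M", "statistic": "Average",
                    "timeWindow": "PT5M", "timeAggregation": "Average",
                    "operator": "GreaterThan", "threshold": 70,
                },
                "scaleAction": {
                    "direction": "Increase", "type": "ChangeCount",
                    "value": "1", "cooldown": "PT5M",
                },
            }],
        }],
    },
}
requests.put(url, json=body,
             headers={"Authorization": f"Bearer {token}"}).raise_for_status()
```

A matching "Decrease" rule with a lower threshold would complete the scale-in side.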
#mmking is right that host pools are specific to WVD. A host pool is a collection of identical virtual machines within a Windows Virtual Desktop environment.
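For contrast with the scale set example above, here is a hedged sketch of creating a WVD host pool through its ARM resource type. The names and api-version are assumptions made to illustrate the shape of the resource, not verified values.

```python
# Hedged sketch: create a pooled WVD host pool via
# Microsoft.DesktopVirtualization/hostPools. Placeholders throughout.
import requests
from azure.identity import DefaultAzureCredential

SUB, RG = "<subscription-id>", "my-rg"
token = DefaultAzureCredential().get_token(
    "https://management.azure.com/.default").token
url = (f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
       "/providers/Microsoft.DesktopVirtualization/hostPools/my-hostpool"
       "?api-version=2022-09-09")  # assumed api-version
body = {
    "location": "westeurope",
    "properties": {
        "hostPoolType": "Pooled",           # identical session hosts shared by users
        "loadBalancerType": "BreadthFirst", # spread sessions across hosts
        "preferredAppGroupType": "Desktop",
    },
}
requests.put(url, json=body,
             headers={"Authorization": f"Bearer {token}"}).raise_for_status()
```

The session hosts themselves are then registered into the pool, whereas with a scale set the VMs are the scaling unit you manage directly.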
Additional resources:
Virtual machine scale sets
WVD host pools

Azure AADDS multi region

Need some pointers on how one could achieve a "true" multi-region setup for AADDS.
As per Microsoft's documentation, AADDS is "designed" to be single-region. Although it provides some (arguable) redundancy by spinning up essentially two managed domain controllers, it does not take performance into account.
Microsoft recommends (and there isn't really any other way to do this) setting up VPNs or VNet peering in order to access your AADDS from other regions, but this has a huge impact on performance, and also on actual redundancy (HA designs should be multi-region, in my opinion, and AADDS should be HA).
We're deploying Windows VMs in (at the time of writing this question) 10 regions, with AADDS in West Europe. We're seeing huge latency penalties for our apps that rely on LDAP (>10 s in some regions), even for the most basic LDAP queries with quite a small return payload.
I was hoping someone had figured out a way to mirror/cache AADDS in a new region, like maybe adding a new worker DC or some black magic, so that VMs and services would connect more locally?
Cheers!
Azure AADDS multi-region support is already a requested feature and is currently in the works. However, there is no ETA to share at the moment. You can follow What's new in Azure Active Directory? for updates.
The only option to achieve geo-redundancy is by deploying ADDS across multiple regions via IaaS VMs, VNet peering, and VPN gateways.
Also, for high availability, each Azure AD Domain Services managed domain includes two domain controllers. You don't manage or connect to these domain controllers, they're part of the managed service. If you deploy Azure AD Domain Services into a region that supports Availability Zones, the domain controllers are distributed across zones. In regions that don't support Availability Zones, the domain controllers are distributed across Availability Sets. You have no configuration options or management control over this distribution.
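For the VNet peering piece mentioned above, here is a minimal sketch of peering a spoke VNet in another region to the hub VNet that hosts AADDS (or the IaaS domain controllers). The names and api-version are placeholder assumptions.

```python
# Hedged sketch: peer a remote spoke VNet to the hub VNet hosting the DCs.
import requests
from azure.identity import DefaultAzureCredential

SUB, RG = "<subscription-id>", "my-rg"
HUB_VNET_ID = (f"/subscriptions/{SUB}/resourceGroups/{RG}"
               "/providers/Microsoft.Network/virtualNetworks/hub-vnet")

token = DefaultAzureCredential().get_token(
    "https://management.azure.com/.default").token
url = (f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
       "/providers/Microsoft.Network/virtualNetworks/spoke-vnet"
       "/virtualNetworkPeerings/spoke-to-hub"
       "?api-version=2023-04-01")  # assumed api-version
body = {
    "properties": {
        "remoteVirtualNetwork": {"id": HUB_VNET_ID},
        "allowVirtualNetworkAccess": True,  # let spoke VMs reach the DCs
        "allowForwardedTraffic": True,
    },
}
requests.put(url, json=body,
             headers={"Authorization": f"Bearer {token}"}).raise_for_status()
```

A matching peering must be created in the opposite direction (hub to spoke) for traffic to flow.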
According to the Azure AADDS FAQ documentation, failover to another geographic location is supported via replica sets.
You can follow this tutorial page to create a replica set for your AADDS deployment.
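As a rough illustration of what that tutorial produces, the sketch below adds a second replica set to an existing AADDS managed domain by updating the Microsoft.AAD/domainServices resource. The property names follow the documented schema, but the api-version, update flow, and all resource names are placeholder assumptions to be checked against the tutorial.

```python
# Hedged sketch: add a replica set in a second region to an AADDS domain.
import requests
from azure.identity import DefaultAzureCredential

SUB, RG = "<subscription-id>", "my-rg"
token = DefaultAzureCredential().get_token(
    "https://management.azure.com/.default").token
url = (f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
       "/providers/Microsoft.AAD/domainServices/aadds.contoso.com"
       "?api-version=2022-12-01")  # assumed api-version
body = {
    "properties": {
        "replicaSets": [
            # Existing (initial) replica set.
            {"location": "westeurope",
             "subnetId": f"/subscriptions/{SUB}/resourceGroups/{RG}"
                         "/providers/Microsoft.Network/virtualNetworks/eu-vnet"
                         "/subnets/aadds"},
            # New replica set; its VNet must be peered to the first one.
            {"location": "eastus",
             "subnetId": f"/subscriptions/{SUB}/resourceGroups/{RG}"
                         "/providers/Microsoft.Network/virtualNetworks/us-vnet"
                         "/subnets/aadds"},
        ]
    }
}
requests.put(url, json=body,
             headers={"Authorization": f"Bearer {token}"}).raise_for_status()
```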

Is it normal to have two Azure App Services with the same IP?

I am new to Azure.
I have multiple web apps in my Azure subscription. Strangely, I have found that two of them have the same external IP. They are not sharing any resources with each other. How is this possible, and how do I change it, if there is a way?
Azure App Services are deployed to scale units. A scale unit is a collection of servers inside an Azure datacenter; a scale unit can contain 1,000 servers or more. Servers inside a scale unit can have different roles. The most important are the worker role and the front-end role: servers with the worker role run customers' applications, while servers with the front-end role act as load balancers and distribute incoming requests to the applications running on the worker-role servers.
It is important to note here that each scale unit has only one inbound virtual IP address. This means that applications running in an App Service plan share that IP not only with the other applications in the same plan, but also with applications from other customers whose apps run inside the same scale unit.
For SSL connections, usually SNI (Server Name Indication) is used, which is supported by all major web browsers.
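You can observe both points with a few lines of standard-library Python: two app hostnames on the same scale unit resolve to the same inbound IP, and TLS still presents the right certificate because the client sends the hostname via SNI. The hostnames here are placeholders.

```python
# Demo: shared inbound IP plus SNI-based certificate selection.
import socket
import ssl

app_a = "my-first-app.azurewebsites.net"   # placeholder hostnames
app_b = "my-second-app.azurewebsites.net"

# Both names can resolve to the same virtual IP of the scale unit.
print(socket.gethostbyname(app_a), socket.gethostbyname(app_b))

# SNI: the hostname travels in the TLS ClientHello (server_hostname), so the
# front end can pick the right certificate even though the IP is shared.
ctx = ssl.create_default_context()
with socket.create_connection((app_a, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=app_a) as tls:
        print(tls.getpeercert()["subject"])
```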
Now if you want to get a dedicated IP address for your web app, there are two ways you can achieve this:
When using a custom domain, you can bind a certificate with IP SSL to your app service. In this case, the app service generates a static IP address for you and you have to remap the A record of your custom domain to this new address. Beware that your IP address can change when you delete this binding.
Use an App Service Environment, which enables you to run your apps in your own Azure Virtual Network. To make use of this you need to pay for an Isolated App Service plan, which can be quite costly.
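For the first of the two options above, a hedged sketch of the IP-based SSL binding looks like this. The app name, custom domain, certificate thumbprint, and api-version are all placeholders; verify against the Microsoft.Web/sites/hostNameBindings reference.

```python
# Hedged sketch: bind a certificate with IP-based SSL (dedicated inbound IP).
import requests
from azure.identity import DefaultAzureCredential

SUB, RG, APP = "<subscription-id>", "my-rg", "my-app"
HOSTNAME = "www.contoso.com"  # custom domain (placeholder)

token = DefaultAzureCredential().get_token(
    "https://management.azure.com/.default").token
url = (f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
       f"/providers/Microsoft.Web/sites/{APP}/hostNameBindings/{HOSTNAME}"
       "?api-version=2022-03-01")  # assumed api-version
body = {
    "properties": {
        "sslState": "IpBasedEnabled",  # IP SSL rather than SNI SSL
        "thumbprint": "<certificate-thumbprint>",
    },
}
requests.put(url, json=body,
             headers={"Authorization": f"Bearer {token}"}).raise_for_status()
# Remember: the dedicated IP can change if this binding is later deleted.
```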

How does failover work when the primary VM scale set gets restarted?

Above is a sample configuration for Azure Service Fabric.
I created it with the wizard, deployed one ASP.NET Core application, and I am able to access it from outside.
As shown in the image, the Service Fabric cluster is accessed via sfclustertemp.westus2.cloudapp.azure.com, and I am able to reach the application at
sfclustertemp.westus2.cloudapp.azure.com/api/values.
Now, if I restart the primary VM scale set, it should transfer the load to the secondary one. I assumed this would happen automatically, but it does not, because the second load balancer has a different DNS name. (If I specify the other DNS name, the application is accessible.)
My understanding is that the cluster has one ID, so it is common to both load balancers.
Is such a configuration possible?
Maybe you could use Azure Traffic Manager with health probes.
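A minimal sketch of that idea: a priority-routed Traffic Manager profile with a health probe in front of the two load balancer DNS names. The profile name, relative DNS name, probe path, second endpoint target, and api-version are placeholder assumptions.

```python
# Hedged sketch: Traffic Manager profile failing over between two endpoints.
import requests
from azure.identity import DefaultAzureCredential

SUB, RG = "<subscription-id>", "my-rg"
token = DefaultAzureCredential().get_token(
    "https://management.azure.com/.default").token
url = (f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
       "/providers/Microsoft.Network/trafficManagerProfiles/sf-tm"
       "?api-version=2022-04-01")  # assumed api-version
body = {
    "location": "global",
    "properties": {
        "trafficRoutingMethod": "Priority",  # fail over in endpoint order
        "dnsConfig": {"relativeName": "sfclustertemp-tm", "ttl": 30},
        "monitorConfig": {"protocol": "HTTP", "port": 80,
                          "path": "/api/values"},  # assumed health probe path
        "endpoints": [
            {"name": "primary",
             "type": "Microsoft.Network/trafficManagerProfiles/externalEndpoints",
             "properties": {
                 "target": "sfclustertemp.westus2.cloudapp.azure.com",
                 "priority": 1}},
            {"name": "secondary",
             "type": "Microsoft.Network/trafficManagerProfiles/externalEndpoints",
             "properties": {
                 "target": "<second-load-balancer-dns>.westus2.cloudapp.azure.com",
                 "priority": 2}},
        ],
    },
}
requests.put(url, json=body,
             headers={"Authorization": f"Bearer {token}"}).raise_for_status()
```

Clients would then use the single sfclustertemp-tm.trafficmanager.net name, and Traffic Manager steers them to whichever endpoint is healthy.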
However, instead of using multiple node types for fail-over options during reboot, have a look at 'Durability tiers'. Using Silver or Gold will have the effect that reboots are performed sequentially on machine groups (grouped by fault domain), instead of all at once.
The durability tier is used to indicate to the system the privileges that your VMs have with the underlying Azure infrastructure. In the primary node type, this privilege allows Service Fabric to pause any VM level infrastructure request (such as a VM reboot, VM reimage, or VM migration) that impact the quorum requirements for the system services and your stateful services.
There is a misconception about what an SF cluster is.
On your diagram, the part you describe on the left as 'Service Fabric' does not belong there.
Service Fabric is nothing more than applications and services deployed on the cluster nodes. When you create a cluster, you define a primary node type, and that is where Service Fabric will deploy the services used for managing the cluster.
A node type is formed by:
A VM scale set: machines with the OS and SF services installed
A load balancer with DNS name and IP, forwarding requests to the VM scale set
So what you describe there should be represented as:
NodeTypeA (Primary)
Load Balancer (cluster domain + IP)
VM Scale Set
SF management services (explorer, DNS)
Your applications
NodeTypeB
Load Balancer (other dns + IP)
VM Scale Set
Your applications
Given that:
First: if the primary node type goes down, you will lose your cluster, because the management services won't be available to manage your service instances.
Second: you shouldn't rely on node types for this kind of reliability; you should increase the reliability of your cluster by adding more nodes to the node types.
Third: if the concern is a datacenter outage, you could:
Create a custom cluster that spans multiple regions
Add a reverse proxy or API gateway in front of your service to route requests to wherever your service is, as in the sketch below.
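Here is a minimal, standard-library sketch of that gateway idea: try the primary region first and fall back to the secondary when it is unreachable. The URLs are placeholders; a production gateway (YARP, nginx, API Management, and so on) would use real health checks and retry policies instead.

```python
# Minimal fallback-routing sketch for the reverse proxy / gateway idea.
import urllib.request
import urllib.error

REGIONS = [
    "http://sfclustertemp.westus2.cloudapp.azure.com",    # primary
    "http://<secondary-dns>.westus2.cloudapp.azure.com",  # secondary (placeholder)
]

def fetch(path: str) -> bytes:
    """Route the request to the first region that answers."""
    for base in REGIONS:
        try:
            with urllib.request.urlopen(base + path, timeout=3) as resp:
                return resp.read()
        except (urllib.error.URLError, OSError):
            continue  # region unreachable, try the next one
    raise RuntimeError("no healthy region")

print(fetch("/api/values"))
```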

Agent VMs of different sizes in Azure Container Service

Is there any way to have VMs of different sizes in the private agent pool of an Azure Container Service (ACS)? I would like to support use cases where some services require compute-intensive servers and others (e.g., databases) require memory-intensive servers.
An acceptable solution could be to add multiple virtual machine scale sets (VMSS) as private agent pools, each of them with a different VM size, since a VMSS supports only one VM size. Is such a feature supported in ACS?
A workaround could be to have different VM sizes in the public and private agent pools. However, this is not a best practice, since the public agent pool should be used to host services that are exposed publicly (e.g., marathon-lb). It also limits the options to just two pools.
This feature is coming, and if you need it today you can use acs-engine (the open-source code behind ACS). See the examples at https://github.com/Azure/acs-engine/tree/master/examples/largeclusters
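As an illustration of the acs-engine approach, the script below writes an apimodel.json with two private agent pools of different VM sizes; each pool becomes its own scale set/availability set. The pool names, VM sizes, and DNS prefix are example values, and the complete, current schema should be taken from the acs-engine repository.

```python
# Illustrative acs-engine apimodel with heterogeneous agent pools.
import json

apimodel = {
    "apiVersion": "vlabs",
    "properties": {
        "orchestratorProfile": {"orchestratorType": "DCOS"},
        "masterProfile": {"count": 3, "dnsPrefix": "mycluster",
                          "vmSize": "Standard_D2_v2"},
        "agentPoolProfiles": [
            # One pool per VM size: compute-heavy and memory-heavy workloads
            # each get an appropriately sized pool.
            {"name": "computepool", "count": 3, "vmSize": "Standard_F8"},
            {"name": "memorypool",  "count": 3, "vmSize": "Standard_E8_v3"},
        ],
        "linuxProfile": {"adminUsername": "azureuser",
                         "ssh": {"publicKeys": [{"keyData": "<ssh-public-key>"}]}},
    },
}

with open("apimodel.json", "w") as f:
    json.dump(apimodel, f, indent=2)
```

Feeding this file to acs-engine generates the ARM templates for a cluster with both pools.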
