I have two kinds of services deployed on Azure, and they need to communicate with each other. So far I see two options:
1. Put them in the same VNet.
2. Put them in separate VNets and create a VNet-to-VNet connection between them.
My question is: is there any performance/bandwidth limitation with option 2 compared to option 1?
I would say approach 1 is better if the load in your VNet is low.
In approach 2, communication depends on the following factors:
1. Whether the two VNets are in the same region. If they are, you will see lower latency.
2. Which VNet gateway SKU you have selected: a high-performance gateway provides more throughput than the Basic and Standard SKUs.
Azure point-to-site VPN connections are reliable only when the amount of data to be transferred is small (< 1 TB) and the VNets are in the same region.
Also consider that with option 2 you will incur egress data charges for traffic leaving the VNet.
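If you do go with option 2, wiring it up looks roughly like the sketch below, which uses the azure-mgmt-network Python SDK and assumes both VNets already have VPN gateways deployed; all resource names, the region, and the shared key are placeholders:

```python
# Minimal sketch: create a VNet-to-VNet connection between two existing
# VPN gateways. Assumes azure-identity and azure-mgmt-network are installed.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import VirtualNetworkGatewayConnection

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Both gateways must already exist; their SKU drives the attainable bandwidth.
gw1 = client.virtual_network_gateways.get("my-rg", "vnet1-gateway")
gw2 = client.virtual_network_gateways.get("my-rg", "vnet2-gateway")

client.virtual_network_gateway_connections.begin_create_or_update(
    "my-rg",
    "vnet1-to-vnet2",
    VirtualNetworkGatewayConnection(
        location="westeurope",
        connection_type="Vnet2Vnet",      # VNet-to-VNet rather than site-to-site
        virtual_network_gateway1=gw1,
        virtual_network_gateway2=gw2,
        shared_key="<shared-key>",        # placeholder; must match on both connections
    ),
).result()
```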
If your workloads are similar, or even in separate tiers (web vs. app vs. data, for example), you can segregate them with subnets and NSGs. You don't need to separate them into different VNets.
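As a minimal sketch of that subnet-plus-NSG segregation (azure-mgmt-network Python SDK; the subnet ranges, names, and port are assumptions), the NSG below would then be associated with the data-tier subnet:

```python
# Minimal sketch: only the app subnet may reach the data tier on 1433;
# an explicit deny overrides the default AllowVnetInBound rule.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import NetworkSecurityGroup, SecurityRule

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

client.network_security_groups.begin_create_or_update(
    "my-rg",
    "data-tier-nsg",
    NetworkSecurityGroup(
        location="westeurope",
        security_rules=[
            SecurityRule(
                name="allow-app-to-data",
                priority=100,
                direction="Inbound",
                access="Allow",
                protocol="Tcp",
                source_address_prefix="10.0.2.0/24",       # app subnet (assumed)
                source_port_range="*",
                destination_address_prefix="10.0.3.0/24",  # data subnet (assumed)
                destination_port_range="1433",
            ),
            SecurityRule(
                name="deny-other-vnet-traffic",
                priority=200,
                direction="Inbound",
                access="Deny",
                protocol="*",
                source_address_prefix="VirtualNetwork",    # service tag
                source_port_range="*",
                destination_address_prefix="*",
                destination_port_range="*",
            ),
        ],
    ),
).result()
```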
I was told recently that a hub VNet is only used when there are on-premises connectivity considerations.
I was quite surprised, as were many others at the table.
I was under the impression that even in an Azure cloud-only environment I could still take a hub-spoke approach. Or is this not so? What would be the preferred non-hub-spoke approach if peering or inter-VNet access is required?
I am aware of VNet peering and other methods of accessing resources in other VNets, such as APIs and Private Link.
The hub-spoke approach works great in some cloud-only scenarios, although in most docs and architectural patterns Microsoft shows it together with on-premises connectivity.
I have used it frequently to share resources such as ACR or Log Analytics, or simply to host a jump host (with Bastion) for accessing resources in other networks.
Another common scenario is the Azure Monitor Private Link Scope, where the hub-spoke topology is recommended:
https://learn.microsoft.com/en-us/azure/azure-monitor/logs/private-link-design#hub-and-spoke-networks
In an Azure cloud-only environment you can still take a hub-spoke approach, and it is the recommended one.
While you can cross-peer spokes into a mesh so they can exchange data (in a non-hub scenario), this becomes complicated as the number of spokes increases: you have to configure 1:n peering in every VNet, so a full mesh of n spokes needs n(n-1)/2 peering links.
With the hub-spoke model you have to route spoke-to-spoke traffic via the hub VNet, but the advantage is that the hub becomes the single point of entry for the environment, and you can deploy resources there (such as a custom DNS server or a firewall) that are shared and used by all other VNets.
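For illustration, one hub-to-spoke peering pair looks roughly like this with the azure-mgmt-network Python SDK (names are placeholders); in a hub-spoke topology you repeat this once per spoke instead of once per spoke pair:

```python
# Minimal sketch: peer a hub VNet with one spoke. Peering is directional,
# so each pair needs two peering resources.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import SubResource, VirtualNetworkPeering

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

hub = client.virtual_networks.get("my-rg", "hub-vnet")
spoke = client.virtual_networks.get("my-rg", "spoke1-vnet")

client.virtual_network_peerings.begin_create_or_update(
    "my-rg", "hub-vnet", "hub-to-spoke1",
    VirtualNetworkPeering(
        remote_virtual_network=SubResource(id=spoke.id),
        allow_virtual_network_access=True,
        allow_forwarded_traffic=True,  # needed if spoke-to-spoke traffic transits the hub
    ),
).result()

client.virtual_network_peerings.begin_create_or_update(
    "my-rg", "spoke1-vnet", "spoke1-to-hub",
    VirtualNetworkPeering(
        remote_virtual_network=SubResource(id=hub.id),
        allow_virtual_network_access=True,
        allow_forwarded_traffic=True,
    ),
).result()
```

Note that routing spoke-to-spoke traffic through a firewall in the hub additionally requires user-defined routes on the spoke subnets.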
I am having difficulty understanding Azure availability sets, specifically what exactly I need to do to ensure the app running on my VM takes advantage of availability sets and becomes more available.
Let's say I am creating an application that runs on a single VM and I want to make it more resistant to hardware failure.
Option 1:
I create an availability set with 2 fault domains and then create a VM in this availability set.
Is that it?
If there is a hardware failure on the rack hosting my VM, does Azure now take care of ensuring the VM stays up and running?
Option 2:
I have to have two servers, VM1 and VM2, both in the availability set, but one in fault domain 1 and one in fault domain 2.
I then have to set up a cluster of sorts for my application. In this case the availability set simply ensures that the two servers in my cluster are not on the same hardware, but the plumbing to make sure the application can take advantage of two servers and is highly available is still down to me.
Is option 1 or option 2 the correct description of how availability sets work in relation to fault domains?
Appreciate any clarity that can be provided.
Azure deals with hardware failure in two ways: availability sets and availability zones. Availability sets are about making sure your app does not go down even if a hardware failure happens within a data center (a zone) itself. Availability zones are about making sure your app does not go down even if a whole data center (zone) goes down. More details here.
To understand best practices around availability, take a look at the guidance; the best practices specific to VMs can be found here.
A single VM instance is defined as follows (reference):
"Single Instance" is defined as any single Microsoft Azure Virtual Machine that either is not deployed in an Availability Set or has only one instance deployed in an Availability Set.
So one VM, whether in an availability set or not, makes no difference. You need at least two VMs in an availability set, spread across fault domains and update domains; Azure then takes care of placing the VMs on separate hardware so that a single failure does not take your app down.
One VM in an availability set is effectively no better than a VM with no availability set.
If you place two or more VMs in an availability set and they are identical, you can add a load balancer to distribute traffic.
You can also use an availability set without a load balancer if you are not interested in traffic distribution, for example where you want to switch to a secondary VM only when the primary is unavailable.
Also, understand that the VMs in an availability set are not required to be identical.
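To make that concrete, here is a minimal sketch of creating an availability set with the azure-mgmt-compute Python SDK (names, region, and domain counts are placeholders). Note that the set only controls placement; any application-level clustering or failover is still up to you, as described in option 2:

```python
# Minimal sketch: create an availability set, then reference it from
# each VM at creation time so Azure spreads them across domains.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient
from azure.mgmt.compute.models import AvailabilitySet, Sku

client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

avset = client.availability_sets.create_or_update(
    "my-rg",
    "my-avset",
    AvailabilitySet(
        location="westeurope",
        platform_fault_domain_count=2,   # separate racks/power/network switches
        platform_update_domain_count=5,  # separate planned-maintenance groups
        sku=Sku(name="Aligned"),         # required when the VMs use managed disks
    ),
)

# Each VM then points at the set in its create parameters, e.g.
#   availability_set=SubResource(id=avset.id)
# VMs cannot be moved into a set after creation.
```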
A virtual machine scale set is also a good option if you are looking for a high-availability solution built on VMs.
We have a standard 3-tier web application that needs to be migrated to the cloud (more of a VM-based lift and shift than cloud native at this point).
I am wondering which factors I should consider when deciding whether an Azure scale set or an Azure availability set should be used for the web and application tiers.
Answers to questions like these would probably help me decide:
Can an availability set autoscale like a scale set?
Is there any overhead to using either option for a simple web application?
Will both need a load balancer in front of them?
Any suggestions, please?
You can refer to the N-tier architecture on virtual machines. Each tier consists of two or more VMs placed in an availability set or VM scale set. A load balancer distributes requests across the VMs in a tier. Each tier is also placed inside its own subnet, with NSG rules restricting access to each tier and route tables applied to individual tiers.
For your questions:
No. The main difference is that a scale set has identical VMs, which makes it easy to add or remove VMs from the set, whereas an availability set does not require them to be identical. An availability set is spread across fault domains, each of which shares a set of hardware components, which means that having more than one VM in different fault domains of a set reduces the chance of losing all your VMs in the event of a hardware failure in the host or rack. A regional (non-zonal) scale set uses placement groups, which act as an implicit availability set with five fault domains and five update domains. Refer to this question.
It's recommended to use VM scale sets for autoscaling; a VMSS can automatically create and integrate with an Azure load balancer or Application Gateway. A sketch of an autoscale rule follows below.
Yes, both need an Azure load balancer in front of them.
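To make the autoscaling point concrete, here is a rough sketch of a CPU-based autoscale rule attached to an existing scale set, using the azure-mgmt-monitor Python SDK; the resource IDs, thresholds, and capacities are all placeholders:

```python
# Minimal sketch: add one instance when average CPU over 5 minutes
# exceeds 70%, between 2 and 10 instances.
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient
from azure.mgmt.monitor.models import (
    AutoscaleProfile, AutoscaleSettingResource, MetricTrigger,
    ScaleAction, ScaleCapacity, ScaleRule,
)

vmss_id = ("/subscriptions/<subscription-id>/resourceGroups/my-rg"
           "/providers/Microsoft.Compute/virtualMachineScaleSets/web-vmss")

client = MonitorManagementClient(DefaultAzureCredential(), "<subscription-id>")

client.autoscale_settings.create_or_update(
    "my-rg",
    "web-vmss-autoscale",
    AutoscaleSettingResource(
        location="westeurope",
        target_resource_uri=vmss_id,
        enabled=True,
        profiles=[AutoscaleProfile(
            name="cpu-based",
            capacity=ScaleCapacity(minimum="2", maximum="10", default="2"),
            rules=[ScaleRule(
                metric_trigger=MetricTrigger(
                    metric_name="Percentage CPU",
                    metric_resource_uri=vmss_id,
                    time_grain=timedelta(minutes=1),
                    statistic="Average",
                    time_window=timedelta(minutes=5),
                    time_aggregation="Average",
                    operator="GreaterThan",
                    threshold=70,
                ),
                scale_action=ScaleAction(
                    direction="Increase",
                    type="ChangeCount",
                    value="1",
                    cooldown=timedelta(minutes=5),
                ),
            )],
        )],
    ),
)
```

An availability set has no equivalent of this; scaling it means creating or deleting VMs yourself.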
Generally speaking, neither option offers a way to magically make this happen, so you are kind of forced to use Web Apps if you want minimum overhead.
Yes, it can, but you need to pre-stage VMs.
Yes: you need to configure the VMs, and for a VMSS you need automation so that scaling can happen automatically.
Yes, both will need a load balancer (Web Apps will not).
But your app might not work with Web Apps, so you are somewhat forced to use VMs or VMSSes.
Sorry if this is vague... I am currently looking at an Azure architecture design that has three VNets. I would like traffic from each VNet to pass through a firewall server.
Basically, I am trying to figure out whether one VM can be part of three virtual networks without multiple NICs, or whether Azure doesn't support this at all yet.
A VM can be part of three subnets in a single virtual network if it has three NICs, so at minimum you would need an A4/Extra Large VM, which has a four-NIC capacity.
You could then link the VNets together to create a logical grouping.
But it is not possible to have a single VM in multiple VNets.
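For illustration with the current azure-mgmt-compute Python SDK (rather than the classic A-series model above), a multi-NIC VM is declared roughly as below; the NIC IDs are placeholders for interfaces created beforehand, each in a different subnet of the same VNet:

```python
# Minimal sketch: one VM with three NICs in three subnets of one VNet.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient
from azure.mgmt.compute.models import (
    HardwareProfile, ImageReference, NetworkInterfaceReference,
    NetworkProfile, OSProfile, StorageProfile, VirtualMachine,
)

client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

nic_id = ("/subscriptions/<subscription-id>/resourceGroups/my-rg"
          "/providers/Microsoft.Network/networkInterfaces/{}")

client.virtual_machines.begin_create_or_update(
    "my-rg",
    "multi-nic-vm",
    VirtualMachine(
        location="westeurope",
        # The VM size must support enough NICs (this one supports four).
        hardware_profile=HardwareProfile(vm_size="Standard_D3_v2"),
        network_profile=NetworkProfile(network_interfaces=[
            NetworkInterfaceReference(id=nic_id.format("nic-subnet1"), primary=True),
            NetworkInterfaceReference(id=nic_id.format("nic-subnet2")),
            NetworkInterfaceReference(id=nic_id.format("nic-subnet3")),
        ]),
        storage_profile=StorageProfile(image_reference=ImageReference(
            publisher="Canonical",
            offer="0001-com-ubuntu-server-jammy",
            sku="22_04-lts-gen2",
            version="latest",
        )),
        os_profile=OSProfile(
            computer_name="multinicvm",
            admin_username="azureuser",
            admin_password="<password>",  # placeholder
        ),
    ),
).result()
```

All of the NICs must live in the same VNet; spanning VNets requires peering or a gateway instead.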
I have three Azure VMs, two of which sit behind a load-balanced endpoint. The third needs to communicate with this load-balanced endpoint via the LB's public IP address. In Azure, will this traffic exit the datacenter? My assumption is that it would get to the edge of the datacenter network but travel no further, and so is "relatively" secure. Unfortunately, I can't seem to find any documentation that confirms this.
Is my assumption correct and can anyone provide validating documentation?
Traffic within a region will not leave the datacenter and will not incur charges. There is a research paper at http://research.microsoft.com/pubs/80693/vl2-sigcomm09-final.pdf that formed the basis of the new Q10 Azure network architecture. It doesn't specifically talk about Azure endpoints, but you can see that the architecture allows for efficient routing of traffic purely within the datacenter.
You can also watch http://channel9.msdn.com/Events/TechEd/NorthAmerica/2012/AZR302, where Mark discusses the new Q10 architecture starting at 27:30. Where he discusses storage clusters, you can think of them the same way as the public IP for your service, since both work the same way at the network layer.
You can also see http://davidpallmann.blogspot.com/2010/08/hidden-costs-in-cloud-part-2-windows.html, which is not official documentation, but does describe the traffic and costs.
Accessing a VM by its external IP address from another VM inside the datacenter will effectively exit the datacenter, and this will result in outbound traffic charges and some latency.