Hub VNet approach in a cloud-only environment - Azure

I was told recently that a hub VNet is only used when there are on-premises networking considerations (traffic to or from on-prem).
I was quite surprised, as were many others at the table.
I was under the impression that even in, say, an Azure cloud-only environment I could still use a hub-spoke approach. Or is this not so? What would be the preferred non-hub-spoke approach if peering or inter-VNet access is required?
I am aware of VNet peering and other methods of accessing resources in other VNets, such as APIs and Private Link.

The hub-spoke approach works well in cloud-only environments, although in most docs and architectural patterns Microsoft shows it together with on-prem connectivity.
I have used it frequently to share resources like ACR or Log Analytics, or simply to host a jump box (with Bastion) for accessing resources in other networks.
One of the most common scenarios is the Azure Monitor Private Link Scope, where the hub-spoke topology is explicitly recommended:
https://learn.microsoft.com/en-us/azure/azure-monitor/logs/private-link-design#hub-and-spoke-networks

In an Azure cloud-only environment you can still use a hub-spoke approach, and it is the recommended one.
While you can cross-peer spokes to form a mesh so they can exchange data directly (a non-hub scenario), this becomes complicated as the number of spokes grows: a full mesh of n VNets needs n(n-1)/2 peering links, meaning every VNet has to be peered with all n-1 others.
With the hub-spoke model you have to route spoke-to-spoke traffic via the hub VNet, but the advantage is that the hub becomes the single entry point for the environment, and you can deploy shared resources there that all other VNets use (such as a custom DNS server or a firewall).
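As a minimal sketch of what that looks like in practice (hypothetical names throughout, using the azure-mgmt-network Python SDK, and assuming the hub and spoke VNets already exist), each spoke gets a pair of directional peerings with the hub:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

# Hypothetical names; substitute your own subscription, resource group, and VNets.
SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"
RG = "rg-network"
HUB = "vnet-hub"
SPOKES = ["vnet-spoke-1", "vnet-spoke-2"]

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)
hub_id = client.virtual_networks.get(RG, HUB).id

for spoke in SPOKES:
    spoke_id = client.virtual_networks.get(RG, spoke).id

    # Peering is directional: one link on the hub side...
    client.virtual_network_peerings.begin_create_or_update(
        RG, HUB, f"{HUB}-to-{spoke}",
        {
            "remote_virtual_network": {"id": spoke_id},
            "allow_virtual_network_access": True,
            "allow_forwarded_traffic": True,  # let the hub relay spoke-to-spoke traffic
        },
    ).result()

    # ...and the matching link on the spoke side.
    client.virtual_network_peerings.begin_create_or_update(
        RG, spoke, f"{spoke}-to-{HUB}",
        {
            "remote_virtual_network": {"id": hub_id},
            "allow_virtual_network_access": True,
            "allow_forwarded_traffic": True,
        },
    ).result()
```

Note that the peerings alone only connect each spoke to the hub; for spoke-to-spoke traffic to actually transit the hub, you also need user-defined routes in the spokes pointing at the firewall/NVA in the hub.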

Related

How to connect overlapping VNets in Azure?

I am working on an Azure-based networking solution.
We have a typical hub-and-spoke VNet topology. The hub VNet connects to the on-prem DC via ExpressRoute, and the spoke VNets peer with the hub. An Azure Firewall in the hub filters traffic between the hub-spoke and hub-on-prem segments (GREEN in the diagram).
We have a bizarre requirement to add a new isolated VNet (RED in the diagram) that will have IPs overlapping with the existing network (GREEN). We want to allow workloads in this new VNet to access private apps deployed in the hub or on-prem.
I need help on how to achieve this connectivity.
Note: We don't want to set up any VPN between the new VNet and Hub
As you might appreciate, this is a general networking limitation more than an Azure one. If we want two networks with overlapping IP addresses to communicate, we need networking devices between them that perform some form of network address translation, so that the IP addresses appear different to the communicating hosts. Below is an example from the Azure documentation.
Logically you have two options here:
Deploy your own network virtual appliance and configure routes between the subnets so traffic transits the appliance, which performs the translation (a conceptual sketch of this translation follows below).
Use the managed service from Azure; in this case, that is the Azure VPN Gateway.
I saw your note above about not wanting to use any VPN devices. That said, generally speaking it is usually a better option from an availability and supportability perspective to use the built-in offering rather than hand-rolling your own virtual appliance with iptables, a Windows NAT router, or something similar. Hope this clarifies.
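To make the translation concrete, here is a pure-illustration sketch (hypothetical prefixes, Python standard library only) of the static one-to-one prefix NAT such an appliance or gateway would perform, presenting hosts from the overlapping RED space to GREEN under a distinct translated prefix:

```python
import ipaddress

# Hypothetical prefixes: RED overlaps GREEN's space, so RED is exposed via a translated range.
RED_REAL = ipaddress.ip_network("10.1.0.0/16")           # actual RED address space (overlaps GREEN)
RED_TRANSLATED = ipaddress.ip_network("192.168.0.0/16")  # how GREEN sees RED hosts

def translate(addr: str) -> str:
    """Map a real RED address to its NAT'd equivalent by swapping the prefix
    and keeping the host bits: the static one-to-one NAT a gateway applies."""
    host_bits = int(ipaddress.ip_address(addr)) - int(RED_REAL.network_address)
    return str(ipaddress.ip_address(int(RED_TRANSLATED.network_address) + host_bits))

print(translate("10.1.4.7"))  # -> 192.168.4.7: GREEN routes here, and NAT rewrites it back
```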
It is not possible to peer virtual networks with overlapping IP address spaces. This is documented here. You will have to move to a different address space and move or recreate resources under the new address space.
If it helps, you can take a look at this checklist before moving resources.

Azure Networking - Application GW, Virtual Network GW, VWAN, ExpressRoute, Private Link, Arc

Can anyone explain the difference between Azure Application Gateway, Virtual Network Gateway, Virtual WAN, ExpressRoute, Arc, and Private Link, please?
It seems to me these services are all pretty similar, helping to connect on-prem to Azure, Azure to Azure, or the public internet to Azure.
They're similar in that they all involve network traffic, but that's pretty much where the similarities end.
Application Gateway is a Layer 7 load balancing service with advanced features like SSL termination. It's used to route client requests to your applications.
Virtual Network Gateway is a VPN gateway for point-to-site (user) and site-to-site (office/datacenter) VPN connections to your own Azure VNETs. This would, for example, allow you to RDP into Azure VMs from your on-prem office using their private IPs.
ExpressRoute is similar to site-to-site VPN; however, it doesn't use IPsec tunnels. It's a dedicated, unencrypted connection from your location directly into Microsoft's backbone (i.e., you don't traverse the public internet). There's no encryption, and the connection is faster. This is a service you need to implement together with a third-party connectivity provider.
Virtual WAN is more like a networking hub where many site-to-site, point-to-site, ExpressRoute, etc. connections span a wide area (as the name implies). This would be for large enterprise organizations with many on-prem locations.
Arc is a means of adding your on-prem resources to Azure for management, e.g. you have a physical server somewhere and you want to manage it through ARM/the portal.
Azure Private Link is a feature of many Azure services (Storage, SQL PaaS, etc.) that allows you to create a private DNS record and assign a private IP address in your internal VNets. This is used when you want to disable all public network access to a resource and allow access only from within your own VNets.
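As a rough sketch of how Private Link is typically wired up (hypothetical names and IDs; assumes an existing storage account and subnet, using the azure-mgmt-network Python SDK), creating a private endpoint looks something like this:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

# Hypothetical IDs; substitute your own subscription, subnet, and storage account.
SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"
SUBNET_ID = ("/subscriptions/.../resourceGroups/rg-network/providers/"
             "Microsoft.Network/virtualNetworks/vnet-app/subnets/snet-endpoints")
STORAGE_ID = ("/subscriptions/.../resourceGroups/rg-data/providers/"
              "Microsoft.Storage/storageAccounts/mystorageacct")

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# The private endpoint gets a NIC with a private IP in your subnet; combined with a
# private DNS zone, the service's hostname then resolves to that private IP.
client.private_endpoints.begin_create_or_update(
    "rg-network", "pe-storage-blob",
    {
        "location": "westeurope",
        "subnet": {"id": SUBNET_ID},
        "private_link_service_connections": [{
            "name": "blob-connection",
            "private_link_service_id": STORAGE_ID,
            "group_ids": ["blob"],  # the target sub-resource ("blob" for Blob storage)
        }],
    },
).result()
```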
I have barely scratched the surface of the differences here, but suffice it to say, there are many. On the page below, you can type a service name into the search to get more specific details on each offering. Hope this helps.
https://learn.microsoft.com/en-us/search/?terms=networking%20in%20azure

Azure - CentOS - Secure VM to VM communication

We are setting up infrastructure in Azure and planning to implement a hub-and-spoke model with one hub and two spokes. The requirements are:
— Spoke-to-spoke communication must be secure: communication between spokes (VNet-to-VNet) will happen through VNet peering, and we found that MACsec encryption at Layer 2 is applied by Azure by default. I think we are good on this; correct me if I am missing something. Reference
— Communication between VMs inside a VNet must be secured: we see some guidance from Azure for Windows VMs to use SMB 3.0 to secure data in transit between VMs inside a VNet. However, we don't see any recommendation or guidance from Azure on securing data in transit between Linux (CentOS) VMs within a VNet. Any input on how to achieve this would be a great help.
Note: as per the article from Google (especially the diagram), VM-to-VM communication is encrypted by default inside a GCP VPC. We are looking for a similar feature in Azure. Please share inputs on how to achieve this.

Does an Azure VNet affect performance

Does an Azure VNet improve or degrade performance compared to a connection via a public endpoint?
By performance I mean latency or throughput.
For example when connecting from a web app to a database.
If the communicating resources are in the same VNet, or in peered VNets in the same Azure region, there is no degradation.
On the other hand, if the peered VNets are in different Azure regions, there will be some degradation, because the peers are in different datacenters.
VNets are primarily used to add an additional layer of security. They do not offer performance benefits, but if you use VNet connectivity in any part of your application, you need to be aware of the correct configuration to avoid unnecessary degradation.
Let's say, for example, you have a simple web app made up of an App Service instance and a SQL database. If you connect your app to a VNet to access some on-premises resource (via VPN or ExpressRoute), and that VNet is configured with forced tunnelling to on-premises, then you will see degradation: traffic from the web app to the SQL database gets hairpinned via your on-premises network. If you then set up a service endpoint for the SQL database on your VNet, the traffic stays in Azure and you get optimal routing (source). However, it won't be any faster than if you had no VNet at all.
For a more detailed explanation take a look at this blog: Improve security and performance with Virtual Network Service Endpoints and Firewalls for Azure Storage
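If you want to sanity-check the routing impact on your own setup, a crude TCP connect probe is enough to compare configurations before and after a change. A minimal sketch (hypothetical server name; this measures connect time only, not query latency):

```python
import socket
import statistics
import time

def tcp_connect_ms(host: str, port: int, samples: int = 10) -> float:
    """Median TCP connect time in milliseconds: a rough proxy for network RTT."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass
        timings.append((time.perf_counter() - start) * 1000)
    return statistics.median(timings)

# Hypothetical server name; run once before and once after changing the VNet /
# service-endpoint configuration and compare the numbers.
print(tcp_connect_ms("myserver.database.windows.net", 1433))
```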

Segmentation of Azure Subnet for applications

We manage large environments in Azure with multiple customers. We are redesigning them, and as part of that we want to manage traffic within multiple shared subnets, such as app, web, and db subnets.
Essentially, no two different applications inside a shared subnet (such as db) should be able to communicate with each other.
By default, resources in different subnets of the same VNet can communicate with each other, so you need to use an Azure network security group (NSG) to filter network traffic to and from Azure resources in a virtual network or subnet.
Application security groups enable you to configure network security as a natural extension of an application's structure, allowing you to group virtual machines and define network security policies based on those groups. You can reuse your security policy at scale without manual maintenance of explicit IP addresses. To learn more, see Application security groups.
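As a minimal sketch of that idea (hypothetical names and IDs; assumes the ASGs already exist and the VMs' NICs are assigned to them, using the azure-mgmt-network Python SDK), a rule denying one application's db tier from reaching another's inside the same shared subnet could look like:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

# Hypothetical IDs/names; substitute your own.
SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"
RG = "rg-shared"
ASG_APP1 = "/subscriptions/.../applicationSecurityGroups/asg-app1-db"
ASG_APP2 = "/subscriptions/.../applicationSecurityGroups/asg-app2-db"

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Deny app1's db tier from reaching app2's db tier, even though both sit in the
# same 'db' subnet; membership is expressed via ASGs instead of IP addresses.
client.security_rules.begin_create_or_update(
    RG, "nsg-db-subnet", "deny-app1-to-app2",
    {
        "protocol": "*",
        "access": "Deny",
        "direction": "Inbound",
        "priority": 100,
        "source_port_range": "*",
        "destination_port_range": "*",
        "source_application_security_groups": [{"id": ASG_APP1}],
        "destination_application_security_groups": [{"id": ASG_APP2}],
    },
).result()
```

The advantage of keying the rule on ASGs rather than address prefixes is that new VMs inherit the policy as soon as their NICs join the group, with no rule changes.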
For PaaS services like Azure App Service or Azure SQL Database, you can use VNet integration to reach VNet resources over a private network, or use virtual network service endpoints and rules for servers in Azure SQL Database.
For more information, see:
https://learn.microsoft.com/en-us/azure/architecture/reference-architectures/dmz/secure-vnet-dmz
https://learn.microsoft.com/en-us/azure/networking/networking-overview
