IPv6 support in Azure other than the load balancer

I have two questions regarding Azure and IPv6. I understand that you can assign private IPv6 addresses to VMs and then connect them to a load balancer with a public IPv6 address, but is there any way to use IPv6 with other resources in Azure, such as cloud services, virtual networks, Application Gateway, NSGs, VPN Gateway, App Services, SQL databases, and SQL DWH?
Also, I see that it says you cannot upgrade VMs to IPv6 and you need to create new ones. Does that mean that you would only have to recreate VMs that were created before IPv6 support was released? Or does it mean you have to recreate every VM that wasn't created specifically with the IPv6 feature?

is there any way to use IPv6 with other resources in Azure, such as cloud services, virtual networks, Application Gateway, NSG, VPN Gateway, App Services, SQL databases, and SQL DWH?
Unfortunately, we can't use IPv6 without a load balancer right now, because public IPv6 addresses cannot be assigned to a VM; they can only be assigned to a load balancer.
The IPv6 addresses of the VMs are private, so an IPv6 Internet client cannot communicate directly with them; the internet-facing load balancer routes the IPv6 packets to the private IPv6 addresses of the VMs using network address translation (NAT).
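To make that concrete, here is a rough sketch using the Python azure-mgmt-network SDK. The resource group, names, location and subscription ID are placeholders of mine (not from the question), and the exact parameter shapes should be checked against the current SDK documentation:

```python
# Hedged sketch: a public IPv6 address can back a load balancer frontend,
# while the VMs behind it only get private IPv6 addresses.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# 1. Create the public IPv6 address (placeholder names throughout).
ipv6_pip = client.public_ip_addresses.begin_create_or_update(
    "my-rg",
    "my-ipv6-pip",
    {
        "location": "westeurope",
        "public_ip_address_version": "IPv6",
        "public_ip_allocation_method": "Dynamic",
    },
).result()

# 2. Reference it from the load balancer's frontend IP configuration.
client.load_balancers.begin_create_or_update(
    "my-rg",
    "my-lb",
    {
        "location": "westeurope",
        "frontend_ip_configurations": [
            {"name": "ipv6-frontend", "public_ip_address": {"id": ipv6_pip.id}}
        ],
    },
).result()

# Attaching the same public IPv6 address to a VM NIC's ipConfiguration is
# rejected; Internet clients reach the VMs only through the load balancer.
```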
Does that mean that you would only have to recreate VMs that were created before IPv6 support was released? Or does it mean you have to recreate every VM that wasn't created specifically with the IPv6 feature?
No; if we want to use IPv6, we should deploy a new load balancer and new VMs, because we can't add existing VMs to the availability set used by the load balancer. So
you may not upgrade existing VMs to use IPv6 addresses. You must deploy new VMs.

Related

Azure load balancer with IPv6 and IPv4 frontend support

Currently my LB has an IPv4 frontend address and one backend pool with 5 VMs with private IPv4 addresses.
We would like to add IPv6 support to our Service Fabric cluster. I found this article: https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-ipv6-overview and I see a lot of "Currently not supported" notes.
The IPv6 address is assigned to the LB, but I cannot make rules:
Failed to save load balancer rule 'rulename'. Error: Frontend ipConfiguration '/subscriptions/...' referring to PublicIp with PublicIpAddressVersion 'IPv6' does not match with PrivateIpAddressVersion 'IPv4' referenced by backend ipConfiguration '/subscriptions/...' for the load balancer rule '/subscriptions/...'.
When I try to add a new backend pool, I get this message:
One basic SKU load balancer can only be associated with one virtual machine scale set at any point of time
Questions:
When can we expect support for multiple LBs in front of one VMSS?
Is it possible to add IPv6 frontend without adding IPv6 to the backend (NAT64?)?
Is it possible to add IPv6 addresses to an existing VM scale set without recreating it?
I'm not sure I understand you exactly; it seems those limitations are described in that article.
For your questions:
I guess you mean mapping multiple LB frontends to one backend pool. If so, the same frontend port and protocol can be reused across multiple frontends, since each rule must produce a flow with a unique combination of destination IP address and destination port. You can get more details about multiple frontend configurations with Load Balancer.
It is not possible. The IP version of the frontend IP address must match the IP version of the target network IP configuration (a sketch of a matching rule follows after these answers).
NAT64 (translation of IPv6 to IPv4) is not supported.
It is not possible. A VM scale set is essentially a group of load-balanced VMs; there are a few differences between a VM and a VMSS, which you can refer to. Also, if a network interface has a private IPv6 address assigned to it, you must add (attach) it to the VM when you create the VM. Read the network interface constraints.
You may not upgrade existing VMs to use IPv6 addresses. You must
deploy new VMs.
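To make the IP-version constraint concrete, this is roughly what a valid rule body looks like when everything is IPv6. It is only a hedged sketch with placeholder resource IDs, not your cluster's configuration:

```python
# The IP version of the rule's frontend must match the IP version of the
# backend pool's NIC ipConfigurations; there is no NAT64 in between.
ipv6_rule = {
    "name": "ipv6-rule",
    "protocol": "Tcp",
    "frontend_port": 443,
    "backend_port": 443,
    "frontend_ip_configuration": {"id": "<IPv6 frontend ipConfiguration id>"},
    "backend_address_pool": {"id": "<pool whose NIC ipConfigurations are IPv6>"},
}
# Pairing an IPv6 frontend with an IPv4-only backend pool (or vice versa)
# produces exactly the "PublicIpAddressVersion 'IPv6' does not match ...
# PrivateIpAddressVersion 'IPv4'" error quoted in the question.
```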

Control outbound IP address of internal VMSS in Azure

I have a VMSS/Service Fabric cluster on an internal VNet (not public). The only inbound connections to the VMSS are from on-premises through an Azure VPN Gateway.
How do I control the outbound IP address the VMSS uses when accessing the internet? In this case I do not want this traffic routed through a random IP address or through the VPN connection.
Basically I want to secure my Azure SQL so that the outbound internet IPs of the VMSS are whitelisted, and I don't want to add all Azure datacenter IPs.
You could look at using Forced Tunneling, which would ensure that you control where data egress occurs in your on-premises environment; however, this would force all Internet-bound traffic in your Virtual Network back over your VPN connection, which may not be desirable (or helpful if you don't control egress from there).
Failing this, you could add a software-based firewall running on an Azure VM with a public IP to the same VNet, use User Defined Routes (UDRs) to force all Internet-bound traffic through it, and then whitelist that public IP address in your SQL firewall.
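As a rough sketch of the UDR approach using the Python azure-mgmt-network SDK (names, IPs and subscription ID are placeholders; verify parameter shapes against the SDK docs):

```python
# Hedged sketch: a route table that sends all Internet-bound traffic to a
# firewall NVA's private IP, then associated with the VMSS subnet.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

route_table = client.route_tables.begin_create_or_update(
    "my-rg",
    "egress-via-firewall",
    {
        "location": "westeurope",
        "routes": [
            {
                "name": "default-to-nva",
                "address_prefix": "0.0.0.0/0",
                "next_hop_type": "VirtualAppliance",
                "next_hop_ip_address": "10.0.2.4",  # private IP of the firewall VM
            }
        ],
    },
).result()

# Associate the route table with the subnet that hosts the VMSS, keeping the
# rest of the subnet definition unchanged.
subnet = client.subnets.get("my-rg", "my-vnet", "vmss-subnet")
subnet.route_table = route_table
client.subnets.begin_create_or_update("my-rg", "my-vnet", "vmss-subnet", subnet)
```

The firewall VM's public IP is then the single outbound address you whitelist in the Azure SQL firewall.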
Longer term you will be able to connect Azure SQL DB to VNets (or at least restrict access to it from one); see the UserVoice site (and add your vote!).

Having on-prem IPs point to Azure VMs

I have a case where I want to migrate on-premises servers to Azure, but the local IPs should still point to these VMs. By local IPs I mean the country's IP range, since these VMs must be reachable via in-country IPs for regulatory reasons.
I heard that this is possible, but I have no idea what type of resources I should use to allow this: VNet, VPN, ExpressRoute? And how to do it, as I have no experience in networking whatsoever.
Regards,
NAT is a method of remapping one IP address space into another by modifying the network address information in IP packet headers while they are in transit across a routing device.
You can set up a site-to-site VPN between on-premises and the Azure VNet, then deploy a server on-premises to act as the NAT device.
It is possible, but with some complications and constraints:
You can run these servers/VMs in Azure using their public IP addresses. You need to create the virtual network using these address ranges, but it is possible (a sketch follows after these points). The catch is that these public IP addresses are only accessible via cross-premises connectivity solutions such as Azure VPN Gateway or Azure ExpressRoute. You cannot access these VMs using their "public" IP addresses directly over the Internet. For this purpose, these public IP address ranges are really treated as "private" addresses.
Once you create the virtual network with the public IP addresses (as private address space) in Azure, you will also need to make sure routing in the on-premises network is configured correctly to forward the traffic to these VMs over the VPN tunnels, or over the MPLS/WAN network if you are using ExpressRoute.
If these servers/VMs need to accept requests directly from the Internet, the traffic from the Internet will still arrive at your on-premises network, because that is where your ISPs direct it. You will need to ensure this traffic is routed correctly over the cross-premises connectivity (VPN/ExpressRoute) to Azure.
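To illustrate the first point, the virtual network is simply created with the public range as its address space. The prefix below is a documentation range standing in for your real allocation, and the call is only a hedged sketch:

```python
# Hedged sketch: the organisation's public range is used as the VNet address
# space, reachable only over VPN/ExpressRoute rather than from the Internet.
vnet_params = {
    "location": "westeurope",
    "address_space": {"address_prefixes": ["203.0.113.0/24"]},
    "subnets": [{"name": "migrated-servers", "address_prefix": "203.0.113.0/26"}],
}
# client.virtual_networks.begin_create_or_update("my-rg", "country-range-vnet", vnet_params)
# On-premises routers must then forward traffic for 203.0.113.0/24 into the
# VPN tunnel or ExpressRoute circuit instead of out to the ISP.
```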
Hope this helps a bit. Please let me know if this answers your question.
Thanks,
Yushun [MSFT]

Azure Reserved IP Address Inconsistency

I had a need to add additional public IP addresses to an Azure VM and found a working solution here:
Azure VM: More than one Public IP
Essentially this creates a reserved IP in Azure and then adds the reserved IP to a cloud service. Once it's bound to a cloud service it can be mapped to a VM endpoint.
This all works great, but there is one bit I don't understand: the IP address of the reserved IP and that of the resulting VM endpoint don't match. I have to set up DNS to point to the IP address of the endpoint to make this work. Is there something I am not doing right, or is this just the way reserved IPs work?
It looks like this unanswered question is the same issue:
azure reserved IP for VM is diffrent than the given
Thanks!
The "Azure Cloud Service" is a container that provides internet connectivity to "Azure VMs". Thus, you assign the Internet facing Public IP to the Cloud Service. This article is relatively good at explaining the relationship: Azure Cloud Services
From above link:
Here’s a definition of an Azure IaaS cloud service that will make it easy for you to understand what it is in the context of Azure Infrastructure Services:
A cloud service is a network container where you can place virtual machines.
All virtual machines in that container can communicate with each other directly through Azure (and therefore don’t have to go out to the Internet to communicate with each other).
This container is also assigned a DNS name that is reachable from the Internet.
A rudimentary DNS server is created and can provide name resolution for all virtual machines within the same cloud service container (note that name resolution provided by the DNS server is only available to the virtual machines that are located within the cloud service).
One or more Virtual IP Addresses (VIPs) are assigned to the container and these IP addresses can be used to allow inbound connections from the Internet to the virtual machines.
Certain services (like FTP) may require your vm have a public IP: Azure VM Public IP
(IaaS v1) An Azure cloud service comes with a permanent DNS name - something.cloudapp.net - and has a single VIP allocated whenever there are VMs deployed in it or whenever a reserved IP address is associated with it. Traffic is either load balanced or NATted (port forwarded) to the VM from the Azure Load Balancer sitting on the VIP. You can also associate a public instance-level IP address (PIP) with a VM, which gives it an additional IP address. The VIP always has a DNS name (something.cloudapp.net) while the PIP has one only if you specifically add it; I did a post which goes into these differences.
(IaaS v2) VMs are not deployed into cloud services and only have a public IP address if one is specifically added - either by configuring a PIP on the NIC of the VM (and optionally giving it a cloudapp.azure.com DNS name) or by configuring a load balancer and either load balancing or NATting traffic to it. This load balancer is configured with a public IP address and can optionally have a cloudapp.azure.com DNS name associated with it. (Ignoring internal load balancers in this discussion.)

Azure VMs Virtual Network inter-communication

I'm new to Azure (strike 1) and totally suck at networking (strike 2).
Nevertheless, I've got two VMs up and running in the same virtual network; one will act as a web server and the other will act as a SQL database server.
While I can see that their internal IP addresses are both in the same network I'm unable to verify that the machines can communicate with each other and am sort of confused regarding the appropriate place to address this.
Microsoft's own documentation says
All virtual machines that you create in Windows Azure can
automatically communicate using a private network channel with other
virtual machines in the same cloud service or virtual network.
However, you need to add an endpoint to a machine for other resources
on the Internet or other virtual networks to communicate with it. You
can associate specific ports and a protocol to endpoints. Resources
can connect to an endpoint by using a protocol of TCP or UDP. The TCP
protocol includes HTTP and HTTPS communication.
So why can't the machines at least ping each other via their internal IPs? Is Windows Firewall getting in the way? I'm starting to wonder if I've chosen the wrong approach for a simple web server/database server setup. Please forgive my ignorance; any help would be greatly appreciated.
If both machines are in the same virtual network, just turn off Windows Firewall and they will be able to ping each other. Alternatively, allow all incoming ICMP traffic in Windows Firewall with Advanced Security.
However, there is a catch. Both machines will see each other by IP address, but there is no name resolution in a virtual network defined this way. That means you won't be able to ping by name, only by direct IP address. So, if you want your website (on VM1) to connect to SQL Server (on VM2), you have to address it by its full IP address, not its machine name (see the sketch at the end of this answer).
The only way to get name resolution within a virtual network is to use a dedicated DNS server that you maintain and configure yourself.
This article describes the name resolution scenarios in Windows Azure in detail. Your particular case is this:
Name resolution between virtual machines and role instances located in
the same virtual network, but different cloud services
You could potentially achieve name resolution by putting your VMs in the same cloud service; then you would not even require a dedicated virtual network.
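For example, the connection from the web VM to the SQL VM has to use the private IP rather than the machine name. A minimal sketch with a placeholder IP, database, driver and credentials of mine (adjust to whatever ODBC driver is installed):

```python
# Connect to SQL Server on the other VM by its private IP, since there is no
# name resolution between the two VMs.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=10.0.0.5,1433;"   # private IP of the SQL VM, not its machine name
    "DATABASE=mydb;"
    "UID=appuser;PWD=<password>"
)
print(conn.execute("SELECT @@VERSION").fetchone()[0])
```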
If your VMs are inside a virtual network in Azure, then you have to make sure of two things:
The required port is enabled.
The firewall is disabled on the server.
I was trying to connect from one VM to another VM where a SQL Server DB was installed. I had to enable port 1433 on the VM where SQL was installed; for this you need to add an MSSQL endpoint to the VM in the Azure management portal. After that I disabled Windows Firewall, and then I was able to connect to the VM from the other one.
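A quick way to check both prerequisites from the client VM is a plain TCP connect to port 1433; the IP below is a placeholder for the SQL VM's private address:

```python
import socket

try:
    # Succeeds only if the 1433 endpoint is open and Windows Firewall allows it.
    with socket.create_connection(("10.0.0.5", 1433), timeout=5):
        print("Port 1433 on the SQL VM is reachable")
except OSError as exc:
    print("Cannot reach the SQL VM on port 1433:", exc)
```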
