I am searching for a solution that gives my container instances static private IPs. I will also put an Application Gateway in front of them so that I have a static public IP as well.
I am checking https://stackoverflow.com/a/59168070/7267638 and it looks good up to the step "Add the private IP of the container instance into the backend pool of the application gateway". What is not clear to me is what to do when I restart the container, or add more containers in the meantime - they can end up with different private IPs.
I need them to be static not only to be able to configure the backend pool of the Gateway, but also for internal routing purposes. Without some kind of static configuration, I would have to reconfigure all services after a private IP change so they can find each other again.
Could I maybe use some kind of internal DNS, or address the containers by name, or something similar?
Static private IPs for ACI are (as of today) not supported. I don't think there is a real workaround here, other than checking whether the IP has changed after a container has been (re-)started.
Your best bet might be to use subnets of the minimum required size when putting ACI into a subnet - and only use one ACI per subnet. This way the chance that the IP actually changes is lower, but there is still no guarantee.
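As a rough sketch of what that could look like with the Azure SDK for Python (the subscription ID, resource group, VNet name and address prefix below are hypothetical placeholders, and this is not an officially supported way to pin the IP):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "my-rg"                # hypothetical
VNET_NAME = "my-vnet"                   # hypothetical

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Create a minimal /29 subnet delegated to ACI, intended to host a single
# container group so there is little room for the IP to move around.
client.subnets.begin_create_or_update(
    RESOURCE_GROUP,
    VNET_NAME,
    "aci-subnet-1",
    {
        "address_prefix": "10.0.1.0/29",
        "delegations": [
            {
                "name": "aci-delegation",
                "service_name": "Microsoft.ContainerInstance/containerGroups",
            }
        ],
    },
).result()
```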
I have been having the same issue and solved it with the alternative #silent mentions. I created a /29 subnet per Azure Container Instance I am hosting - the smallest subnet you can create on Azure, with 3 usable addresses (the other 5 are reserved). I register all three usable addresses in the Application Gateway backend pool, so that it can forward requests to whichever IP address the instance currently has; the built-in health probing takes care of this.
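A rough sketch of that backend-pool registration with the Azure SDK for Python (the subscription ID, resource group, gateway and pool names, and the 10.0.1.0/29 prefix are all hypothetical):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "my-rg"                # hypothetical
APPGW_NAME = "my-appgw"                 # hypothetical
POOL_NAME = "aci-backend-pool"          # hypothetical

# In a 10.0.1.0/29 subnet Azure reserves .0-.3 and .7, leaving .4, .5 and .6 usable.
usable_ips = ["10.0.1.4", "10.0.1.5", "10.0.1.6"]

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Read the current gateway configuration, replace the pool's addresses with
# all three usable subnet addresses, and write the configuration back.
appgw = client.application_gateways.get(RESOURCE_GROUP, APPGW_NAME)
for pool in appgw.backend_address_pools:
    if pool.name == POOL_NAME:
        pool.backend_addresses = [{"ip_address": ip} for ip in usable_ips]

client.application_gateways.begin_create_or_update(
    RESOURCE_GROUP, APPGW_NAME, appgw
).result()
```

The health probe then simply marks the addresses the instance is not currently using as unhealthy and routes traffic to the one that responds.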
I have implemented the following:
- An Azure Alert that monitors the ACI restart event
- The alert triggers an Azure Function
- The Azure Function keeps Azure Private DNS up to date with the latest IP
The function calls the API to get the new IP and then updates DNS. I have a short-lived TTL on the private DNS records. The zone is only visible within my VPN.
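A minimal sketch of what that function body could look like with the Azure SDK for Python; the subscription ID, resource group, container group, zone and record names are hypothetical placeholders, and the trigger wiring (alert to Azure Function) is omitted:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerinstance import ContainerInstanceManagementClient
from azure.mgmt.privatedns import PrivateDnsManagementClient

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "my-rg"                # hypothetical
CONTAINER_GROUP = "my-aci"              # hypothetical
DNS_ZONE = "internal.example.com"       # hypothetical private DNS zone
RECORD_NAME = "my-aci"                  # A record name inside the zone

def refresh_dns_record() -> None:
    credential = DefaultAzureCredential()

    # Look up the container group's current (possibly new) private IP.
    aci = ContainerInstanceManagementClient(credential, SUBSCRIPTION_ID)
    group = aci.container_groups.get(RESOURCE_GROUP, CONTAINER_GROUP)
    current_ip = group.ip_address.ip

    # Upsert an A record with a short TTL so clients pick up the change quickly.
    dns = PrivateDnsManagementClient(credential, SUBSCRIPTION_ID)
    dns.record_sets.create_or_update(
        RESOURCE_GROUP,
        DNS_ZONE,
        "A",
        RECORD_NAME,
        {"ttl": 30, "a_records": [{"ipv4_address": current_ip}]},
    )
```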
This is not a perfect solution, as it can mean 5 minutes of downtime.
However, I also have an Azure Application Gateway and 3 instances. It's unlikely that all three instances would restart at the same time, and if they did, downtime would be inevitable.
In my environment, a Container Instance in the backend pool of the Application Gateway changes its private IP when it is restarted.
As a result, communication from the frontend is interrupted every time it restarts.
Is there a way to keep communicating through the Application Gateway even if the private IP of the Container Instance changes?
For example, an activity log alert could detect the restart of the Container Instance, and an Automation runbook could then update the routing rule of the Application Gateway.
Thank you in advance!
I tested in my environment, and there was no change in the private IP address after the container instance restarted.
I found one SO thread that states the following:
When putting ACI into a subnet, your best bet may be to use subnets of
the smallest required size - and only use one ACI per subnet.
Except for checking if the IP has changed after a container has been
restarted, I don't believe there is a real workaround here.
How can I make the public link and the private endpoint link of an Azure Redis service work simultaneously?
Can we keep both working simultaneously, i.e. outside users connecting via the original public IP link and internal users connecting via the private endpoint link, to an Azure Redis instance or an Azure Storage account?
I have this kind of scenario
Here you somehow have to enforce that DNS resolution happens within your internal network, so that internal calls fetch the record created in the Private DNS Zone alongside the Private Endpoint.
Within your Virtual Network you can, for example, change the DNS servers. Be careful: this can have severe impacts, as you might also need to re-specify how to reach other resources within Azure (since Azure DNS at 168.63.129.16 is no longer used at that point). Also be aware of precedence if you have several DNS servers configured at different levels.
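As an illustration of the "change the DNS servers on the Virtual Network" part, here is a rough sketch with the Azure SDK for Python (subscription ID, resource group, VNet name and DNS server address are hypothetical):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "my-rg"                # hypothetical
VNET_NAME = "my-vnet"                   # hypothetical
CUSTOM_DNS = ["10.0.0.4"]               # hypothetical internal DNS server

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Read the existing VNet, swap in the custom DNS servers, and write it back.
vnet = client.virtual_networks.get(RESOURCE_GROUP, VNET_NAME)
vnet.dhcp_options = {"dns_servers": CUSTOM_DNS}
client.virtual_networks.begin_create_or_update(RESOURCE_GROUP, VNET_NAME, vnet).result()
```

Note that existing VMs only pick up the new DNS servers after a restart or a DHCP lease renewal.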
I have an application gateway which has a routing rule. The routing from the application gateway to the VM is based on FQDN (I use Azure Private DNS to internally map the FQDN to the VM's IP).
To switch traffic to a different VM (as part of an upgrade pipeline) I update the private DNS record with the new machine's IP.
This results in the backend health failing.
Oddly, manually re-saving the backend pool or the routing rule in exactly the same form resolves the issue.
Any ideas what's going on? It feels like it's caching the DNS lookup.
There are at least two solutions:
1. Stop/start the application gateway: https://learn.microsoft.com/en-us/cli/azure/network/application-gateway?view=azure-cli-latest
2. Re-write any of the application gateway's config as part of the deployment pipeline
In my case I chose to switch from routing based on FQDN to routing based on the IP address of the VM.
This makes use of option 2.
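If you go with option 1 instead, the stop/start can be scripted with the Azure CLI (az network application-gateway stop / start) or, as a rough sketch, with the Azure SDK for Python (resource names below are hypothetical):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "my-rg"                # hypothetical
APPGW_NAME = "my-appgw"                 # hypothetical

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Stopping and starting the gateway forces it to re-resolve backend FQDNs.
client.application_gateways.begin_stop(RESOURCE_GROUP, APPGW_NAME).result()
client.application_gateways.begin_start(RESOURCE_GROUP, APPGW_NAME).result()
```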
I have an Azure Cloud Service, mywebapp.cloudapp.net, that consists of two Azure VMs - mywebappvm1 and mywebappvm2. Both VMs are in the same Availability Set and have the same DNS name.
I also have a Regional Reserved IP address assigned to the Cloud Service so that I can give our clients a guaranteed IP address that our app uses.
Part of the app uses a private background process, currently only running on one of the VMs. I want to be able to make a connection to that process over TCP running on mywebappvm1 from mywebappvm2. I could use the public IP and an endpoint on mywebappvm1 but I don't want the background service to be publicly accessible.
I'm currently using the private IP address, but is that safe? Will the private IP of each VM change if it's rebooted? I can't see an easy way of fixing the private IP of each VM - that seems like something you can do with a VNet, but I can't find any information on how to do it with a cloud service and an availability set as well.
Is there perhaps another way to run a web app on multiple load-balanced VMs within an availability set that would make this easier?
What you are doing is absolutely safe and is actually a recommended best practice. You should not go out via the public IP address in order to communicate between the Virtual Machines.
It is also a recommended best practice to organize your Virtual Machines into a Virtual Network and subnets.
This excellent blog post describes how you can even use static IP addresses for the VMs, so you are always 100% sure that mywebappvm1 always gets the IP address XXX.XXX.XXX.XXX and your mywebappvm2 always gets YYY.YYY.YYY.YYY.
Please note that if you do not assign a static IP address to the VM, there is no guarantee that its IP address will stay the same - it may change.
The IP for a webRole VM instance will not change for the lifetime of the deployment, regardless of reboots, updates or swaps. The IP will be released only if you delete the deployment, as detailed here
I created 2 VMs, one for CentOS and another one for Azure; I used the same cloud service, but both have the same public IP address. Why? Can I change it?
Or do they have to be in separate cloud services?
By default, they sit behind a single public IP address that load-balances traffic to their private IP addresses. Until recently, there was no way to get a public IP for an individual virtual machine.
Now, it's possible to assign a public IP to a virtual machine:
With Instance-level Public IPs for VMs, you can now assign public IP addresses to your virtual machines, so they become directly addressable without having to map an endpoint through a VIP. This feature will enable scenarios like easily running FTP servers in Azure and monitoring virtual machines directly using their IPs.
We are making this new capability available in preview form today. This feature is available only with new deployments and new virtual networks and can be enabled via PowerShell.
Typically, the load balancer is fine, but there are options if you absolutely need access to individual machines.
Since they're in the same cloud service, they're probably behind the same load balancer, and a load balancer would only have one public IP.
So, yes, I would use different cloud services as you mentioned.