How to configure Azure ContainerApps with a Static Outbound IP?

In the documentation for Azure ContainerApps, the "Ports and IP Addresses" section says this about the outbound public IP:
Used as the "from" IP for outbound connections that leave the virtual network. These connections aren't routed down a VPN. Using a NAT gateway or other proxy for outbound traffic from a Container App environment isn't supported. Outbound IPs aren't guaranteed and may change over time.
The inbound IP for a ContainerApps Environment is fixed. Azure Container Instances (not ContainerApps), on the other hand, has documented support for configuring a static outbound IP via a NAT gateway.
Is there a way to configure a static outbound IP for Azure ContainerApps as well?
If not, which alternative deployment models for a long-running background service are recommended? The requirement is that an external service can rely on a fixed outbound IP (or a very small range, not the entire data-center IP ranges) for whitelisting.
** EDIT - It seems that a NAT gateway on the VNet is not yet supported for ACA - https://github.com/microsoft/azure-container-apps/issues/522

Is there a way to configure a static outbound IP for Azure ContainerApps as well?
No, you can't configure a static outbound public IP for Container Apps; that is stated in the official documentation itself.
As a workaround, try this: create an outbound application rule on the firewall
using the command below:
az network firewall application-rule create
It creates an outbound rule on the firewall that allows access from the subnet (the approach is documented for Azure Container Instances). HTTP access to the site is then routed through the firewall's egress IP address.
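For context, a fleshed-out version of that command might look like the following sketch (resource group, firewall name, subnet range, and FQDN are placeholders, not taken from the answer):
# Outbound application rule on Azure Firewall allowing HTTP/HTTPS from the container subnet to a specific FQDN
az network firewall application-rule create --resource-group myRG --firewall-name myFirewall \
    --collection-name container-egress --name allow-site --priority 100 --action Allow \
    --protocols Http=80 Https=443 --source-addresses 10.0.2.0/24 --target-fqdns www.example.com
With a default route (0.0.0.0/0) from the subnet pointing at the firewall, the destination service then only ever sees the firewall's static public IP as the source.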
I have found a blog on this approach; refer to that as well.

Related

How to open port 22 on Azure Kubernetes Service for the loopback IP 127.0.0.1

How should we open port 22 on the AKS loopback IP?
We are trying to telnet to the loopback IP on port 22, which works fine on any Linux VM, but on AKS we get the error "Connection closed".
Note that AKS clusters have unrestricted outbound (egress) internet access. This level of network access allows nodes and services you run to access external resources as needed. If you wish to restrict egress traffic, a limited number of ports and addresses must remain accessible to maintain healthy cluster maintenance tasks. The simplest solution to securing outbound addresses lies in the use of a firewall device that can control outbound traffic based on domain names. Azure Firewall, for example, can restrict outbound HTTP and HTTPS traffic based on the FQDN of the destination. You can also configure your preferred firewall and security rules to allow these required ports and addresses.
Thus, you can configure an inbound rule and an outbound rule to allow traffic on port 22 (SSH) for the destination IP address 127.0.0.1 (the loopback address). To do so, refer to the documentation link below:
https://learn.microsoft.com/en-us/azure/aks/limit-egress-traffic#adding-firewall-rules
According to the above link, you must deploy an Azure Firewall, create a user-defined route (UDR) with a next hop to the firewall, and associate it with the AKS subnet. With the Azure Firewall configured in front of the AKS cluster in this way, you can control both ingress and egress port traffic.
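As an illustration of that setup (resource names, address ranges, and the firewall's private IP below are placeholders, not taken from the answer), the UDR and a matching firewall rule for port 22 could be created roughly like this:
# Route all egress from the AKS subnet to the Azure Firewall's private IP
az network route-table create --resource-group myRG --name aks-egress-rt
az network route-table route create --resource-group myRG --route-table-name aks-egress-rt \
    --name default-via-firewall --address-prefix 0.0.0.0/0 \
    --next-hop-type VirtualAppliance --next-hop-ip-address 10.0.1.4
az network vnet subnet update --resource-group myRG --vnet-name aks-vnet --name aks-subnet \
    --route-table aks-egress-rt
# Allow TCP 22 through the firewall for traffic coming from the AKS subnet
az network firewall network-rule create --resource-group myRG --firewall-name myFirewall \
    --collection-name aks-ssh --name allow-ssh --priority 200 --action Allow \
    --protocols TCP --source-addresses 10.0.2.0/24 --destination-addresses '*' --destination-ports 22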

How to allow outbound traffic on internal load balancer

I have several machines in a backend pool associated with an internal load balancer, but they currently do not have outbound access. The documentation seems to indicate that I should be able to create a public load balancer and attach the same backend pool to it so that I can have outbound access from those machines. However, when I create a public load balancer, I don't have the option of associating it with an existing pool, and when I try to create a new backend pool for the public LB, I can't associate those machines with it. Neither machine has a public IP address. The dashboard shows a view where all the interesting information is cut off. What am I missing?
Even VMs in the backend pool of an ILB should have a default outbound IP. If you don't have outbound access, have you checked the security group rules to make sure outbound traffic is allowed?
I'm afraid you can't use the same LB for both inbound and outbound traffic.
If you happen to use the Basic SKU, VMs behind the LB have internet access because outbound connections are NAT'ed by Azure, but all the VMs have to be in the same availability set. This wasn't a great fit for us, so we declined it.
If you use the Standard SKU, outbound connections to the internet are not possible by default. We learned this after many failed and painful attempts. More details here.
As discussed in the above link, attaching a public IP to each VM NIC isn't a good idea either.
What worked for us was to create another load balancer specifically for outbound connections, attach a public IP to that LB, and configure outbound rules. More details here.
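For illustration (all resource names here are placeholders, not from the original answer), an outbound-only Standard load balancer with an outbound rule might be set up roughly like this:
# Static public IP and a separate Standard LB used only for outbound SNAT
az network public-ip create --resource-group myRG --name outbound-pip --sku Standard --allocation-method Static
az network lb create --resource-group myRG --name outbound-lb --sku Standard \
    --frontend-ip-name outboundFrontend --backend-pool-name outboundPool --public-ip-address outbound-pip
# Outbound rule that SNATs the backend pool through the public IP
az network lb outbound-rule create --resource-group myRG --lb-name outbound-lb --name outboundRule \
    --frontend-ip-configs outboundFrontend --address-pool outboundPool \
    --protocol All --idle-timeout 4 --allocated-outbound-ports 10000
The VMs' NICs then need to be added to outboundPool (for example with az network nic ip-config address-pool add) so their outbound traffic goes through that public IP.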

Restrict inbound traffic to only come through Azure Load Balancer

Can someone please advise how to restrict access on ports 80/443 to some Azure VMs, so that they can only be accessed via the public IP address associated with an Azure Load Balancer?
Our current setup has load-balancing rules passing traffic from the public IP on 80=>80 and 443=>443 to a backend pool of 2 VMs. We have a health probe set up on port 80. Session persistence is set to client IP, and floating IP is disabled.
I thought the answer was to deny access (via a Network Security Group) to the Internet service tag on 80/443, then add a rule to allow the AzureLoadBalancer service tag on the same ports. But that didn't seem to have any effect. Having read up a little more on this, it seems the AzureLoadBalancer tag only allows the health probe access, not inbound traffic from the load balancer itself.
I have also tried adding rules to allow the public IP address of the load balancer, but again with no effect.
I was wondering whether I need to start looking into Azure Firewall and somehow restrict access to inbound traffic that comes through that?
The only way I can get the VMs to respond on those ports is to add rules allowing 80/443 from Any to Any...
After reading your question, my understanding is that you have a public load balancer and the backend VMs also have instance-level public IPs associated with them, so direct inbound access to the VMs is possible. You would like to make sure that inbound access to the VMs is restricted to the load balancer only.
The simple solution is to disassociate the instance-level public IPs from the VMs; this makes the LB's public IP the only point of contact for your VMs.
Keep in mind that the LB is not a proxy; it is just a layer 4 resource that forwards traffic. Your backend VMs will therefore still see the source IP of the clients, not the LB's IP, so you will still need to allow the traffic at the NSG level using "Any" as the source.
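A minimal sketch of such an NSG rule (NSG and rule names are placeholders; the ports come from the question):
# Allow 80/443 to the backend VMs; the source stays open because the LB preserves client source IPs
az network nsg rule create --resource-group myRG --nsg-name backend-nsg --name allow-web \
    --priority 300 --direction Inbound --access Allow --protocol Tcp \
    --source-address-prefixes '*' --destination-port-ranges 80 443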
However, if your requirement is to enable outbound connectivity from the Azure VMs while avoiding SNAT exhaustion, I would advise you to create a NAT gateway, where you can assign multiple public IP addresses for SNAT, and remove the public IPs from the VMs. This setup makes sure that inbound access is provided by the public load balancer only and outbound access is provided by the NAT gateway, as described in the links below:
Refer : https://learn.microsoft.com/en-us/azure/virtual-network/nat-gateway/nat-gateway-resource#nat-and-vm-with-standard-public-load-balancer
https://learn.microsoft.com/en-us/azure/virtual-network/nat-gateway/tutorial-nat-gateway-load-balancer-public-portal
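For illustration (resource names below are placeholders), the NAT gateway side of that setup could look roughly like this:
# Static public IP dedicated to outbound SNAT, attached to a NAT gateway on the backend subnet
az network public-ip create --resource-group myRG --name natgw-pip --sku Standard --allocation-method Static
az network nat gateway create --resource-group myRG --name myNatGateway \
    --public-ip-addresses natgw-pip --idle-timeout 10
az network vnet subnet update --resource-group myRG --vnet-name myVnet --name backend-subnet \
    --nat-gateway myNatGateway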
You could also configure port forwarding in Azure Load Balancer for the RDP/SSH connections to individual instances.
Refer : https://learn.microsoft.com/en-us/azure/load-balancer/manage#-add-an-inbound-nat-rule
https://learn.microsoft.com/en-us/azure/load-balancer/tutorial-load-balancer-port-forwarding-portal
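As a sketch (load balancer, frontend, and port values are placeholders), an inbound NAT rule for SSH to one instance could be added like this:
# Forward TCP 50001 on the LB frontend to port 22 on a single backend instance
az network lb inbound-nat-rule create --resource-group myRG --lb-name myPublicLB \
    --name ssh-vm1 --protocol Tcp --frontend-ip-name myFrontend \
    --frontend-port 50001 --backend-port 22
The rule then has to be attached to the target VM's NIC IP configuration (for example with az network nic ip-config inbound-nat-rule add).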

Azure Internal ASE with Firewall

I am running a Linux container as a web app in an internal ASE.
The ASE is deployed to a VNet (the secondary VNet), which is peered to another VNet (the primary VNet) where an Azure Firewall exists.
1. I have enabled service endpoints to SQL, Storage, and Event Hubs on the ASE subnet.
2. From the Azure Firewall UI > Rules > Application rule collection, I set the App Service Environment FQDN tag and the Windows Update tag.
3. From the Azure Firewall UI > Rules > Network rule collection, I set the ports to 123. I created another rule the same way for port 12000 to help triage any system issues.
4. I created a route table with the management addresses from the App Service Environment management addresses document with a next hop of Internet, and set 0.0.0.0/0 directed to the network appliance (the firewall's internal IP address); see the sketch after this list.
5. I created application rules to allow HTTP/HTTPS traffic (note: the address is the IP of the ILB of the internal ASE, since I can't find an IP for the web app itself).
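For reference, a rough sketch of the route table from step 4 (resource names, the management prefix, and the firewall's private IP below are placeholders, not the actual values used):
# Published ASE management prefixes keep a next hop of Internet; everything else goes to the firewall
az network route-table create --resource-group myRG --name ase-rt
az network route-table route create --resource-group myRG --route-table-name ase-rt \
    --name ase-mgmt-1 --address-prefix 203.0.113.0/24 --next-hop-type Internet
az network route-table route create --resource-group myRG --route-table-name ase-rt \
    --name default-via-firewall --address-prefix 0.0.0.0/0 \
    --next-hop-type VirtualAppliance --next-hop-ip-address 10.1.0.4
az network vnet subnet update --resource-group myRG --vnet-name ase-vnet --name ase-subnet \
    --route-table ase-rt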
I don't seem to be able to reach the web app. Any guidance would be appreciated. Is the problem that I created an internal ASE?
I am trying to isolate the ASE and control external access to it via a firewall.
MS docs I referenced: https://learn.microsoft.com/en-us/azure/app-service/environment/firewall-integration
Yes, I think the problem is the internal ASE. Also, the referenced document is intended to lock down all egress from the ASE VNet; inbound management traffic for an ASE cannot be sent through a firewall device.
There are a number of inbound dependencies that an ASE has. The inbound management traffic cannot be sent through a firewall device. The source addresses for this traffic are known and are published in the App Service Environment management addresses document. You can create Network Security Group rules with that information to secure inbound traffic.
In addition, since it's an internal ASE, it is deployed in your VNet with an ILB. You cannot directly access its backend web app over the Internet; you need at least a public-facing IP address (an external VIP) or another public-facing service (such as a public Azure Application Gateway) in front of it.
The setup will look like this: Internet -> public Application Gateway (or external VIP) -> ILB of the internal ASE -> web app.
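A minimal sketch of putting a public Application Gateway in front of the ILB (all names, the subnet, and the ILB IP 10.2.0.11 are placeholder assumptions; an ASE typically also needs matching host headers and a custom probe):
# Public Application Gateway whose backend points at the internal ASE's ILB IP
az network public-ip create --resource-group myRG --name appgw-pip --sku Standard --allocation-method Static
az network application-gateway create --resource-group myRG --name ase-appgw \
    --sku Standard_v2 --capacity 2 --vnet-name primary-vnet --subnet appgw-subnet \
    --public-ip-address appgw-pip --servers 10.2.0.11 --priority 100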

Control outbound IP address of internal VMSS in Azure

I have a VMSS / Service Fabric cluster on an internal VNet (not public). The only inbound connections to the VMSS come from on-premises through an Azure VPN Gateway.
How do I control the outbound IP address the VMSS goes through when accessing the internet? In this case I do not want this traffic routed through a random IP address or through the VPN connection.
Basically, I want to secure my Azure SQL so that the outbound internet IPs of the VMSS are whitelisted, and I don't want to add all the Azure data-center IPs.
You could look at using forced tunneling, which would ensure that you control where the data egress occurs in your on-premises environment; however, this would force all internet-bound traffic in your virtual network back over your VPN connection, which may not be desirable (or helpful if you don't control egress from there).
Failing that, you could add a software-based firewall running on an Azure VM with a public IP onto the same VNet, use user-defined routes (UDRs) to force all internet-bound traffic through it, and then use that public IP address in your SQL firewall.
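As an illustration of that approach (resource names and IP addresses below are placeholders; "the firewall VM" stands for whatever appliance you deploy), the UDR and the SQL firewall entry could be created roughly like this:
# Send all internet-bound traffic from the VMSS subnet through the firewall VM's private IP
az network route-table create --resource-group myRG --name vmss-egress-rt
az network route-table route create --resource-group myRG --route-table-name vmss-egress-rt \
    --name default-via-fw --address-prefix 0.0.0.0/0 \
    --next-hop-type VirtualAppliance --next-hop-ip-address 10.0.3.4
az network vnet subnet update --resource-group myRG --vnet-name myVnet --name vmss-subnet \
    --route-table vmss-egress-rt
# Whitelist the firewall VM's public IP on the Azure SQL logical server
az sql server firewall-rule create --resource-group myRG --server my-sql-server \
    --name allow-fw-egress --start-ip-address 203.0.113.10 --end-ip-address 203.0.113.10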
Longer term you will be able to connect Azure SQL DB to VNets (or at least restrict access to it from one); see the UserVoice site (and add your vote!).
