Determine IP address(es) of Azure Container Instances

Is there a way to determine the outbound IPs specific to Azure Container Instances?
Background:
I would like to allow my container instance to send network messages to a service behind a firewall. To configure this firewall I need to know the outbound IP address or range of IPs.
I found a list of IPs for my region here https://www.microsoft.com/en-us/download/details.aspx?id=56519 but it covers all services (more than 180 entries for my region), not only Container Instances.
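If you do end up working from that download, the file is JSON with a `values` array of tagged prefix lists, so you can filter it programmatically. A small sketch (the tag name, sample prefixes, and file path below are made-up stand-ins for the real, much larger file; check which tags your copy actually contains):

```shell
# Hypothetical miniature of the downloaded Service Tags file (the real file
# has the same shape but hundreds of tags and prefixes)
cat > /tmp/sample_tags.json <<'EOF'
{"values": [
  {"name": "AzureCloud.westeurope",
   "properties": {"addressPrefixes": ["13.69.0.0/17", "40.68.0.0/14"]}},
  {"name": "AzureCloud.eastus",
   "properties": {"addressPrefixes": ["20.38.98.0/24"]}}
]}
EOF

# Check whether a given outbound IP falls inside any prefix of one region tag
python3 - <<'EOF'
import json, ipaddress

tags = json.load(open("/tmp/sample_tags.json"))["values"]
region = "AzureCloud.westeurope"   # assumed tag name; check your file
prefixes = next(t["properties"]["addressPrefixes"]
                for t in tags if t["name"] == region)
ip = ipaddress.ip_address("13.69.1.1")
print(any(ip in ipaddress.ip_network(p) for p in prefixes))   # True
EOF
```

This narrows the 180+ entries down to one region's prefixes, but as the answers below note, it still cannot isolate Container Instances from other services sharing those ranges.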

You can get container info by running this Azure CLI command:
az container show --resource-group "RgName" --name "containerName" --output table
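If you only need the container group's IP, you can query the `ipAddress.ip` property directly ("RgName" and "containerName" are the placeholders from the command above):

```shell
# Print just the container group's IP address (public or private)
az container show --resource-group "RgName" --name "containerName" \
  --query ipAddress.ip --output tsv
```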

You may be able to use the private IP / VNet deployment feature of ACI (currently in preview) to support this.
https://learn.microsoft.com/en-us/azure/container-instances/container-instances-vnet
You can use the CIDR range of the subnet to configure your firewall.
HTH
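To fetch that subnet CIDR from the CLI (resource names are placeholders):

```shell
# addressPrefix is the subnet's CIDR range, e.g. 10.0.0.0/24
az network vnet subnet show --resource-group "RgName" \
  --vnet-name "myVnet" --name "mySubnet" \
  --query addressPrefix --output tsv
```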

Related

how to get hold of the azure kubernetes cluster outbound ip address

We have a basic AKS cluster set up, and we need to whitelist the AKS outbound IP address in one of our services. I scanned the AKS cluster settings in the Azure portal, but I was not able to find any outbound IP address.
How do we get the outbound IP?
Thanks -Nen
If you are using an AKS cluster with a Standard SKU Load Balancer i.e.
$ az aks show -g $RG -n akstest --query networkProfile.loadBalancerSku -o tsv
Standard
and the outboundType is set to loadBalancer i.e.
$ az aks show -g $RG -n akstest --query networkProfile.outboundType -o tsv
loadBalancer
then you should be able to fetch the outbound IP addresses for the AKS cluster like (mind the capital IP):
$ az aks show -g $RG -n akstest --query networkProfile.loadBalancerProfile.effectiveOutboundIPs[].id
[
"/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/MC_xxxxxx_xxxxxx_xxxxx/providers/Microsoft.Network/publicIPAddresses/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
]
# Using $PUBLIC_IP_RESOURCE_ID obtained from the last step
$ az network public-ip show --ids $PUBLIC_IP_RESOURCE_ID --query ipAddress -o tsv
xxx.xxx.xxx.xxx
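The two steps above can be combined into a small loop that prints every effective outbound IP, which also handles clusters with more than one (resource names are the placeholders used above):

```shell
# Resolve each effective outbound IP resource of the AKS cluster to its address
for id in $(az aks show -g $RG -n akstest \
    --query "networkProfile.loadBalancerProfile.effectiveOutboundIPs[].id" -o tsv); do
  az network public-ip show --ids "$id" --query ipAddress -o tsv
done
```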
For more information please check Use a public Standard Load Balancer in Azure Kubernetes Service (AKS)
If you are using an AKS cluster with a Basic SKU Load Balancer i.e.
$ az aks show -g $RG -n akstest --query networkProfile.loadBalancerSku -o tsv
Basic
and the outboundType is set to loadBalancer i.e.
$ az aks show -g $RG -n akstest --query networkProfile.outboundType -o tsv
loadBalancer
Load Balancer Basic chooses a single frontend to be used for outbound flows when multiple (public) IP frontends are candidates for outbound flows. This selection is not configurable, and you should consider the selection algorithm to be random. This public IP address is only valid for the lifespan of that resource. If you delete the Kubernetes LoadBalancer service, the associated load balancer and IP address are also deleted. If you want to assign a specific IP address or retain an IP address for redeployed Kubernetes services, you can create and use a static public IP address, as #nico-meisenzahl mentioned.
The static IP address works only as long as you have one Service on the AKS cluster (with a Basic Load Balancer). When multiple addresses are configured on the Azure Load Balancer, any of these public IP addresses are a candidate for outbound flows, and one is selected at random. Thus every time a Service gets added, you will have to add that corresponding IP address to the whitelist which isn't very scalable. [Reference]
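Creating such a static public IP in the cluster's node resource group might look like this (the resource group and IP names are placeholders); you would then reference the resulting address in the Kubernetes Service's `loadBalancerIP` field:

```shell
# Static public IP created in the node resource group (MC_...), so the
# Kubernetes LoadBalancer Service is allowed to pick it up
az network public-ip create \
  --resource-group MC_myRG_akstest_eastus \
  --name myAKSPublicIP \
  --sku Basic \
  --allocation-method Static \
  --query publicIp.ipAddress -o tsv
```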
In the latter case, we would recommend setting outboundType to userDefinedRouting at the time of AKS cluster creation. If userDefinedRouting is set, AKS won't automatically configure egress paths; the egress setup must be done by you.
The AKS cluster must be deployed into an existing virtual network with a subnet that has been previously configured because when not using standard load balancer (SLB) architecture, you must establish explicit egress. As such, this architecture requires explicitly sending egress traffic to an appliance like a firewall, gateway, proxy or to allow the Network Address Translation (NAT) to be done by a public IP assigned to the standard load balancer or appliance.
Load balancer creation with userDefinedRouting
AKS clusters with an outbound type of UDR receive a standard load balancer (SLB) only when the first Kubernetes service of type LoadBalancer is deployed. The load balancer is configured with a public IP address for inbound requests and a backend pool for inbound traffic. Inbound rules are configured by the Azure cloud provider, but no outbound public IP address or outbound rules are configured as a result of having an outbound type of UDR. Your UDR will still be the only source for egress traffic.
Azure load balancers don't incur a charge until a rule is placed.
[!! Important: Using an outbound type of UDR is an advanced networking scenario and requires proper network configuration.]
Here are instructions to Deploy a cluster with outbound type of UDR and Azure Firewall.
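A minimal sketch of such a cluster creation, assuming the subnet's route table already forces 0.0.0.0/0 to your firewall (the subscription and resource names in the subnet ID are placeholders):

```shell
# Cluster with user-defined routing: AKS configures no egress of its own,
# so the subnet's route table/firewall fully controls the outbound IP
az aks create -g $RG -n akstest \
  --network-plugin azure \
  --outbound-type userDefinedRouting \
  --vnet-subnet-id "/subscriptions/<sub-id>/resourceGroups/$RG/providers/Microsoft.Network/virtualNetworks/myVnet/subnets/aks-subnet"
```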
You can configure AKS to route egress traffic via a load balancer (this is also the default behavior). This also lets you use the same outgoing IP across multiple nodes.
More details are available here.

cannot access ACI restful endpoint deployed to VNet

I deployed a docker image to an ACR and then to an ACI with a command like this:
az container create \
  --resource-group myrg \
  --name myamazingacr \
  --image myamazingacr.azurecr.io/test3:v1 \
  --cpu 1 \
  --memory 1 \
  --vnet myrg-vnet \
  --vnet-address-prefix 10.0.0.0/16 \
  --subnet default \
  --subnet-address-prefix 10.0.0.0/24 \
  --registry-login-server myamazingacr.azurecr.io \
  --registry-username xxx \
  --registry-password xxx \
  --ports 80
This all works without error; the IP of the ACI is 10.0.0.5, and there is no FQDN since it is in a virtual network. I think this makes sense.
When I run the image outside Azure (i.e. on my local machine where I created the image) I can successfully access an endpoint like this:
http://127.0.0.1/plot
http://127.0.0.1/predict_petal_length?petal_width=3
[127.0.0.1] indicates that I run the image on the local machine.
However, this does not work:
http://10.0.0.5/plot
http://10.0.0.5/predict_petal_length?petal_width=3
I get:
This site can’t be reached. 10.0.0.5 took too long to respond.
What could be wrong please?
PS:
Maybe it is related to this:
https://learn.microsoft.com/en-us/answers/questions/299493/azure-container-instance-does-not-resolve-name-wit.html
I have to say I find Azure really frustrating to work with. Nothing really seems to work. Starting with Azure ML to ACIs ...
PPS:
this is what our IT says - tbh I do not fully understand ...
• Private endpoints are not supported, so we need to create a vnet in the resource group, peer it to the current dev vnet, and we should be good.
• We basically need to know how we can create an ACR with its network in an existing vnet in a different resource group. I am struggling to find the correct way to do this.
Since you have deployed your ACI into an Azure virtual network, your containers can communicate securely with other resources in that virtual network, so you can access the ACI endpoint from within the vNet.
For example, you can try deploying a VM into the vNet, in a different subnet than your ACI, and then access the ACI endpoint from that Azure VM.
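A quick way to run that test without opening SSH ports is `az vm run-command` (all names are placeholders; the 10.0.0.5 address and /plot path come from the question):

```shell
# Jump VM in a different subnet of the same VNet
az vm create -g myrg -n test-vm --image Ubuntu2204 \
  --vnet-name myrg-vnet --subnet vm-subnet \
  --admin-username azureuser --generate-ssh-keys

# Probe the ACI's private endpoint from inside the VNet
az vm run-command invoke -g myrg -n test-vm \
  --command-id RunShellScript \
  --scripts "curl -s -m 10 http://10.0.0.5/plot"
```

If this curl succeeds while your browser outside the VNet cannot connect, the container itself is fine and the issue is purely network reachability.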
Alternatively, you can expose a static IP address for a container group by using an application gateway with a public frontend IP address.
A likely reason for your issue is that your application is listening on the wrong IP address. 127.0.0.1 is the localhost/loopback IP, which can only be reached from inside the machine (or container) itself. Take a look here. So try changing the listen address to 0.0.0.0; that one is accessible from outside.
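To illustrate the difference locally (using Python's built-in HTTP server purely as a stand-in for your app):

```shell
# Loopback only: reachable just from inside this machine/container
python3 -m http.server 8000 --bind 127.0.0.1

# All interfaces: reachable from other hosts on the network/VNet too
python3 -m http.server 8000 --bind 0.0.0.0
```

Whatever framework the container runs, the fix is the same: bind the server to 0.0.0.0 (or the container's own interface), not to 127.0.0.1.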

Integrating App Service with NAT gateway to get static outbound IP

Firstly, I integrated a VNet with the Azure App Service.
In order to route traffic to the VNet, I added WEBSITE_VNET_ROUTE_ALL with value 1 in the App Service settings.
I created a NAT gateway and attached it to the subnet.
I also created a route table and attached it to the subnet. In that route I set the address prefix to the VNet address space, selected Virtual appliance as the next hop type, and entered the NAT gateway's public IP as the next hop address.
Note: I used the below link for reference:
https://sakaldeep.com.np/1159/azure-nat-gateway-and-web-app-vnet-integration-to-get-static-outbound-ip
After doing all the above steps, I checked with the below command, but I didn't get the NAT gateway IP as the result:
az webapp show --resource-group <group_name> --name <app_name> --query outboundIpAddresses --output tsv
Azure App Service is a multi-tenant service. All App Service plans in the same deployment unit, and app instances that run in them, share the same set of virtual IP addresses. When you run
az webapp show --resource-group <group_name> --name <app_name> --query outboundIpAddresses --output tsv
you just get the outbound IP address properties of your web app. You can find all possible outbound IP addresses for your app, regardless of pricing tier, by clicking Properties in your app's left-hand navigation; they are listed in the Additional Outbound IP Addresses field. These outbound IP addresses will not change.
But if you send a request from your web app within a VNet over the internet, you should find the NAT gateway IP as the source.
For example, you could try to find the public IP from SSH (Linux App Service) with the command:
curl ipinfo.io/ip
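To know which address that curl should return, you can look up the NAT gateway's public IP from the CLI first (the gateway name is a placeholder):

```shell
# Resource IDs of the public IPs attached to the NAT gateway
az network nat gateway show -g <group_name> -n <natgw_name> \
  --query "publicIpAddresses[].id" -o tsv

# Resolve one of those IDs to the actual address; this is what
# `curl ipinfo.io/ip` from the app should report once routing works
az network public-ip show --ids <public_ip_resource_id> \
  --query ipAddress -o tsv
```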

Azure Container Instance - dns and subnet in the same container

I have an Azure Container Instance with a subnet configuration (I need to access an internal service), but I also need to configure DNS.
When I try to create the container, it returns this message: "The IP address type can not be public when the network profile is set."
Is it possible to configure DNS and the subnet on the same container?
Unfortunately, if you deploy Azure Container Instances into a subnet of a VNet, you cannot set a public IP or DNS name label for it. Azure does not support this, at least for now; maybe it will be supported in the future. For more details, see Virtual network deployment limitations.
Container groups deployed to a virtual network do not currently
support public IP addresses or DNS name labels.
Hope this will help you.
The error with the network profile looks like a bug in the az command tool. If you just specify your VNet name and subnet name, it will create a network profile name for you.
If you want to use DNS to resolve these names, you'll need to set up DNS separately and call an additional az command to configure the DNS after you create the container instance.
az network dns record-set a add-record ...
See this doc for using Azure DNS with private IP addresses.
Use Azure DNS for private domains
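A sketch of those extra steps with Azure Private DNS (the zone name, link name, record name, and the 10.0.0.5 address are placeholders/examples):

```shell
# Private DNS zone, linked to the VNet the ACI lives in
az network private-dns zone create -g myrg -n private.contoso.com
az network private-dns link vnet create -g myrg -z private.contoso.com \
  -n myDnsLink -v myrg-vnet --registration-enabled false

# A record pointing a friendly name at the container group's private IP
az network private-dns record-set a add-record -g myrg \
  -z private.contoso.com -n myaci -a 10.0.0.5
```

Resources inside the VNet can then reach the container at myaci.private.contoso.com.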

Is It Possible to Restrict Access to Azure Container Instance with IP restrictions

I am creating an Azure container instance to host an index for testing purposes. Currently I can only get it to work with IpAddressType set as Public, but of course this makes the index available to the world.
Is it possible to secure an Azure container instance with IP restrictions, preferably using PowerShell?
When I configure the container image with IpAddressType set as Private, I am unable to access the index.
Below is the command I am using to create the container instance:
New-AzureRmContainerGroup -ResourceGroupName $resourceGroup `
-Name indexcontainer `
-Image $image `
-IpAddressType Public `
-Location $resourceGroupLocation `
-MemoryInGB 6 `
-Cpu 2 `
-Port 9200
TODAY:
Not with container groups: if you open up a port on a container group, it is public to the world.
A container group is the little brother (mini version) of full-on AKS.
AKS, the big brother, gives you more control.
See: https://learn.microsoft.com/en-us/azure/aks/internal-lb
-IpAddressType Accepted values: Public
https://learn.microsoft.com/en-us/powershell/module/azurerm.containerinstance/new-azurermcontainergroup?view=azurermps-6.13.0
Note that the only value accepted in the documentation is "Public".
However, they put the placeholder in for future values besides "Public", so I think they see this as a gap in functionality.
As mentioned in the comment above, you can now deploy them into a VNet (in preview):
https://learn.microsoft.com/en-us/azure/container-instances/container-instances-vnet
Once connected to a VNET you can use Network Security Groups to only allow traffic from allowed IPs or networks. The route you are currently taking will not work.
It seems like no, at least not natively with Azure Container Instances.
There are two options to deploy Azure Container Instances:
Public IP - you can't restrict access to this type of deployment.
Custom VNet - you can apply restrictions with network security groups (NSGs), but Azure Container Instances doesn't support exposing containers publicly in this case.
See documentation:
Unsupported networking scenarios:
Public IP or DNS label - Container groups deployed to a virtual network don't currently support exposing containers directly to the internet with a public IP address or a fully qualified domain name
As an option, you can try the following (it supports restrictions for HTTP/HTTPS traffic only):
Put an Application Gateway in front of the ACI deployed in the custom VNet to expose the containers publicly (you can find examples, like this one).
Add IP whitelisting restrictions to the NSG in the custom VNet for the ACI.
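For the NSG part, a sketch of whitelisting a single source range on the ACI subnet's NSG (the resource names and the 203.0.113.0/24 range are placeholders; port 9200 comes from the question):

```shell
# Allow only one trusted range to reach port 9200
az network nsg rule create -g myrg --nsg-name aci-subnet-nsg \
  -n AllowTrustedTo9200 --priority 100 --access Allow --direction Inbound \
  --protocol Tcp --source-address-prefixes 203.0.113.0/24 \
  --destination-port-ranges 9200

# Explicitly deny everyone else on that port
az network nsg rule create -g myrg --nsg-name aci-subnet-nsg \
  -n DenyAllInbound9200 --priority 200 --access Deny --direction Inbound \
  --protocol Tcp --destination-port-ranges 9200
```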
