How to get hold of the Azure Kubernetes cluster outbound IP address - Azure

We have a basic AKS cluster set up, and we need to whitelist its outbound IP address in one of our services. I scanned the AKS cluster settings in the Azure portal, but I was not able to find any outbound IP address.
How do we get the outbound IP?
Thanks -Nen

If you are using an AKS cluster with a Standard SKU Load Balancer i.e.
$ az aks show -g $RG -n akstest --query networkProfile.loadBalancerSku -o tsv
Standard
and the outboundType is set to loadBalancer i.e.
$ az aks show -g $RG -n akstest --query networkProfile.outboundType -o tsv
loadBalancer
then you should be able to fetch the outbound IP addresses for the AKS cluster like this (mind the capital IP):
$ az aks show -g $RG -n akstest --query networkProfile.loadBalancerProfile.effectiveOutboundIPs[].id
[
"/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/MC_xxxxxx_xxxxxx_xxxxx/providers/Microsoft.Network/publicIPAddresses/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
]
# Using $PUBLIC_IP_RESOURCE_ID obtained from the last step
$ az network public-ip show --ids $PUBLIC_IP_RESOURCE_ID --query ipAddress -o tsv
xxx.xxx.xxx.xxx
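To chain the two steps, something like this should also work (a minimal sketch, assuming a single outbound IP):
$ PUBLIC_IP_RESOURCE_ID=$(az aks show -g $RG -n akstest --query "networkProfile.loadBalancerProfile.effectiveOutboundIPs[0].id" -o tsv)
$ az network public-ip show --ids $PUBLIC_IP_RESOURCE_ID --query ipAddress -o tsv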
For more information please check Use a public Standard Load Balancer in Azure Kubernetes Service (AKS)
If you are using an AKS cluster with a Basic SKU Load Balancer i.e.
$ az aks show -g $RG -n akstest --query networkProfile.loadBalancerSku -o tsv
Basic
and the outboundType is set to loadBalancer i.e.
$ az aks show -g $RG -n akstest --query networkProfile.outboundType -o tsv
loadBalancer
Load Balancer Basic chooses a single frontend to be used for outbound flows when multiple (public) IP frontends are candidates for outbound flows. This selection is not configurable, and you should consider the selection algorithm to be random. This public IP address is only valid for the lifespan of that resource. If you delete the Kubernetes LoadBalancer service, the associated load balancer and IP address are also deleted. If you want to assign a specific IP address or retain an IP address for redeployed Kubernetes services, you can create and use a static public IP address, as #nico-meisenzahl mentioned.
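For reference, creating such a static public IP with the Azure CLI could look roughly like this (a sketch only; the node resource group and IP name are placeholders, and the resulting address would then be set as loadBalancerIP on the Kubernetes Service):
$ az network public-ip create -g MC_xxxxxx_xxxxxx_xxxxx -n myAKSStaticIP --sku Basic --allocation-method static --query publicIp.ipAddress -o tsv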
The static IP address works only as long as you have one Service on the AKS cluster (with a Basic Load Balancer). When multiple addresses are configured on the Azure Load Balancer, any of these public IP addresses are a candidate for outbound flows, and one is selected at random. Thus every time a Service gets added, you will have to add that corresponding IP address to the whitelist which isn't very scalable. [Reference]
In the latter case, we would recommend setting outboundType to userDefinedRouting at the time of AKS cluster creation. If userDefinedRouting is set, AKS won't automatically configure egress paths. The egress setup must be done by you.
The AKS cluster must be deployed into an existing virtual network with a subnet that has been previously configured because when not using standard load balancer (SLB) architecture, you must establish explicit egress. As such, this architecture requires explicitly sending egress traffic to an appliance like a firewall, gateway, proxy or to allow the Network Address Translation (NAT) to be done by a public IP assigned to the standard load balancer or appliance.
Load balancer creation with userDefinedRouting
AKS clusters with an outbound type of UDR receive a standard load balancer (SLB) only when the first Kubernetes service of type LoadBalancer is deployed. The load balancer is configured with a public IP address and a backend pool for inbound requests. Inbound rules are configured by the Azure cloud provider, but no outbound public IP address or outbound rules are configured as a result of having an outbound type of UDR. Your UDR will still be the only source for egress traffic.
Azure load balancers don't incur a charge until a rule is placed.
Important: Using outbound type UDR is an advanced networking scenario and requires proper network configuration.
Here are instructions to Deploy a cluster with outbound type of UDR and Azure Firewall.
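For illustration, creating a cluster with this outbound type could look roughly like the following (a sketch only; the subnet ID is a placeholder, and the subnet's route table or firewall must already be configured):
$ az aks create -g $RG -n akstest --vnet-subnet-id $SUBNET_ID --outbound-type userDefinedRouting --load-balancer-sku standard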

You can configure AKS to route egress traffic via a Load Balancer (this is also the default behavior). This also helps you to use the same outgoing IP with multiple nodes.
More details are available here.
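As a sketch (the public IP resource IDs are placeholders), you can also pin egress to specific public IPs on the managed Standard Load Balancer:
$ az aks update -g $RG -n akstest --load-balancer-outbound-ips <publicIpResourceId1>,<publicIpResourceId2>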

Related

Integrating App Service with NAT gateway to get static outbound IP

Firstly, I integrated the VNet with the Azure App Service.
In order to route traffic to the VNet, I added WEBSITE_VNET_ROUTE_ALL with value 1 in the App Service settings.
I created a NAT gateway and attached it to the subnet.
I also created a route table and attached it to the subnet. In that route, I set the address prefix to the VNet address space, selected Virtual appliance as the next hop type, and set the next hop address to the NAT gateway public IP.
Note: I used the below link for reference:
https://sakaldeep.com.np/1159/azure-nat-gateway-and-web-app-vnet-integration-to-get-static-outbound-ip
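Roughly, the CLI equivalent of those steps might look like this (a sketch only; all resource names are hypothetical):
az network nat gateway create -g myRG -n myNatGateway --public-ip-addresses myNatPublicIP
az network vnet subnet update -g myRG --vnet-name myVnet -n integrationSubnet --nat-gateway myNatGateway
az webapp config appsettings set -g myRG -n myApp --settings WEBSITE_VNET_ROUTE_ALL=1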
After doing all the above steps, I checked with the command below, but I did not get the NAT gateway IP as the result:
az webapp show --resource-group <group_name> --name <app_name> --query outboundIpAddresses --output tsv
Azure App Service is a multi-tenant service. All App Service plans in the same deployment unit, and app instances that run in them, share the same set of virtual IP addresses. When you run
az webapp show --resource-group <group_name> --name <app_name> --query outboundIpAddresses --output tsv
you just get the outbound IP address properties of your web app. To find all possible outbound IP addresses for your app, regardless of pricing tier, click Properties in your app's left-hand navigation; they are listed in the Additional Outbound IP Addresses field. These outbound IP addresses will not change.
But if you send a request from your web app within a VNet over the internet, you should find the NAT gateway IP as the source.
For example, you could try to find the public IP from SSH (on a Linux App Service) with the command:
curl ipinfo.io/ip

Access kubernetes services behind IKEv2 VPN (strongswan) on AKS

I am trying to establish an IKEv2 VPN between one VM (subnet: 20.25.0.0/16) and one AKS cluster (subnet: 10.0.0.0/16 - Azure CNI) using a strongswan gateway. I need to access some Kubernetes services behind this AKS cluster. With Azure CNI, each pod is assigned an IP address from the pod subnet specified at cluster creation, and this subnet is attached to interface eth0 on each node. Kubernetes services of type ClusterIP, on the other hand, get an IP from the service CIDR range specified at cluster creation, but this IP is only available inside the cluster and is not attached to any interface of the nodes, unlike the pod subnet.
To run strongswan on K8S it is necessary to mount the kernel modules (/lib/modules) and to enable the NET_ADMIN capability. The VPN tunnel is established using one of the networks attached to the host (node) interfaces, so I cannot establish a VPN using the service CIDR range specified at cluster creation, since these IPs are known only within the cluster, through custom routes, and are not attached to any host interface. If I try to configure the VPN with a subnet in the service CIDR range given at cluster creation, I get an error stating that the subnet was not found on any of the interfaces.
To get around this, I realized that I can configure a tunnel with a subnet with a larger range, as long as there is a subnet attached to my interface that falls within the wider range. For example, I can configure a VPN for the subnet 10.0.0.0/16 while my subnet for pods and nodes (attached to eth0) is 10.0.0.0/17 and the CIDR range for services is 10.0.128.0/17; this way all 10.0.0.0/16 traffic is routed through the VPN tunnel. So, as a workaround, I define my service CIDR as the network adjacent to the network of pods and nodes and configure the VPN using a network that overlaps both.
All 10.0.0.0/16 traffic from one side of the VPN (the VM) is correctly routed into the tunnel. If I try to access a pod directly, using any IP from the pod subnet (10.0.0.0/17), everything works fine. The issue is that if I try to access a Kubernetes service using an IP from the service CIDR (10.0.128.0/17), the traffic is not routed correctly to the Kubernetes service. I can see the request with tcpdump on the AKS side, but it does not arrive at the service. So my question is: how do I configure strongswan so that I can access the services on the AKS Kubernetes cluster?
Below is the current strongswan configuration:
PEER-1 (VM):
conn %default
    authby="secret"
    closeaction="restart"
    dpdaction="restart"
    dpddelay="5"
    dpdtimeout="10"
    esp="aes256-sha1-modp1536"
    ike="aes256-sha1-modp1024"
    ikelifetime="1440m"
    keyexchange="ikev2"
    keyingtries="1"
    keylife="60m"
    mobike="no"

conn PEER-1
    auto=add
    leftid=<LEFT-PHASE-1-IP>
    left=%any
    leftsubnet=20.25.0.0/16
    leftfirewall=yes
    leftauth=psk
    rightid=<RIGHT-PHASE-1-IP>
    right=<RIGHT-PHASE-1-IP>
    rightsubnet=10.0.0.0/16
    rightfirewall=yes
    rightauth=psk
PEER-2 (AKS):
conn %default
    authby="secret"
    closeaction="restart"
    dpdaction="restart"
    dpddelay="5"
    dpdtimeout="10"
    esp="aes256-sha1-modp1536"
    ike="aes256-sha1-modp1024"
    ikelifetime="1440m"
    keyexchange="ikev2"
    keyingtries="1"
    keylife="60m"
    mobike="no"

conn PEER-2
    auto=start
    leftid=<LEFT-PHASE-1-IP>
    left=%any
    leftsubnet=10.0.0.0/16
    leftfirewall=yes
    leftauth=psk
    rightid=<RIGHT-PHASE-1-IP>
    right=<RIGHT-PHASE-1-IP>
    rightsubnet=20.25.0.0/16
    rightfirewall=yes
    rightauth=psk
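To check what the tunnel actually negotiated and which IPsec policies (traffic selectors) the kernel installed, something like the following on either peer can help (diagnostic commands only, not part of the configuration):
ipsec statusall
ip xfrm policy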

Can I direct an IP address to a specific Azure VM scale set, if more than one scale set is running under a load balancer?

I have 2 different Virtual machine scale sets running in Azure. They're both in the same resource group. I have a single Azure Load balancer for kubernetes, and it has a backend pool that contains all of the VMs from both scale sets.
I have 2 different 'Public IP addresses' set up in Azure. I want one of these IPs to point to the 1st virtual machine scale set, and the 2nd to point to the 2nd virtual machine scale set (preferably without having to specify the VMs to connect to - I'd prefer the IPs to point to the scale set somehow, not the individual VMs, if possible).
For both virtual machine scale sets, the 'Networking' section in Azure shows an inbound port rule for the 1st IP, but the 2nd IP isn't currently present at all in the inbound port rules.
Under the Load balancer > Inbound NAT rules section, I looked at adding a new rule, but the 'Target virtual machine' dropdown on the 'Add inbound NAT rule' page doesn't show any options (and in any case, I'd prefer to target the IP to the VM scale set, if possible).
I looked at the following questions, but they don't address my scenario. Is it possible to direct my 1st IP to the 1st VM scale set, and the 2nd IP to the 2nd VM scale set? If not, can I direct my 2nd IP to one of the VMs in the 2nd scale set, using the same load balancer? And how would I achieve either of these two approaches?
Load balance between two Azure Virtual Machine Scale Set (VMSS)
Azure VM: More than one Public IP
I have also met this issue and I think it's a problem that needs to be fixed. You cannot solve it in the Azure portal. I solved it via CLI commands; here are the steps:
Create an inbound NAT pool:
az network lb inbound-nat-pool create --backend-port 22 --frontend-port-range-end 50010 --frontend-port-range-start 50000 --lb-name lbName --name natpoolName --protocol Tcp -g resourceGroupName
Find the inbound NAT pool resource ID:
az network lb show -g resourceGroupName -n lbName --query inboundNatPools
Add the inbound NAT pool to the VMSS (substituting your own resource group and VMSS names):
az vmss update -g resourceGroupName -n vmssName --add virtualMachineProfile.networkProfile.networkInterfaceConfigurations[0].ipConfigurations[0].loadBalancerInboundNatPools id=inboundNatPoolId
Upgrade the VMSS instances in the Azure portal.
Then you can see that all the VMSS instances are in the Load Balancer inbound NAT pool. You can also create the inbound NAT pool with a specific frontend IP if you want.
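To tie each NAT pool to a particular public IP, you can name the frontend IP configuration explicitly when creating the pool, roughly like this (a sketch; the frontend IP configuration name and port range are hypothetical):
az network lb inbound-nat-pool create -g resourceGroupName --lb-name lbName --name natpoolName2 --protocol Tcp --frontend-ip-name mySecondFrontendIP --frontend-port-range-start 50020 --frontend-port-range-end 50030 --backend-port 22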

Securing Kubernetes API on Azure only accessible by Local IP (RFC1918)

Notice that when I create an Azure Kubernetes cluster, by default it creates the API server with an *.azmk8s.io FQDN that is external facing.
Is there a way to create it with a local IP instead? If yes, can this be protected by an NSG and Virtual Network to limit connections so they come via a jump server?
Are there any drawbacks to only allowing internal IPs?
Below is the command I used to create it:
az aks create -g [resourceGroup] -n [ClusterName] \
    --windows-admin-password [SomePassword] --windows-admin-username [SomeUserName] \
    --location [Location] --generate-ssh-keys -c 1 --enable-vmss \
    --kubernetes-version 1.14.8 --network-plugin azure
Has anyone tried https://learn.microsoft.com/en-us/azure/aks/private-clusters? Does that still allow externally facing apps but a private management API?
Why not? Only the control plane endpoint is different; in all other regards it's a regular AKS cluster.
In a private cluster, the control plane or API server has internal IP addresses that are defined in the RFC1918 - Address Allocation for Private Internets document. By using a private cluster, you can ensure that network traffic between your API server and your node pools remains on the private network only.
This outlines how to connect to a private cluster with kubectl. NSGs should work as usual.
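As a sketch, the create command from the question with the private cluster flag added would look roughly like this (all other parameters unchanged):
az aks create -g [resourceGroup] -n [ClusterName] \
    --windows-admin-password [SomePassword] --windows-admin-username [SomeUserName] \
    --location [Location] --generate-ssh-keys -c 1 --enable-vmss \
    --kubernetes-version 1.14.8 --network-plugin azure \
    --enable-private-cluster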

Determine IP address/es of Azure Container Instances

Is there a way to determine outbound IPs specific to Azure Container Instances?
Background:
I would like to allow my container instance to send network messages to a service behind a firewall. To configure this firewall, I need to know the outbound IP address or range of IPs.
I found a list of IPs for my region here: https://www.microsoft.com/en-us/download/details.aspx?id=56519, but it covers all services (for my region it's more than 180 entries), not only container instances.
You can get container info by executing this Azure CLI command:
az container show --resource-group "RgName" --name "containerName" --output table
You may be able to use the private IP / VNet deployment feature of ACI (currently in preview) to support this.
https://learn.microsoft.com/en-us/azure/container-instances/container-instances-vnet
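A minimal sketch of such a VNet deployment with the CLI (resource names and image are placeholders):
az container create --resource-group "RgName" --name "containerName" --image mcr.microsoft.com/azuredocs/aci-helloworld --vnet myVnet --subnet aciSubnet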
You can use the CIDR range of the subnet to configure your firewall.
HTH

Resources