I just created a new AKS cluster that has to replace an old cluster. The new cluster is now ready to replace the old one, except for one crucial thing: its outbound IP address. The address of the old cluster must be used so that our existing DNS records do not have to change.
How do I change the public IP address of the Azure load balancer (the one used by the nginx ingress controller) of the new cluster to the one used by the old cluster?
The old cluster is still running; I want to switch it off / delete it once the new cluster is available. Some downtime to switch the IP address is acceptable.
I think the IP first has to be deleted from the Frontend IP configuration of the old cluster's load balancer and can then be added to the Frontend IP configuration of the load balancer used in the new cluster. But I need to know exactly how to do this and what else needs to be done, if anything (maybe adding a backend pool?).
Update
During the installation of the new cluster I had already added the public IP address of the old cluster's load balancer to the YAML of the new ingress-nginx-controller.
The nginx controller load balancer in the new cluster is in the Pending state and continuously generates events with the message "Ensuring Load Balancer". Could it be as simple as assigning another IP address to the ingress-nginx-controller load balancer in the old cluster so that the IP can be used in the new cluster?
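For reference, the events behind that Pending state can be inspected on the Service itself; a sketch, assuming the default ingress-nginx names:
$ kubectl describe svc ingress-nginx-controller -n ingress-nginx
# the Events section shows why "Ensuring load balancer" keeps repeating, e.g. the
# requested loadBalancerIP still being attached to the old cluster's load balancer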
You have to create a static public IP address for the AKS cluster. Once you delete the old cluster, the public IP address and load balancer associated with it will be deleted as well. You can check this documentation[1] for a detailed guide; a rough sketch of the approach follows below.
[1] https://learn.microsoft.com/en-us/azure/aks/static-ip
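A rough sketch of what that doc describes (resource names are illustrative; the annotation is only needed when the IP lives outside the node resource group, and the cluster identity then needs network permissions on that resource group):
$ az network public-ip create \
    --resource-group MC_myResourceGroup_myAKSCluster_eastus \
    --name myAKSPublicIP --sku Standard --allocation-method static \
    --query publicIp.ipAddress -o tsv
# point spec.loadBalancerIP of the ingress controller Service at the printed address;
# if the IP lives in another resource group, also add:
$ kubectl annotate service ingress-nginx-controller -n ingress-nginx \
    "service.beta.kubernetes.io/azure-load-balancer-resource-group=<ip-resource-group>"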
I managed to assign the old IP to the new cluster. These are the steps I followed:
Create a new static public IP in the node resource group of the old cluster (nn.nn.nn.nn):
az network public-ip create --resource-group MC_rg-my-old-cluster \
--name aks-public-ip-tmp --sku Standard --allocation-method static \
--query publicIp.ipAddress -o tsv
Put the new IP in the LoadBalancer Service of the nginx controller:
spec.loadBalancerIP: nn.nn.nn.nn
Move the old IP address (oo.oo.oo.oo) to the resource group of the new cluster:
Find the resource group of the old cluster and open it;
Click on the public IP address that you want to move;
In the menu at the top there is a 'Move' item; select "Move to another resource group";
Select the resource group of the new cluster.
After the IP is moved (this can take a while) you can use it in the LoadBalancer Service of the nginx controller in the new cluster (see the sketch below):
spec.loadBalancerIP: oo.oo.oo.oo
I don't know if steps 1 and 2 are really needed.
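For reference, the last step can also be done with a one-off patch instead of editing the manifest; a minimal sketch, assuming the controller Service keeps its default name ingress-nginx-controller in the ingress-nginx namespace:
$ kubectl -n ingress-nginx patch svc ingress-nginx-controller \
    --type merge -p '{"spec":{"loadBalancerIP":"oo.oo.oo.oo"}}'
# the same patch with nn.nn.nn.nn on the old cluster frees the original address first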
Related
We have a basic AKS cluster setup and we need to whitelist the AKS cluster's outbound IP address in one of our services. I scanned the AKS cluster settings in the Azure portal, but I was not able to find any outbound IP address.
How do we get the outbound IP?
Thanks -Nen
If you are using an AKS cluster with a Standard SKU Load Balancer i.e.
$ az aks show -g $RG -n akstest --query networkProfile.loadBalancerSku -o tsv
Standard
and the outboundType is set to loadBalancer i.e.
$ az aks show -g $RG -n akstest --query networkProfile.outboundType -o tsv
loadBalancer
then you should be able to fetch the outbound IP addresses for the AKS cluster like this (mind the capital IP):
$ az aks show -g $RG -n akstest --query "networkProfile.loadBalancerProfile.effectiveOutboundIPs[].id"
[
"/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/MC_xxxxxx_xxxxxx_xxxxx/providers/Microsoft.Network/publicIPAddresses/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
]
# Using $PUBLIC_IP_RESOURCE_ID obtained from the last step
$ az network public-ip show --ids $PUBLIC_IP_RESOURCE_ID --query ipAddress -o tsv
xxx.xxx.xxx.xxx
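The two steps can be combined; a sketch that assumes a single effective outbound IP (the default managed configuration):
$ PUBLIC_IP_RESOURCE_ID=$(az aks show -g $RG -n akstest \
    --query "networkProfile.loadBalancerProfile.effectiveOutboundIPs[0].id" -o tsv)
$ az network public-ip show --ids $PUBLIC_IP_RESOURCE_ID --query ipAddress -o tsv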
For more information please check Use a public Standard Load Balancer in Azure Kubernetes Service (AKS)
If you are using an AKS cluster with a Basic SKU Load Balancer i.e.
$ az aks show -g $RG -n akstest --query networkProfile.loadBalancerSku -o tsv
Basic
and the outboundType is set to loadBalancer i.e.
$ az aks show -g $RG -n akstest --query networkProfile.outboundType -o tsv
loadBalancer
Load Balancer Basic chooses a single frontend to be used for outbound flows when multiple (public) IP frontends are candidates for outbound flows. This selection is not configurable, and you should consider the selection algorithm to be random. This public IP address is only valid for the lifespan of that resource. If you delete the Kubernetes LoadBalancer service, the associated load balancer and IP address are also deleted. If you want to assign a specific IP address or retain an IP address for redeployed Kubernetes services, you can create and use a static public IP address, as @nico-meisenzahl mentioned.
The static IP address works only as long as you have one Service on the AKS cluster (with a Basic Load Balancer). When multiple addresses are configured on the Azure Load Balancer, any of these public IP addresses are a candidate for outbound flows, and one is selected at random. Thus every time a Service gets added, you will have to add that corresponding IP address to the whitelist which isn't very scalable. [Reference]
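To see which public frontends are candidates for outbound flows, you can list them on the cluster's load balancer; a sketch, assuming the AKS-managed load balancer keeps its default name kubernetes in the node resource group:
$ az network lb frontend-ip list -g MC_xxxxxx_xxxxxx_xxxxx --lb-name kubernetes -o table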
In the latter (Basic SKU) case, we would recommend setting outboundType to userDefinedRouting at the time of AKS cluster creation. If userDefinedRouting is set, AKS won't automatically configure egress paths; the egress setup must be done by you.
The AKS cluster must be deployed into an existing virtual network with a previously configured subnet, because when you are not using the standard load balancer (SLB) architecture you must establish egress explicitly. This architecture requires explicitly sending egress traffic to an appliance like a firewall, gateway, or proxy, or allowing Network Address Translation (NAT) to be done by a public IP assigned to the standard load balancer or appliance.
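A minimal sketch of creating such a cluster, assuming the VNet and subnet already exist and the placeholders are replaced with real IDs:
$ az aks create -g myResourceGroup -n myAKSCluster \
    --vnet-subnet-id "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Network/virtualNetworks/<vnet>/subnets/<subnet>" \
    --outbound-type userDefinedRouting \
    --load-balancer-sku standard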
Load balancer creation with userDefinedRouting
AKS clusters with an outbound type of UDR receive a standard load balancer (SLB) only when the first Kubernetes Service of type LoadBalancer is deployed. The load balancer is configured with a public IP address and a backend pool for inbound requests. Inbound rules are configured by the Azure cloud provider, but no outbound public IP address or outbound rules are configured as a result of having an outbound type of UDR. Your UDR remains the only source for egress traffic.
Azure load balancers don't incur a charge until a rule is placed.
Important: Using outbound type is an advanced networking scenario and requires proper network configuration.
Here are instructions to Deploy a cluster with outbound type of UDR and Azure Firewall.
You can configure AKS to route egress traffic via a load balancer (this is also the default behavior). This also lets you "use" the same outgoing IP address across multiple nodes.
More details are available here.
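If the egress address has to be a specific, pre-created IP (so it never changes), a sketch of bringing your own outbound IP to the cluster's Standard load balancer; the resource ID is a placeholder:
$ az aks update -g myResourceGroup -n myAKSCluster \
    --load-balancer-outbound-ips "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Network/publicIPAddresses/<static-pip>"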
I have a VPN between the company network 172.16.0.0/16 and GCP 10.164.0.0/24.
On GCP there is a Cassandra cluster running with 3 instances. These instances get dynamic local IP addresses - for example 10.4.7.4, 10.4.6.5, 10.4.3.4.
My issue: from the company network I cannot access the 10.4.x addresses, as the tunnel works only for 10.164.0.0/24.
I tried setting up an LB service on 10.164.0.100 with the Cassandra nodes behind it. This doesn't work: when I configure that IP address as a seed node on the local cluster, it gets a reply from one of the 10.4.x IP addresses, which it doesn't have in its seed list.
I need advice on how to set up inter-DC sync in this scenario.
The IP addresses that Kubernetes assigns to Pods and Services are internal, cluster-only addresses which are not accessible from outside the cluster. Some CNIs can connect in-cluster addresses to external networks, but I don't think that is a good idea in your case.
You need to expose your Cassandra using a Service of type NodePort or LoadBalancer. Here is another answer with the same solution from the Kubernetes GitHub.
If you add a Service of type NodePort, your Cassandra will be available on the selected port on all Kubernetes nodes.
If you choose LoadBalancer, Kubernetes will create a cloud load balancer for you which will be the entrypoint for Cassandra. Because you have a VPN to your VPC, I think you will need an internal load balancer.
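A rough sketch of such an internal LoadBalancer Service on GKE (the selector, annotation value and port are assumptions for a typical Cassandra deployment; 9042 is the CQL port, while inter-node/inter-DC gossip would additionally need 7000/7001):
$ cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: cassandra-internal
  annotations:
    # on older GKE versions the annotation is cloud.google.com/load-balancer-type: "Internal"
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: cassandra
  ports:
  - name: cql
    port: 9042
    targetPort: 9042
EOF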
I created the GKE Private Cluster via Terraform (google_container_cluster with private = true and region set) and installed the stable/openvpn Helm Chart. My setup is basically the same as described in this article: https://itnext.io/use-helm-to-deploy-openvpn-in-kubernetes-to-access-pods-and-services-217dec344f13 and I am able to see a ClusterIP-only exposed service as described in the article. However, while I am connected to the VPN, kubectl fails due to not being able to reach the master.
I left the OVPN_NETWORK setting as the default (10.240.0.0), and changed the OVPN_K8S_POD_NETWORK and subnet mask setting to the secondary range I chose when I created my private subnet that the Private Cluster lives in.
I even tried adding 10.240.0.0/16 to my master_authorized_networks_config but I'm pretty sure that setting only works on external networks (adding the external IP of a completely different OVPN server allows me to run kubectl when I'm connected to it).
Any ideas what I'm doing wrong here?
Edit: I just remembered I had to set a value for master_ipv4_cidr_block in order to create a Private Cluster. So I added 10.0.0.0/28 to the ovpn.conf file as push "route 10.0.0.0 255.255.255.240" but that didn't help. The docs on this setting state:
Specifies a private RFC1918 block for the master's VPC. The master range must not overlap with any subnet in your cluster's VPC. The master and your cluster use VPC peering. Must be specified in CIDR notation and must be /28 subnet.
but what's the implication for an OpenVPN client on a subnet outside of the cluster? How do I leverage the aforementioned VPC peering?
Figured out what the problem is: gcloud container clusters get-credentials always writes the master's external IP address to ~/.kube/config. So kubectl always talks to that external IP address instead of the internal IP.
To fix: I ran kubectl get endpoints, noted the 10.0.0.x IP and replaced the external IP in ~/.kube/config with it. Now kubectl works fine while connected to the OVPN server inside of the Kube cluster.
You can add --internal-ip to your gcloud command to automatically write the internal IP address to the ~/.kube/config file.
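A sketch of that flag in use (the cluster name and zone are placeholders):
$ gcloud container clusters get-credentials my-private-cluster \
    --zone europe-west1-b --internal-ip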
I have two Azure VMs behind a load balancer. The VMs don't have any public IP; only the LB has one static public IP address.
Sometimes a VM gets the outgoing public IP 13.93.5.128, which is not right. When I restart one VM, it gets the right IP, but the second VM gets the bad IP. It changes even without a restart.
According to this - https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-outbound-connections - I think I'm using "Load-balanced VM (no Instance Level Public IP address on VM) (SNAT)".
I'm checking the outgoing IP with dig myip.opendns.com @resolver1.opendns.com.
How can I make the outgoing IP for all VMs behind the load balancer always the same (the load balancer's)?
This may be more elaborate than your requirements call for, but if your VMs are hosted using ARM (as opposed to Classic) then you can reserve a public IP address for the load balancer. If you are unhappy with the allocated address for whatever reason, reserve and allocate a new one. A condensed CLI sketch follows the example list below.
Example
Create a resource group
Create a virtual network inside the resource group
Create a subnet inside the virtual network
Create a public IP
Create a load balancer under the resource group
Create Front-end IP pool inside the load balancer and assign the newly created public IP to it.
Create a Backend IP pool inside the load balancer
Create rules for the load balancer
Create Inbound NAT rules inside the load balancer
Create probes for the load balancer
Create a NIC under the resource group. The NIC must be in the created resource group, VNet, and subnet.
It must also be attached to the backend pool and the inbound NAT rules of the load balancer.
Create a new VM and attach the newly created NIC
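A condensed CLI sketch of the load-balancer part of the list above (names are placeholders; with a Standard SKU load balancer the public IP must also be Standard):
$ az network public-ip create -g myResourceGroup -n myLbPublicIP \
    --sku Standard --allocation-method Static
$ az network lb create -g myResourceGroup -n myLoadBalancer --sku Standard \
    --public-ip-address myLbPublicIP \
    --frontend-ip-name myFrontEnd --backend-pool-name myBackEndPool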
Reference
These are worth reading:
https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-get-started-internet-arm-ps
https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-arm
We have multiple background worker vmss that do not need a public IP to work.
I want to be able to connect to an arbitrary VM (e.g. to troubleshoot via RDP, or to collect some snapshots using a remote profiler, etc.).
When there's only one VMSS per load balancer, all works like a charm. I've set up NAT pools for each port used on the VMs and all works fine.
Now, if I try to add one more VMSS to the same load balancer (using its own NAT/backend pools), the deployment fails with the following message:
Virtual Machine /subscriptions/.../resourceGroups/.../providers/Microsoft.Compute/virtualMachines/|providers|Microsoft.Compute|virtualMachineScaleSets|...|virtualMachines|0 is using different Availability Set than other Virtual Machines connected to the Load Balancer(s) ...
As far as I know, there's no way to set an availability set for a VMSS. Are there any options other than keeping a separate load balancer/public IP for each VMSS?
UPD: I've found a similar scheme for a VM + availability set setup (see the ILB endpoint section).
Is there something like this for VMSS?
You are right, we can't change the availability set for a VMSS.
if I'm trying to add one more vmss to the same load balancer
As we know, we can't add different availability sets to a single load balancer, so we can't add two or more VMSS to the same load balancer.
Are there any options but keeping own load balancer/public ip for each VMSS?
We have multiple background worker vmss that do not need a public IP to work.
Are those VMSS in the same VNet? If yes, we can deploy a new VM in the same VNet, connect to that VM, and then use it to connect to the VMSS instances via their internal IP addresses. In this way the new VM works as a jumpbox, and we can use this jumpbox to troubleshoot.
Update:
Is it possible then to have multiple VMSS in the same VNet and assign an own public IP/load balancer to each of them?
Yes. We can create a new Azure VM with a public IP and install HAProxy on it, making this VM work as a load balancer. Add all the VMSS instances in the same VNet to the HAProxy backend pool; then we can access this VM's public IP address plus a NAT port to connect to a VMSS instance.
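A minimal sketch of that HAProxy idea (the instance IP, ports and package manager are illustrative; each port you want to reach gets its own frontend/backend pair):
$ sudo apt-get install -y haproxy
$ cat <<'EOF' | sudo tee -a /etc/haproxy/haproxy.cfg
# forward public port 50001 on this VM to RDP on one VMSS instance's private IP
frontend rdp_vmss_a_0
    bind *:50001
    mode tcp
    default_backend vmss_a_0
backend vmss_a_0
    mode tcp
    server vm0 10.0.1.4:3389 check
EOF
$ sudo systemctl restart haproxy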