Azure AKS loses Public IP Address after a couple of days

I have a Kubernetes (K8s) cluster on Azure (AKS). I configured an ingress controller (Nginx) to make my service publicly available. AKS relies on an Azure DNS Zone to create an "A" record that maps to a Public IP Address generated by Azure. The issue is that after a couple of days the IP Address disappears and the Ingress stops working.
My workaround is currently to delete the following pods each time it happens:
kubectl delete pod addon-http-application-routing-external-dns-XXXX -n kube-system
kubectl delete pod addon-http-application-routing-nginx-ingress-controller-xxxxxx -n kube-system
Does anyone know why the IP gets lost each time? Is there a permanent fix to that?
Thanks!

Related

Azure Kubernetes Service - Load Balancer not showing

I installed nginx-ingress (command below) in the AKS service and the public IP is visible in the Kubernetes service wizard, but I am unable to access the public IP.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.46.0/deploy/static/provider/cloud/deploy.yaml
And when I look for the load balancer, the weird thing is that I do not see any load balancer in the Load Balancer wizard, but the public IP is shown in the Kubernetes service.
It was working a few days back, so I tried reinstalling nginx-ingress. Now it is not working as expected. I'm kind of stuck here and help would be appreciated.
I'm not sure whether you added the namespace when you executed the command to get the details of the services. Here is the command:
kubectl get svc --namespace ingress-nginx
If that is what you did, then you need to check further, such as the events on the service. Maybe the service is stuck in the pending state or something else is wrong. Once you find the error messages, you'll know what to do next.
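For example, assuming the default namespace and service name from the v0.46.0 cloud manifest above (double-check them against your install), something like this should surface a pending state or failed events:
kubectl get svc ingress-nginx-controller --namespace ingress-nginx
kubectl describe svc ingress-nginx-controller --namespace ingress-nginx
kubectl get events --namespace ingress-nginx --sort-by=.lastTimestamp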

Calico & K8S on Azure - can't access pods

I'm starting with K8s. I installed 3 Debian 10 VMs on Azure (1 master node & 2 worker nodes).
I installed the master node with this doc:
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
I installed Calico with this one:
https://docs.projectcalico.org/getting-started/kubernetes/installation/calico#installing-with-the-kubernetes-api-datastore50-nodes-or-less
I created a simple nginx deployment:
kubectl run nginx --replicas=2 --image=nginx
I have the following pods (sazultk8s1/2 are the worker nodes):
root@itf-infra-sazultk8s0-vm:~# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
nginx-6db489d4b7-mzmnq 1/1 Running 0 12s 192.168.47.18 itf-infra-sazultk8s2-vm
nginx-6db489d4b7-sgdz7 1/1 Running 0 12s 192.168.247.115 itf-infra-sazultk8s1-vm
From the master node I can't curl to these nginx:
root@itf-infra-sazultk8s0-vm:~# curl 192.168.47.18 --connect-timeout 5
curl: (28) Connection timed out after 5001 milliseconds
root@itf-infra-sazultk8s0-vm:~# curl 192.168.247.115 --connect-timeout 5
curl: (28) Connection timed out after 5000 milliseconds
I tried from a simple busybox image:
kubectl run access --rm -ti --image busybox /bin/sh
/ # ifconfig eth0 | grep -i inet
inet addr:192.168.247.116 Bcast:0.0.0.0 Mask:255.255.255.255
/ # wget --timeout 5 192.168.247.115
Connecting to 192.168.247.115 (192.168.247.115:80)
saving to 'index.html'
index.html 100% |********************************************************************************************************| 612 0:00:00 ETA
'index.html' saved
/ # wget --timeout 5 192.168.47.18
Connecting to 192.168.47.18 (192.168.47.18:80)
wget: download timed out
From a scratch install:
can a Pod ping a Pod on another host?
is it possible to curl from the master node to a Pod on a worker node?
does Azure apply restrictions that prevent K8s from working properly?
Took me 1 week to solve it.
From the master node, you want to ping/curl Pods located on worker nodes. These Pods are part of a Deployment, itself exposed through a Service.
There are some subtleties in Azure networking which make this not work out of the box with the default Calico installation.
Steps to make Calico work on Azure
In Kubernetes, Install Calico without a networking backend.
In Azure, Enable IP forwarding on each host.
In Azure, Create UDRs (User Defined Routes).
1. Kubernetes, Install Calico without a networking backend
A) Disable Bird
By default, calico.yaml is configured to use bird as the networking backend; you have to set it to none.
Official installation step: https://docs.projectcalico.org/getting-started/kubernetes/self-managed-onprem/onpremises
Before running kubectl apply -f calico.yaml, edit the file.
Search for the variable CALICO_NETWORKING_BACKEND
We see that the value is taken from a ConfigMap.
Edit the value in the ConfigMap (located at the top of the file), to set it to none instead of the default bird.
B) Remove Bird from the readiness & liveness probes
Given that we have disabled Bird, it should also be removed from the readiness & liveness probes, otherwise the calico-node daemonset pods won't start. In the Calico manifest, comment out "- -bird-live" and "- -bird-ready".
You are done here; you can apply the file: kubectl apply -f calico.yaml
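For reference, a sketch of the two edits in calico.yaml, assuming the resource and key names of the stock manifest (verify them against your Calico version):
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  # "none" disables the bird BGP backend; Calico then only does policy + IPAM
  calico_backend: "none"
and, in the calico-node DaemonSet, the bird probe flags commented out:
livenessProbe:
  exec:
    command:
    - /bin/calico-node
    - -felix-live
    # - -bird-live
readinessProbe:
  exec:
    command:
    - /bin/calico-node
    - -felix-ready
    # - -bird-ready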
2. Azure, Enable IP forwarding on each host
For each VM in Azure:
Click on it > Networking > click on the Network Interface you have.
Click on IP Configurations
Set IP forwarding to Enabled.
Repeat for each VM, and you are done.
Note: as per the Azure doc, IP forwarding enables the virtual machine a network interface is attached to:
Receive network traffic not destined for one of the IP addresses assigned to any of the IP configurations assigned to the network interface.
Send network traffic with a different source IP address than the one assigned to one of a network interface's IP configurations.
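If you prefer the CLI, the same setting can be flipped with az (the NIC and resource group names below are placeholders for your own):
az network nic update \
  --resource-group myResourceGroup \
  --name myVmNic \
  --ip-forwarding true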
3. Azure, Create UDR (User Defined Routes)
Next, you have to create UDRs on your Azure subnet, so that Azure routes traffic destined for the Pod subnet that Calico created on a given host to the IP of that host itself. Without these routes, Azure doesn't know what to do with traffic aimed at a Calico subnet.
Then, once the traffic reaches the target node, the node knows how to route it to its underlying Pods.
First, identify the subnet created by Calico on each node.
kubectl get ipamblocks.crd.projectcalico.org \
-o jsonpath="{range .items[*]}{'podNetwork: '}{.spec.cidr}{'\t NodeIP: '}{.spec.affinity}{'\n'}{end}"
On Azure, follow the documentation on how to 'Create a route table', 'Add routes to the table', and 'Associate the route table to a subnet' (just scroll the doc; the sections are one below the other).
The final result is a route table associated with your VM subnet, containing one route per node that maps the node's Calico pod CIDR to the node's private IP as the next hop.
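A hedged az CLI sketch of the same steps (resource names are placeholders; the pod CIDRs and node IPs come from the ipamblocks output above):
az network route-table create \
  --resource-group myResourceGroup \
  --name calico-routes
# one route per node: Calico pod CIDR -> that node's private IP
az network route-table route create \
  --resource-group myResourceGroup \
  --route-table-name calico-routes \
  --name to-sazultk8s1 \
  --address-prefix <podNetwork CIDR of itf-infra-sazultk8s1-vm> \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address <private IP of itf-infra-sazultk8s1-vm>
az network vnet subnet update \
  --resource-group myResourceGroup \
  --vnet-name myVnet \
  --name mySubnet \
  --route-table calico-routes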
You are done! You should now be able to ping/curl your Pods located on other nodes.
Reference Links
All the reference links explaining the subtleties of Azure networking, and the different ways to use Calico with Azure (Network+NetworkPolicy, or NetworkPolicy only).
In particular, there are 3 ways to make Calico work on Azure.
The one we just saw, where the routes are managed by the user. This could be called "user-managed networking".
Using the Azure CNI IPAM plugin. Here we could say "Azure-managed networking": Azure allocates each Pod an IP inside the Azure subnet, so Azure knows how to route the traffic.
Calico in VXLAN mode. Here Calico wraps each packet in another packet; the wrapper only contains host IPs, so Azure knows how to route them. On reaching the target node, Calico unwraps the packet to discover the real target IP, which is a Pod IP in the Calico subnet (a hedged config sketch follows below).
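For the VXLAN variant, the knobs live in the same calico.yaml; a minimal sketch, again assuming the stock manifest's key names (verify against your Calico version):
data:
  # use the VXLAN backend instead of bird
  calico_backend: "vxlan"
and in the calico-node DaemonSet environment:
- name: CALICO_IPV4POOL_IPIP
  value: "Never"
- name: CALICO_IPV4POOL_VXLAN
  value: "Always"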
In the documentation below, there are explanations of the tradeoffs of each setup, in particular in the YouTube video.
YouTube (9 min), Kubernetes networking on Azure
Calico-Azure: official site and Git
Customizing the Calico Manifest
Vocabulary:
CNI = Container network interface
IPAM = IP address management (to allocate IP addresses)
can a Pod ping a Pod on another host?
As per the Kubernetes networking model, yes, as long as you have a CNI provider installed.
is it possible to curl from the master node to a Pod on a worker node?
You need to create either a NodePort or LoadBalancer type Service to access your Pods from outside the cluster, and for accessing Pods from the nodes; see the sketch below.
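For example, a minimal NodePort Service for the nginx deployment from the question, assuming the run=nginx label that kubectl run applies (the name and nodePort are illustrative):
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport
spec:
  type: NodePort
  selector:
    run: nginx
  ports:
  - port: 80        # service port inside the cluster
    targetPort: 80  # container port
    nodePort: 30080 # port opened on every node
After applying it, curl <any-node-IP>:30080 should reach one of the nginx Pods.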
does Azure apply restrictions that prevent K8s from working properly?
There may be firewalls restricting traffic between VMs.

Azure Kubernetes (AKS) does not release public IPs

We are using managed Kubernetes in Azure (AKS) and have run out of public IP addresses. We only need one, but AKS creates a new public IP every time we deploy a service and it does not delete it when the service is deleted. For example:
apiVersion: v1
kind: Service
metadata:
  name: somename
spec:
  ports:
  - port: 443
    targetPort: 443
  selector:
    app: somename
  # Also tried this to reuse a public IP in the AKS MC resource group
  # https://learn.microsoft.com/en-my/azure/aks/static-ip
  # loadBalancerIP: x.x.x.x
  type: LoadBalancer
Every time this is deployed (kubectl create -f svc.yml) a new public IP is created. When it is deleted (kubectl delete -f svc.yml) the IP remains. Trying to reuse one of the existing IPs with loadBalancerIP as in the comments above fails with "Public ip address ... is referenced by multiple ipconfigs in resource ...".
We have created a service request but it takes ages, so I'm hoping this will be faster. I don't dare to just delete the public IPs in the AKS managed resource as that may cause issues down the line.
Is there a way to safely release or reuse the public IPs? We are using Kubernetes version 1.7.12. We have deleted the deployment referenced by the service as well; it makes no difference.
It should delete the IPs after some time (5 minutes tops), so the issue you are having is a bug. You can check the k8s events to find the error and look into it.
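For example (service name from the manifest above):
kubectl describe svc somename
kubectl get events --sort-by=.lastTimestamp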
Also, it's perfectly safe to delete the Azure resources; k8s won't freak out if they are gone.
Tested with k8s 1.9.1
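If you eventually do want one reusable address, the pattern from the static-ip doc linked in the question is to create a static public IP yourself in the cluster's MC_ resource group and reference it via loadBalancerIP; a rough az CLI sketch (resource group and names are placeholders):
az network public-ip create \
  --resource-group MC_myResourceGroup_myAKSCluster_westeurope \
  --name myAKSPublicIP \
  --allocation-method static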

Setup different kubernetes cluster

I created an Azure Kubernetes cluster via "az acs create".
I exec into an existing Pod:
kubectl exec -it mypod /bin/bash
and run curl http://myexternlip.com/raw
The IP I get is the public IP address of the k8s-agents... So far so good.
Now I create an Azure Kubernetes cluster via "acs-engine"
and run the same "exec" as mentioned above...
Now I can't find the IP in any azure component. Neither in the agents, nor in the load balancers.
Where is this IP configured?
Regards,
saromba
This should have the information you're looking for:
https://learn.microsoft.com/en-us/azure/container-service/kubernetes/container-service-kubernetes-load-balancing
In short, when you expose a service with type=LoadBalancer, ACS creates a Public IP address resource that K8s uses.
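For example (the deployment name is illustrative):
kubectl expose deployment mydeployment --type=LoadBalancer --port=80
kubectl get svc mydeployment
The EXTERNAL-IP column then shows the address backed by the Public IP resource that ACS created.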

Kubernetes: access "public" urls from within a pod

First of all, I am not an expert in K8s; I understand some of the concepts and have already gotten my hands dirty with the configuration.
I correctly set up the cluster configured by my company, but I have this issue:
I am working on a cluster with 2 pods, ingress rules are correctly configured for www.my-app.com and dashboard.my-app.com.
Both pods run on the same VM.
If I enter in the dashboard pod (kubectl exec -it $POD bash) and try to curl http://www.my-app.com I land on the dashboard pod again (the same happens all the way around, from www to dashboard).
I have to use http://www-svc.default.svc.cluster.local and http://dashboard-svc.default.svc.cluster.local to land on the correct pods, but this is a problem (links generated by the other app will contain the internal k8s host instead of the "public" url).
Is there a way to configure routing so I can access pods with their "public" hostnames, from the pods themselves?
So what should happen when you curl is that the external DNS record (www.my-app.com in this case) resolves to your external IP address, usually a load balancer, which then sends traffic to a Kubernetes service. That service should then send traffic to the appropriate pod. It would seem that you have a misconfigured service. Make sure your service has an external IP that differs between dashboard and www; a simple kubectl get svc should suffice to see this. My guess is that the external IP is wrong, or the service is pointing to the wrong pod, which you can see with a kubectl describe svc <name of service>.
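For example, with the service names from the question:
kubectl get svc
kubectl describe svc www-svc
kubectl describe svc dashboard-svc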
