I installed the NGINX ingress controller on an AKS cluster, but I am not able to access the ingress endpoints exposed by our app. From an initial analysis, the ingress endpoints have been assigned the external IP of one of the nodes, whereas the ingress controller service has a different IP.
What am I doing wrong?
$kubectl get pods --all-namespaces | grep ingress
kube-system ingress-nginx-58ftggg-4xc56 1/1 Running
$kubectl get svc
kubernetes   ClusterIP   172.16.0.1   <none>   443/TCP
$kubectl get ingress
vault-ingress-documentation 10.145.13.456
$kubectl describe ingress vault-ingress-documentation
Name:             vault-ingress-documentation
Namespace:        corebanking
Address:          10.145.13.456
Default backend:  default-http-backend:80 (<error: default-http-backend:80 not found>)
$kubectl get services -n kube-system | grep ingress
ingress-nginx   LoadBalancer   172.16.160.33   10.145.13.456   80:30389/TCP,443:31812/TCP
I tried to reproduce the same in my environment and got the results below.
I created a deployment, which creates a ReplicaSet and pods on the nodes, and we can see the pods are up and running:
kubectl create -f deployment.yaml
Then I created the service so the deployment can be reached inside the cluster via a ClusterIP:
kubectl create -f service.yaml
To expose the application externally, I created ingress rules in an ingress.yaml file and added the ingress class annotation like below:
annotations:
  kubernetes.io/ingress.class: nginx
At this point the ingress rule is created with an empty address.
When I try to access the application, it is not reachable. To access it, add the load balancer IP together with the domain name to /etc/hosts.
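A hypothetical /etc/hosts entry (both the IP and the hostname below are placeholders, not values from this environment) would look like:
20.240.0.10   myapp.example.com   # <load balancer IP>   <ingress host>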
Now I am able to connect to the application via the service IP, but it is still not exposed on an external IP.
To expose it externally, I added the annotations below (a full ingress manifest sketch follows after this snippet):
annotations:
  kubernetes.io/ingress.class: nginx
  nginx.ingress.kubernetes.io/rewrite-target: /
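For reference, a minimal ingress.yaml along these lines might look as follows; the host, service name, and port are placeholders and not taken from the original post:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: myapp.example.com          # placeholder host (the one added to /etc/hosts)
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp-service      # placeholder Service created by service.yaml
            port:
              number: 80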
After that, I applied the changes in the ingress_rules.yaml file:
kubectl replace --force -f ingress_rules.yaml
OR
kubectl create -f ingress_rules.yaml
Now the ingress shows an address, and using it I am able to access the application.
I have an AKS cluster configured with an ingress-nginx internal ingress controller of class nginx-internal. This creates an internal load balancer (ILB) with a private IP. We then create a few ingress objects using the ingress class nginx-internal, and these ingress objects get assigned the ILB's private IP as their (external) address. So far so good.
Now we upgraded the ingress-nginx internal ingress controller (to version v1.2.0 from 0.49.0, as we had to upgrade to Kubernetes v1.22.6), and this appears to have caused the ILB's IP address to change. To our surprise, the ingress objects still have the old IPs assigned and not the new ones.
I would have thought the ingress controller would figure this out and update the IP addresses on all the ingress objects that it tracks.
Any help/explanation on what may have gone wrong?
The recommended way to deploy ingress-nginx on the new version is to use Helm. This would ensure the new IPs are used.
NAMESPACE=ingress-basic
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
--create-namespace \
--namespace $NAMESPACE \
--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz
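Once the chart is installed, a quick way to confirm the new external IP is in use (the controller Service name below comes from the Helm release above) is:
kubectl get service ingress-nginx-controller --namespace $NAMESPACE
# the EXTERNAL-IP shown there should now also appear in the ADDRESS column of:
kubectl get ingress --all-namespaces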
Check the Azure docs here https://learn.microsoft.com/en-us/azure/aks/ingress-basic?tabs=azure-cli
How does AKS make a not-ready pod unavailable to accept requests? Does that only work if you have a Service in front of the deployment?
I'd like to start by explaining what I noticed in AKS that is not configured with Azure CNI, and then go on to what I have been seeing in AKS with Azure CNI enabled.
In AKS without Azure CNI, if I curl a not-ready pod behind a service, for example curl -I some-pod.some-service.some-namespace.svc.cluster.local:8080, the response is an unresolvable hostname (or something like that). My understanding is that DNS doesn't have this entry, and that this is how AKS normally keeps not-ready pods from receiving requests.
In AKS with Azure CNI enabled, if I execute the same request against a not-ready pod, the hostname is resolved and the request reaches the pod. There is one caveat: when I send a request through the external private IP of the service, it does not reach the not-ready pod, which is expected and seems right. But when I run the request above, curl -I some-pod.some-service.some-namespace.svc.cluster.local:8080, it works even though it shouldn't. Why does DNS in the Azure CNI case have that record?
Is there anything I can do to configure Azure CNI to behave more like the default AKS behavior, where such a curl request either fails to resolve the hostname or refuses the connection?
Assuming that a not-ready pod refers to a pod whose readiness probe is failing: the kubelet uses readiness probes to know when a container is ready to start accepting traffic, and a Pod is considered ready when all of its containers are ready. One use of this signal is to control which Pods are used as backends for Services; when a Pod is not ready, it is removed from Service load balancers. [Reference]
However, the logic determining the readiness of the pod may or may not have anything to do with whether the pod can serve requests; it depends entirely on the probe the user defines.
For instance, consider a Pod with the following manifest:
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: readiness
  name: readiness-pod
spec:
  containers:
  - name: readiness-container
    image: nginx
    readinessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5
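The readiness-svc Service used below is not shown in the original; a minimal manifest consistent with the kubectl describe output further down (a reconstruction, not the author's file) would be:
apiVersion: v1
kind: Service
metadata:
  name: readiness-svc
  labels:
    test: readiness
spec:
  selector:
    test: readiness
  ports:
  - port: 80
    targetPort: 80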
Readiness is decided based on the existence of the file /tmp/healthy, irrespective of whether nginx is serving the application. So, after running the pod and exposing it using the service readiness-svc (sketched above), we can check:
kubectl exec readiness-pod -- /bin/bash -c 'if [ -f /tmp/healthy ]; then echo "/tmp/healthy file is present";else echo "/tmp/healthy file is absent";fi'
/tmp/healthy file is absent
kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
readiness-pod 0/1 Running 0 11m 10.240.0.28 aks-nodepool1-29819654-vmss000000 <none> <none>
source-pod 1/1 Running 0 6h8m 10.240.0.27 aks-nodepool1-29819654-vmss000000 <none> <none>
kubectl describe svc readiness-svc
Name: readiness-svc
Namespace: default
Labels: test=readiness
Annotations: <none>
Selector: test=readiness
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.0.23.194
IPs: 10.0.23.194
Port: <unset> 80/TCP
TargetPort: 80/TCP
Endpoints:
Session Affinity: None
Events: <none>
kubectl exec -it source-pod -- bash
root@source-pod:/# curl -I readiness-svc.default.svc.cluster.local:80
curl: (7) Failed to connect to readiness-svc.default.svc.cluster.local port 80: Connection refused
root@source-pod:/# curl -I 10-240-0-28.default.pod.cluster.local:80
HTTP/1.1 200 OK
Server: nginx/1.21.3
Date: Mon, 13 Sep 2021 14:50:17 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 07 Sep 2021 15:21:03 GMT
Connection: keep-alive
ETag: "6137835f-267"
Accept-Ranges: bytes
Thus, we can see that when we try to connect from source-pod to the service readiness-svc.default.svc.cluster.local on port 80, connection is refused. This is because the kubelet did not find the /tmp/healthy file in the readiness-pod container to perform a cat operation, consequently marking the Pod readiness-pod not ready to serve traffic and removing it from the backend of the Service readiness-svc. However, the nginx server on the pod can still serve a web application and it will continue to do so if you connect directly to the pod.
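As a quick sanity check (these commands are suggestions, not part of the original test), creating the probe file should make the kubelet mark the pod ready, after which the Service lists it as an endpoint again:
kubectl exec readiness-pod -- touch /tmp/healthy
# wait one probe period (5s), then the pod should report READY 1/1
kubectl get pod readiness-pod
# and the pod IP should reappear under the Service endpoints
kubectl get endpoints readiness-svc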
Readiness probe failures of containers do not remove the DNS records of Pods; the DNS records of a Pod share their lifespan with the Pod itself.
This behavior is characteristic of Kubernetes and does not change with network plugins. We have attempted to reproduce the issue and have observed the same behavior on AKS clusters using both the kubenet and Azure CNI network plugins.
I have two ingress controllers deployed in two different namespaces in an Azure Kubernetes (AKS) cluster:
ingress-A ingress-nginx-controller LoadBalancer 10.0.131.22 20.xx.xx.xx 80:31788/TCP,443:30605/TCP 89s
ingress-A ingress-nginx-controller-admission ClusterIP 10.0.171.187 <none> 443/TCP 89s
ingress-B ingress-nginx-controller LoadBalancer 10.0.61.156 52.xx.xx.xx 80:31966/TCP,443:30125/TCP 18m
ingress-B ingress-nginx-controller-admission ClusterIP 10.0.97.78 <none> 443/TCP 18m
I already have two static IPs assigned to my domain which I would like to use instead of the ones the AKS cluster generated.
I tried to figure out how I can update these IPs to mine, but I couldn't find a way.
I have tried this:
kubectl patch svc ingress-nginx-controller -n ingress-nginx-iot -p '{"status": {"loadBalancer": {"ingress":{"ip":"my new ip address"}}}}'
I got this error:
The request is invalid: patch: Invalid value: "map[status:map[loadBalancer:map[ingress:map[ip:20.76.109.236]]]]": cannot restore slice from map
I also tried to modify them from the Azure portal, but that didn't work either.
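As a note on the error itself (an observation only, not a confirmed fix for assigning the static IPs): status.loadBalancer.ingress is a list in the Kubernetes API, so the patch would at least need list syntax, e.g.:
kubectl patch svc ingress-nginx-controller -n ingress-nginx-iot \
  -p '{"status": {"loadBalancer": {"ingress": [{"ip": "<your-static-ip>"}]}}}'
Even with valid syntax, the status field is managed by the cloud controller and is typically reconciled back, so patching it is unlikely to bind the service to the desired static IPs.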
I want to deny the container access to the public internet.
After long research, I found this example, "DENY external egress traffic":
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: foo-deny-external-egress
spec:
  podSelector:
    matchLabels:
      app: foo
  policyTypes:
  - Egress
  egress:
  - ports:
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP
    to:
    - namespaceSelector: {}
But it does not work: I ran wget https://google.com and got a successful response.
Any hint is appreciated.
The network policy works fine with Calico, but has no effect on a cluster using the Flannel network plugin.
As mentioned in this link, Flannel is focused on networking; for network policy, other projects such as Calico can be used.
I have two clusters, one using Flannel and one using Calico, and the policy blocks traffic as expected only on the Calico cluster.
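To check which CNI a given cluster is actually running (a generic check, not part of the original answer), you can look at the kube-system pods:
kubectl get pods -n kube-system -o wide | grep -E 'calico|flannel'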
Logs:
$ kubectl apply -f networkpolicy.yaml
networkpolicy.networking.k8s.io/foo-deny-external-egress created
ubuntu@calico-master-1:~$ kubectl run busybox --image=busybox --restart=Never -- sleep 3600
pod/busybox created
$ kubectl get networkpolicy
NAME POD-SELECTOR AGE
foo-deny-external-egress run=busybox 30m
$ kubectl exec -it busybox -- /bin/sh
/ # wget https://google.com
Connecting to google.com (74.125.193.102:443)
wget: can't connect to remote host (74.125.193.102): Connection timed out
On the cluster using Flannel, creating the network policy has no effect:
ubuntu@k8s-flannel:~$ kubectl exec -it busybox -- /bin/sh
/ # curl https://google.com
/bin/sh: curl: not found
/ # wget https://google.com
Connecting to google.com (216.58.207.238:443)
wget: note: TLS certificate validation not implemented
Connecting to www.google.com (172.217.20.36:443)
saving to 'index.html'
index.html 100% |*************************************************************************************************************************************************| 12498 0:00:00 ETA
'index.html' saved
Apply policy
$ kubectl apply -f networkpolicy.yaml
networkpolicy.networking.k8s.io/foo-deny-external-egress created
$ kubectl get networkpolicy
NAME POD-SELECTOR AGE
foo-deny-external-egress run=busybox 32m
We can still reach the outside network, since the network policy has no effect with Flannel:
$ kubectl exec -it busybox -- /bin/sh
/ # wget https://google.com
Connecting to google.com (172.217.22.174:443)
wget: note: TLS certificate validation not implemented
Connecting to www.google.com (172.217.20.36:443)
saving to 'index.html'
index.html 100% |*************************************************************************************************************************************************| 12460 0:00:00 ETA
'index.html' saved
The default CNI on EKS doesn't support network policies. You should either
Install calico as described in https://docs.aws.amazon.com/eks/latest/userguide/calico.html or
Use security group policies for pods https://docs.aws.amazon.com/eks/latest/userguide/security-groups-for-pods.html (see the sketch below)
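For the second option, a SecurityGroupPolicy roughly like the following (the label and security group ID are placeholders) would associate the pods with an EC2 security group whose outbound rules deny internet access:
apiVersion: vpcresources.k8s.aws/v1beta1
kind: SecurityGroupPolicy
metadata:
  name: restrict-egress
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: foo                   # placeholder label
  securityGroups:
    groupIds:
    - sg-0123456789abcdef0      # placeholder security group with no outbound internet rule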
I'm using Kubernetes v1.0.6 on AWS, deployed using kube-up.sh.
The cluster is using kube-dns.
$ kubectl get svc kube-dns --namespace=kube-system
NAME LABELS SELECTOR IP(S) PORT(S)
kube-dns k8s-app=kube-dns,kubernetes.io/cluster-service=true,kubernetes.io/name=KubeDNS k8s-app=kube-dns 10.0.0.10 53/UDP
Which works fine.
$ kubectl exec busybox -- nslookup kubernetes.default
Server: 10.0.0.10
Address 1: 10.0.0.10 ip-10-0-0-10.eu-west-1.compute.internal
Name: kubernetes.default
Address 1: 10.0.0.1 ip-10-0-0-1.eu-west-1.compute.internal
This is the resolv.conf of a pod.
$ kubectl exec busybox -- cat /etc/resolv.conf
nameserver 10.0.0.10
nameserver 172.20.0.2
search default.svc.cluster.local svc.cluster.local cluster.local eu-west-1.compute.internal
Is it possible to have the containers use an additional nameserver?
I have a secondary DNS-based service discovery (on, let's say, 192.168.0.1) that I would like my Kubernetes containers to be able to use for DNS resolution.
P.S. A Kubernetes 1.1 solution would also be acceptable :)
Thank you very much in advance,
George
The DNS addon README has some details on this. Basically, the pod will inherit the resolv.conf setting of the node it is running on, so you could add your extra DNS server to the nodes' /etc/resolv.conf. The kubelet also takes a --resolv-conf argument that may provide a more explicit way for you to inject the extra DNS server. I don't see that flag documented anywhere yet, however.
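For illustration only (the file path is hypothetical and the exact flags depend on how the node was provisioned), that could look like:
# on each node: a resolv.conf listing both the default and the extra DNS server
cat > /etc/kubernetes/resolv.conf <<EOF
nameserver 172.20.0.2
nameserver 192.168.0.1
EOF
# then start the kubelet with, in addition to its existing flags:
kubelet --resolv-conf=/etc/kubernetes/resolv.conf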
In Kubernetes (probably) 1.2 we'll be moving to a model where nameservers are assumed to be fungible. There are too many resolvers that break when different nameservers serve different subsets of DNS, and there is no real specification here that we can point to.
In other words, we'll start dropping the host's nameserver records from the container's merged resolv.conf and making our own DNS server the only nameserver line. Our DNS will be able to forward requests to upstream nameservers.
I eventually managed to solve this pretty easily by configuring SkyDNS to add an additional nameserver: you can just add the environment variable SKYDNS_NAMESERVERS, as defined in the SkyDNS docs, to your SkyDNS replication controller. It has minimal impact and does not depend on node changes etc.
env:
- name: SKYDNS_NAMESERVERS
  value: 10.0.0.254:53,10.0.64.254:53
For those using Kubernetes kube-dns, neither the flag -nameservers nor the environment variable SKYDNS_NAMESERVERS is available any longer.
Usage of /kube-dns:
--alsologtostderr log to standard error as well as files
--config-map string config-map name. If empty, then the config-map will not used. Cannot be used in conjunction with federations flag. config-map contains dynamically adjustable configuration.
--config-map-namespace string namespace for the config-map (default "kube-system")
--dns-bind-address string address on which to serve DNS requests. (default "0.0.0.0")
--dns-port int port on which to serve DNS requests. (default 53)
--domain string domain under which to create names (default "cluster.local.")
--healthz-port int port on which to serve a kube-dns HTTP readiness probe. (default 8081)
--kube-master-url string URL to reach kubernetes master. Env variables in this flag will be expanded.
--kubecfg-file string Location of kubecfg file for access to kubernetes master service; --kube-master-url overrides the URL part of this; if neither this nor --kube-master-url are provided, defaults to service account tokens
--log-backtrace-at traceLocation when logging hits line file:N, emit a stack trace (default :0)
--log-dir string If non-empty, write log files in this directory
--log-flush-frequency duration Maximum number of seconds between log flushes (default 5s)
--logtostderr log to standard error instead of files (default true)
--stderrthreshold severity logs at or above this threshold go to stderr (default 2)
-v, --v Level log level for V logs
--version version[=true] Print version information and quit
--vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
Now, either you put your nameservers in the host's resolv.conf, so DNS is inherited from the node, or you use a custom resolv.conf and pass it to the kubelet with the flag --resolv-conf, as explained here.
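Alternatively, since the help output above lists a --config-map flag, kube-dns can be given upstream/stub nameservers through its ConfigMap; a sketch assuming the default kube-dns deployment in kube-system (the domain and IPs are placeholders):
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  # forward a specific internal domain to the secondary DNS server
  stubDomains: |
    {"internal.example.com": ["192.168.0.1"]}
  # resolvers used for all other external names
  upstreamNameservers: |
    ["172.20.0.2"]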
You need to know the IP of your CoreDNS service to set it as a secondary DNS.
Run this command to get the CoreDNS IP:
kubectl -n kube-system get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 172.20.0.10 <none> 53/UDP,53/TCP 43d
metrics-server ClusterIP 172.20.232.147 <none> 443/TCP 43d
This is how I set up DNS in my deployment YAML.
I show the Google DNS IP (for clarity) and my CoreDNS IP, but you should use your VPC DNS and your CoreDNS server.
containers:
- name: nginx
  image: nginx
  ports:
  - containerPort: 8080
dnsPolicy: None
dnsConfig:
  nameservers:
  - 8.8.8.8
  - 172.20.0.10
  searches:
  - 1b.svc.cluster.local
  - svc.cluster.local
  - cluster.local
  - ec2.internal
  options:
  - name: ndots
    value: "5"