AKS http-application-routing-nginx-ingress-controller Port 80 is already in use - azure

I have two AKS Kubernetes clusters (version 1.11.1, in West and North Europe) with the http-application-routing addon enabled. Suddenly today the pod named addon-http-application-routing-nginx-ingress-controller-xxxx crashed and shows this state:
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 255
kubectl logs addon-http-application-routing-nginx-ingress-controller-xxxx
shows:
I1003 20:21:21.129694 7 flags.go:162] Watching for ingress class: addon-http-application-routing
W1003 20:21:21.129745 7 flags.go:165] only Ingress with class "addon-http-application-routing" will be processed by this ingress controller
F1003 20:21:21.129819 7 main.go:59] Port 80 is already in use. Please check the flag --http-port
If I connect to any node in either cluster and check the open ports with netstat -latun, it shows no service listening on port 80.
Node restart didn't help.
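For reference, the check on a node looked roughly like this (a sketch; run it on the node itself, e.g. over SSH):

sudo netstat -latun | grep ':80 '   # nothing should show up bound to port 80
sudo ss -ltnp | grep ':80'          # same check with ss, which also shows the owning process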

I just killed the affected node and it started working again. Here's a link where a similar solution also worked:
https://github.com/kubernetes/ingress-nginx/issues/3177
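A hedged sketch of how one might recycle an affected node (the node name is a placeholder; flag names may vary with your kubectl version):

# Drain the node and remove it from the cluster
kubectl drain <affected-node-name> --ignore-daemonsets --delete-local-data
kubectl delete node <affected-node-name>
# Then delete or reimage the underlying VM in the node resource group so AKS replaces it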

Related

Kubernetes ingress "an error on the server ("") has prevented the request from succeeding"

I have a managed Azure cluster (AKS) with an nginx ingress in it.
It was working fine, but now the nginx ingress has stopped working:
# kubectl -v=7 logs nginx-ingress-<pod-hash> -n nginx-ingress
GET https://<PRIVATE-IP-SVC-Kubernetes>:443/version?timeout=32s
I1205 16:59:31.791773 9 round_trippers.go:423] Request Headers:
I1205 16:59:31.791779 9 round_trippers.go:426] Accept: application/json, */*
Unexpected error discovering Kubernetes version (attempt 2): an error on the server ("") has prevented the request from succeeding
# kubectl describe svc kubernetes
Name: kubernetes
Namespace: default
Labels: component=apiserver
provider=kubernetes
Annotations: <none>
Selector: <none>
Type: ClusterIP
IP: <PRIVATE-IP-SVC-Kubernetes>
Port: https 443/TCP
TargetPort: 443/TCP
Endpoints: <PUBLIC-IP-SVC-Kubernetes>:443
Session Affinity: None
Events: <none>
When I tried to curl https://<PRIVATE-IP-SVC-Kubernetes>:443/version?timeout=32s, I always saw the same output:
curl: (35) SSL connect error
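To see what is happening at the TLS layer, a diagnostic sketch (run from a node or pod inside the cluster, substituting the real service IP):

# Inspect the TLS handshake against the API server service IP
openssl s_client -connect <PRIVATE-IP-SVC-Kubernetes>:443 </dev/null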
On my OCP 4.7 (OpenShift Container Platform) cluster with 3 master and 2 worker nodes, the following error appears after kubectl and oc commands.
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.1-5-g76a04fc", GitCommit:"e29b355", GitTreeState:"clean", BuildDate:"2021-06-03T21:19:58Z", GoVersion:"go1.15.7", Compiler:"gc", Platform:"linux/amd64"}
Error from server (InternalError): an error on the server ("") has prevented the request from succeeding
$ oc get nodes
Error from server (InternalError): an error on the server ("") has prevented the request from succeeding
Also, when I tried to log in to the OCP dashboard, the following error occurred:
error_description": "The authorization server encountered an unexpected condition that prevented it from fulfilling the request
I restarted all of the master node machines and then the problem was solved.
I faced the same issue with a three-manager cluster that I was accessing through the UCP client bundle. I figured out that 2 of the 3 manager nodes were in NotReady state. On debugging further, I found a disk-space issue on those NotReady boxes. After cleaning up a little (mainly the /var folder) and restarting Docker, those nodes came back to Ready and I'm no longer getting this error.
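A rough outline of that cleanup (a sketch, assuming Docker is the container runtime and /var is what filled up; adapt it to whatever is actually consuming the space):

df -h /var                          # confirm the disk-space problem
sudo docker system prune -af        # reclaim space from unused images and containers
sudo journalctl --vacuum-size=200M  # trim old journal logs
sudo systemctl restart docker
kubectl get nodes                   # the node should return to Ready shortly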
On Windows: edit the hosts file (vi /etc/hosts) and replace the line with:
127.0.0.1 ~/.kube/config
Worked for me!

Error: forwarding ports: error upgrading connection: error dialing backend: - Azure Kubernetes Service

We have upgraded our Kubernetes Service cluster on Azure to the latest version, 1.12.4. After that we suddenly noticed that pods and nodes can no longer communicate with each other by private IP:
kubectl get pods -o wide -n kube-system -l component=kube-proxy
NAME READY STATUS RESTARTS AGE IP NODE
kube-proxy-bfhbw 1/1 Running 2 16h 10.0.4.4 aks-agentpool-16086733-1
kube-proxy-d7fj9 1/1 Running 2 16h 10.0.4.35 aks-agentpool-16086733-0
kube-proxy-j24th 1/1 Running 2 16h 10.0.4.97 aks-agentpool-16086733-3
kube-proxy-x7ffx 1/1 Running 2 16h 10.0.4.128 aks-agentpool-16086733-4
As you can see, the node aks-agentpool-16086733-0 has private IP 10.0.4.35. When we try to check logs of pods running on this node, we get this error:
Get https://aks-agentpool-16086733-0:10250/containerLogs/emw-sit/nginx-sit-deploy-864b7d7588-bw966/nginx-sit?tailLines=5000&timestamps=true: dial tcp 10.0.4.35:10250: i/o timeout
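A quick reachability check for the kubelet port in that error (a sketch; run from another node or a debug pod, and note that nc may need to be installed):

# Is the kubelet port on the affected node reachable at all?
nc -vz -w 5 10.0.4.35 10250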
We have Tiller (Helm) on this node as well, and if we try to connect to Tiller we get this error from the client PC:
shmits-imac:~ andris.shmits01$ helm version
Client: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}
Error: forwarding ports: error upgrading connection: error dialing backend: dial tcp 10.0.4.35:10250: i/o timeout
Does anybody have any idea why the pods and nodes lost connectivity by private IP?
So, after we scaled the cluster down from 4 nodes to 2, the problem disappeared. And after we scaled back up from 2 nodes to 4, everything kept working fine.
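For reference, scaling an AKS node pool down and back up can be done with az aks scale (a sketch; the resource group, cluster name and node counts are placeholders):

az aks scale --resource-group <rg> --name <cluster> --node-count 2
az aks scale --resource-group <rg> --name <cluster> --node-count 4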
The issue could be with the apiserver. Did you check the logs from the apiserver pod?
Can you run the command below inside the cluster? Do you get a 200 OK response?
curl -k -v https://10.96.0.1/version
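If you don't have a shell on a node, one way to run that check from inside the cluster is a throwaway curl pod (a sketch; the curlimages/curl image is an assumption, and the service IP may differ in your cluster):

kubectl run api-check --rm -it --restart=Never --image=curlimages/curl --command -- \
  curl -k -sv https://10.96.0.1/version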
These issues come up when nodes in a Kubernetes cluster created with kubeadm do not get internal IP addresses that match the actual node/machine IPs.
Issue: if I run the helm list command from my cluster, I get the error below:
helm list
Error: forwarding ports: error upgrading connection: unable to upgrade connection: pod does not exist
kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k-master Ready master 3h10m v1.18.5 10.0.0.5 <none> Ubuntu 18.04.3 LTS 4.15.0-58-generic docker://19.3.12
k-worker01 Ready <none> 179m v1.18.5 10.0.0.6 <none> Ubuntu 18.04.3 LTS 4.15.0-58-generic docker://19.3.12
k-worker02 Ready <none> 167m v1.18.5 10.0.2.15 <none> Ubuntu 18.04.3 LTS 4.15.0-58-generic docker://19.3.12
Please note: k-worker02 has internal IP 10.0.2.15, but I was expecting 10.0.0.7, which is the node/machine IP.
Solution:
Step 1: Connect to the host (here k-worker02) that does not have the expected IP
Step 2: Open the file below
sudo vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Step 3: Edit the ExecStart line and append --node-ip 10.0.0.7, as in the snippet below:
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS --node-ip 10.0.0.7
Step 4: Reload the daemon and restart the kubelet service
sudo systemctl daemon-reload && sudo systemctl restart kubelet
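As a side note, on kubeadm installs that source /etc/default/kubelet (or /etc/sysconfig/kubelet on RPM-based distros), an equivalent sketch is to set the flag there instead of editing the drop-in's ExecStart line:

# /etc/default/kubelet
KUBELET_EXTRA_ARGS=--node-ip=10.0.0.7

# then reload and restart as in Step 4
sudo systemctl daemon-reload && sudo systemctl restart kubelet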
Result:
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k-master Ready master 3h36m v1.18.5 10.0.0.5 <none> Ubuntu 18.04.3 LTS 4.15.0-58-generic docker://19.3.12
k-worker01 Ready <none> 3h25m v1.18.5 10.0.0.6 <none> Ubuntu 18.04.3 LTS 4.15.0-58-generic docker://19.3.12
k-worker02 Ready <none> 3h13m v1.18.5 10.0.0.7 <none> Ubuntu 18.04.3 LTS 4.15.0-58-generic docker://19.3.12
With the above solution, the k-worker02 node has got the expected IP (10.0.0.7), and the "forwarding ports" error no longer appears for the helm list or helm install commands.
Reference: https://networkinferno.net/trouble-with-the-kubernetes-node-ip

New AKS cluster unreachable via network (including dashboard)

Yesterday I spun up an Azure Kubernetes Service cluster running a few simple apps. Three of them have exposed public IPs that were reachable yesterday.
As of this morning I can't get the dashboard tunnel to work or reach the LoadBalancer IPs themselves.
I was asked by the Azure twitter account to solicit help here.
I don't know how to troubleshoot this apparent network issue - only az seems to be able to touch my cluster.
dashboard error log
❯❯❯ make dashboard ~/c/azure-k8s (master)
az aks browse --resource-group=akc-rg-cf --name=akc-237
Merged "akc-237" as current context in /var/folders/9r/wx8xx8ls43l8w8b14f6fns8w0000gn/T/tmppst_atlw
Proxy running on http://127.0.0.1:8001/
Press CTRL+C to close the tunnel...
error: error upgrading connection: error dialing backend: dial tcp 10.240.0.4:10250: getsockopt: connection timed out
service+pod listing
❯❯❯ kubectl get services,pods ~/c/azure-k8s (master)
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
azure-vote-back ClusterIP 10.0.125.49 <none> 6379/TCP 16h
azure-vote-front LoadBalancer 10.0.185.4 40.71.248.106 80:31211/TCP 16h
hubot LoadBalancer 10.0.20.218 40.121.215.233 80:31445/TCP 26m
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 19h
mti411-web LoadBalancer 10.0.162.209 52.168.123.30 80:30874/TCP 26m
NAME READY STATUS RESTARTS AGE
azure-vote-back-7556ff9578-sjjn5 1/1 Running 0 2h
azure-vote-front-5b8878fdcd-9lpzx 1/1 Running 0 16h
hubot-74f659b6b8-wctdz 1/1 Running 0 9s
mti411-web-6cc87d46c-g255d 1/1 Running 0 26m
mti411-web-6cc87d46c-lhjzp 1/1 Running 0 26m
http failures
❯❯❯ curl --connect-timeout 2 -I http://40.121.215.233 ~/c/azure-k8s (master)
curl: (28) Connection timed out after 2005 milliseconds
❯❯❯ curl --connect-timeout 2 -I http://52.168.123.30 ~/c/azure-k8s (master)
curl: (28) Connection timed out after 2001 milliseconds
If you are getting getsockopt: connection timed out while trying to access your AKS dashboard, I think deleting the tunnelfront pod will help: once you delete it, the master triggers creation of a new tunnelfront pod. It's something I have tried and it worked for me.
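A minimal sketch of that deletion (assuming the tunnelfront pod lives in kube-system and its name starts with tunnelfront):

kubectl get pods -n kube-system | grep tunnelfront         # find the pod name
kubectl delete pod -n kube-system <tunnelfront-pod-name>   # the master recreates it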
@daniel Did rebooting the agent VMs solve your issue, or are you still seeing issues?

Azure aks no nodes found

I created an Azure AKS cluster with 3 nodes (Standard DS3 v2: 4 vCPUs, 14 GB memory). I was fiddling with the cluster and created a Deployment with 1000 replicas. After this the complete cluster went down.
azureuser@saa:~$ k get cs
NAME STATUS MESSAGE ERROR
controller-manager Unhealthy Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: getsockopt: connection refused
scheduler Unhealthy Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: getsockopt: connection refused
etcd-0 Healthy {"health": "true"}
From debugging it seems both the scheduler and controller-manager went down. How do I fix this?
What exactly happened when I created a Deployment with 1000 replicas? Shouldn't that be handled by Kubernetes?
Few debugging commands output:
kubectl cluster-info
Kubernetes master is running at https://cg-games-e5252212.hcp.eastus.azmk8s.io:443
Heapster is running at https://cg-games-e5252212.hcp.eastus.azmk8s.io:443/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at https://cg-games-e5252212.hcp.eastus.azmk8s.io:443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
kubernetes-dashboard is running at https://cg-games-e5252212.hcp.eastus.azmk8s.io:443/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy
Logs for kubectl cluster-info dump # http://termbin.com/e6wb
azureuser@sim:~$ az aks scale -n cg -g cognitive-games -c 4 --verbose
Deployment failed. Correlation ID: 4df797b2-28bf-4c18-a26a-4e341xxxxx. Operation failed with status: 200. Details: Resource state Failed
no nodes displayed
azureuser@si:~$ k get nodes
No resources found
It looks silly, but when an AKS cluster is created in a resource group, two resource groups are actually created: one containing the AKS resource and another one, with a random hash in its name, containing all the VMs. I had deleted that second resource group, and that is why the basic AKS cluster stopped working.
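If it helps, the auto-created node resource group (typically named MC_<resource-group>_<cluster>_<region>) can be looked up before touching anything (a sketch; resource group and cluster name are placeholders):

az aks show --resource-group <rg> --name <cluster> --query nodeResourceGroup -o tsv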

cassandra 1.2 nodetool getting 'Failed to connect' when trying to connect to remote node

I am running a 6 node cluster of cassandra 1.2 on an Amazon Web Service VPC with Oracle's 64-bit JVM version 1.7.0_10.
When I'm logged on to one of the nodes (ex. 10.0.12.200) I can run nodetool -h 10.0.12.200 status just fine.
However, if I try to use another ip address in the cluster (10.0.32.153) from that same terminal I get Failed to connect to '10.0.32.153:7199: Connection refused'.
On the 10.0.32.153 node I am trying to connect to I've made the following checks.
From 10.0.12.200 I can run telnet 10.0.32.153 7199 and I get a connection, so it doesn't appear to be a security group/firewall issue to port 7199.
On 10.0.32.153 if I run netstat -ant|grep 7199 I see
tcp 0 0 0.0.0.0:7199 0.0.0.0:* LISTEN
so cassandra does appear to be listening on the port
The cassandra-env.sh file on 10.0.32.153 has all of the JVM_OPTS for jmx active
-Dcom.sun.management.jmxremote.port=7199 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false
The only shot in the dark I've found while searching the interwebs for this problem is to set the following:
JVM_OPTS="$JVM_OPTS -Djava.rmi.server.hostname=10.0.32.153"
But when I do this I don't even get a response. It just hangs.
Any guidance would be greatly appreciated.
The issue did end up being a firewall/security group issue. While it is true that the JMX port 7199 is used, apparently other ports are chosen randomly for RMI (see "Cassandra port usage - how are the ports used?").
So the solution is to open up the firewall rules and then configure cassandra-env.sh to include:
JVM_OPTS="$JVM_OPTS -Djava.rmi.server.hostname=<ip>"
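A possible way to verify the fix afterwards (a sketch; restart Cassandra on the node first, and substitute each node's own IP):

# From another node in the cluster, after restarting Cassandra on 10.0.32.153
nodetool -h 10.0.32.153 -p 7199 status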
