I'm having trouble getting an nginx-ingress controller to work on an Azure Kubernetes Service; it's currently returning 502 Bad Gateway each time I try to hit some Web APIs exposed as Services.
Because I must use an existing certificate, I followed https://learn.microsoft.com/en-us/azure/aks/ingress-own-tls to set up the controller, and followed https://www.markbrilman.nl/2011/08/howto-convert-a-pfx-to-a-seperate-key-crt-file/ to generate a cert and key from a PFX (the format in which the certificate was exported from Azure Key Vault). I created the secret "aks-ingress-tls" using the certificate (including the intermediate and root certificates) and the decrypted key file.
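For reference, I created the secret roughly like this (file names are placeholders; the .crt contains the full chain):
kubectl create secret tls aks-ingress-tls \
  --cert=aks-ingress-chain.crt \
  --key=aks-ingress-decrypted.key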
I have a YAML file that creates a deployment, a service to expose it, and an ingress to route to it. Applying this YAML, I can access the services over HTTP via their IP addresses, but hitting the ingress controller's EXTERNAL-IP over HTTPS always gives the 502 error.
My YAML File (redacted):
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: my-api
  replicas: 3
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
      - name: my-api
        image: [REDACTED]/my-api:1.0
        ports:
        - containerPort: 443
        - containerPort: 80
      imagePullSecrets:
      - name: data-creds
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    app: my-service
spec:
  ports:
  - name: https
    port: 443
    targetPort: 443
    protocol: TCP
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
  selector:
    app: my-api
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  tls:
  - hosts:
    - [REDACTED].co.uk
    secretName: aks-ingress-tls
  rules:
  - host: [REDACTED].co.uk
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service
          servicePort: 443
I added a record to my hosts file (I'm on Windows so can't use curl's --resolve) to map [REDACTED].co.uk to the ingress controller's EXTERNAL_IP so I can try accessing it. That's when I get the errors.
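The entry looks like this (the IP is the controller's EXTERNAL-IP, shown here as a placeholder):
# C:\Windows\System32\drivers\etc\hosts
<EXTERNAL_IP>    [REDACTED].co.uk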
Running curl -v https://[REDACTED].co.uk (in PowerShell, where curl is an alias for Invoke-WebRequest) gives this:
VERBOSE: GET https://[REDACTED].co.uk/ with 0-byte payload
curl : The request was aborted: Could not create SSL/TLS secure channel.
At line:1 char:1
+ curl -v https://[REDACTED].co.uk
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidOperation: (System.Net.HttpWebRequest:HttpWebRequest) [Invoke-WebRequest], WebException
+ FullyQualifiedErrorId : WebCmdletWebResponseException,Microsoft.PowerShell.Commands.InvokeWebRequestCommand
Looking at logs for one of the ingress controller's pods:
10.244.1.1 - [10.244.1.1] - - [25/Apr/2019:13:39:20 +0000] "GET / HTTP/2.0" 502 559 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.103 Safari/537.36" 10 0.001 [default-sub360-auth-service-443] 10.244.1.254:443, 10.244.1.3:443, 10.244.1.4:443 0, 0, 0 0.000, 0.000, 0.000 502, 502, 502 e44e21c8a2f61f5137c9afdfc64c6584
2019/04/25 13:39:20 [error] 1622#1622: *1127096 connect() failed (111: Connection refused) while connecting to upstream, client: 10.244.1.1, server: [REDACTED].co.uk, request: "GET /favicon.ico HTTP/2.0", upstream: "https://10.244.1.254:443/favicon.ico", host: "[REDACTED].co.uk", referrer: "https://[REDACTED].co.uk/"
2019/04/25 13:39:20 [error] 1622#1622: *1127096 connect() failed (111: Connection refused) while connecting to upstream, client: 10.244.1.1, server: [REDACTED].co.uk, request: "GET /favicon.ico HTTP/2.0", upstream: "https://10.244.1.3:443/favicon.ico", host: "[REDACTED].co.uk", referrer: "https://[REDACTED].co.uk/"
2019/04/25 13:39:20 [error] 1622#1622: *1127096 connect() failed (111: Connection refused) while connecting to upstream, client: 10.244.1.1, server: [REDACTED].co.uk, request: "GET /favicon.ico HTTP/2.0", upstream: "https://10.244.1.4:443/favicon.ico", host: "[REDACTED].co.uk", referrer: "https://[REDACTED].co.uk/"
10.244.1.1 - [10.244.1.1] - - [25/Apr/2019:13:39:20 +0000] "GET /favicon.ico HTTP/2.0" 502 559 "https://[REDACTED].co.uk/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.103 Safari/537.36" 26 0.000 [default-sub360-auth-service-443] 10.244.1.254:443, 10.244.1.3:443, 10.244.1.4:443 0, 0, 0 0.000, 0.000, 0.004 502, 502, 502 63b6ed4414bf32694de3d136f7f277aa
Can anyone point me to what I need to look at or do to get this working now?
For your issue: the ingress controller itself terminates HTTPS on port 443, so you do not need to expose port 443 on your container. Just expose the port your application actually listens on.
For you, that means exposing only port 80 on both the container and the service. You also need to remove the annotation nginx.ingress.kubernetes.io/backend-protocol: "HTTPS" and change the servicePort value to 80.
Note: adding the DNS name to the certificate is also important.
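A minimal sketch of the adjusted ingress, assuming the names from the question are unchanged:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  tls:
  - hosts:
    - [REDACTED].co.uk
    secretName: aks-ingress-tls
  rules:
  - host: [REDACTED].co.uk
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service
          servicePort: 80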
Related
Getting 502 bad gateway error while doing performance testing
It occurs randomly, on random APIs, only 2-3 times in a 30-minute execution.
The ingress logs show this error:
11.150.71.00 - - [22/Dec/2022:10:44:17 +0000] "POST /api/quotes/test HTTP/1.1" 502 150 "-" "AmazonAPIGateway_hodcqxftdw" 1466 0.030 [cpq-cpq-server-service-9001] [] 11.150.71.177:9001 0 0.028 502 1322918b6f840892739d4dcf194e2226
I have added a keep-alive time for the ingress, still no success.
CPU and memory are both stable; there is no spike.
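For reference, the keep-alive tuning I tried lives in the controller's ConfigMap rather than in the Ingress itself; a sketch, assuming a default ingress-nginx install (name, namespace and values are examples):
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  # how long idle keep-alive connections to upstreams stay open
  upstream-keepalive-timeout: "120"
  # client-side keep-alive timeout
  keep-alive: "75"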
Ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: testIngress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/auth-tls-secret: "test/test-secret"
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "off"
spec:
  rules:
  - host: test.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: testService
            port:
              number: 9001
I have a Node.js REST service deployed in k8s, exposed with an nginx ingress. It responds to a basic GET, but when I pass a URL parameter I get a 502.
import express from "express";

const app = express();

app.get("/service-invoice", async (req, res) => {
  res.send(allInvoices);
});

app.listen(80);
Where allInvoices is just a collection of invoice objects loaded from MongoDB.
I deploy this to k8s with the following ingress config:
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: service-invoice-read
  namespace: ctx-service-invoice
  annotations:
    kubernetes.io/ingress.class: addon-http-application-routing
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - http:
      paths:
      - path: /service-invoice-read(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: service-invoice-read
            port:
              number: 80
Calling this with curl:
curl localhost:30000/service-invoice-read/service-invoice
I get back a valid json response. So far, so good.
But, I also want to access these objects by Id. To do that I have the following code:
app.get("/service-invoice/:id", async (req, res) => {
  try {
    const id = req.params.id;
    const invoice = // code to load invoice by id from mongo
    res.send(invoice);
  } catch (e) {
    // 'sc' is assumed to be an HTTP status-codes constant import
    res.status(sc.NOT_FOUND).send(e);
  }
});
Calling this with curl:
curl localhost:30000/service-invoice-read/service-invoice/e98e03b8-b590-4ca4-978d-270986b7d26e
Results in a 502 - Bad Gateway error.
I can't see any errors in my pod's logs, so I'm pretty sure this is coming from nginx.
I don't really understand where it comes from. I've tried without the try/catch to see in the logs if it blows up, and still no joy.
Here are my ingress logs, as requested in the comments:
2022/03/03 18:45:21 [error] 847#847: *4524 upstream prematurely closed connection while reading response header from upstream, client: 10.42.1.1, server: _, request: "GET /service-invoice-read/service-invoice/6220d042a95986f58c46356f HTTP/1.1", upstream: "http://10.42.1.100:80/service-invoice/6220d042a95986f58c46356f", host: "localhost:30000"
2022/03/03 18:45:21 [error] 847#847: *4524 connect() failed (111: Connection refused) while connecting to upstream, client: 10.42.1.1, server: _, request: "GET /service-invoice-read/service-invoice/6220d042a95986f58c46356f HTTP/1.1", upstream: "http://10.42.1.100:80/service-invoice/6220d042a95986f58c46356f", host: "localhost:30000"
2022/03/03 18:45:21 [error] 847#847: *4524 connect() failed (111: Connection refused) while connecting to upstream, client: 10.42.1.1, server: _, request: "GET /service-invoice-read/service-invoice/6220d042a95986f58c46356f HTTP/1.1", upstream: "http://10.42.1.100:80/service-invoice/6220d042a95986f58c46356f", host: "localhost:30000"
10.42.1.1 - - [03/Mar/2022:18:45:21 +0000] "GET /service-invoice-read/service-invoice/6220d042a95986f58c46356f HTTP/1.1" 502 150 "-" "curl/7.68.0" 140 0.006 [ctx-service-invoice-service-invoice-read-80] [] 10.42.1.100:80, 10.42.1.100:80, 10.42.1.100:80 0, 0, 0 0.004, 0.004, 0.000 502, 502, 502 b78e6879fabe2d5947525a2b694b4b9f
W0303 18:45:21.529749 7 controller.go:1076] Service "ctx-service-invoice/service-invoice-read" does not have any active Endpoint.
Does anyone know what I'm doing wrong here?
The problem wasn't what it seemed. In this case, the configuration is working fine. The real problem was an error in the code that was being suppressed by a global exception handler without being logged. For some reason this resulted in a 502; I still don't understand why I got that exact response, but I'm not especially interested.
The aim of the global exception handler was to keep the service running when it would otherwise die. Given that a service dying in k8s is perfectly acceptable, I've removed this handler and allowed the pod to die, which gives me a lot more information about what is going on.
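If you do want a last-resort handler, a minimal sketch that logs first but still lets the pod die, so k8s restarts it and the failure shows up in kubectl logs:
process.on("uncaughtException", (err) => {
  // make the real error visible instead of swallowing it
  console.error("fatal:", err);
  // exit so k8s restarts the pod
  process.exit(1);
});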
I'm trying to create an internal mixed-protocol load balancer in Azure AKS (tried 1.15.5, 1.15.7 and 1.16.4) using this YAML:
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-mixed-protocols: "true"
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
  name: consullb
spec:
  ports:
  - port: 8500
    targetPort: 8500
    name: http
    protocol: TCP
  - port: 8400
    targetPort: 8400
    name: rpc
    protocol: TCP
  - port: 8301
    targetPort: 8301
    name: serflan-tcp
    protocol: TCP
  - port: 8302
    targetPort: 8302
    name: serfwan-tcp
    protocol: TCP
  - port: 8300
    targetPort: 8300
    name: server
    protocol: TCP
  - port: 8600
    targetPort: 8600
    name: consuldns-tcp
    protocol: TCP
  - port: 8301
    targetPort: 8301
    name: serflan-udp
    protocol: UDP
  - port: 8302
    targetPort: 8302
    name: serfwan-udp
    protocol: UDP
  - port: 8600
    targetPort: 8600
    name: consuldns-udp
    protocol: UDP
  selector:
    component: consul-1582621245-consul
  type: LoadBalancer
I get the following error:
cannot create an external load balancer with mix protocols
I tested two different clusters, one with the Standard SKU and one with the Basic SKU.
Anything I'm missing here? Or could someone point me to other aspects to try or troubleshoot?
I am trying to configure an Azure Kubernetes cluster and created one in the portal. I dockerized a .NET Core Web API project and also published the image to Azure Container Registry. After applying the manifest file, I get the message that the service was created, and also get the external IP. However, when I do get pods, I get the status "Pending" all the time.
NAME READY STATUS RESTARTS AGE
kubdemo1api-6c67bf759f-6slh2 0/1 Pending 0 6h
Here is my YAML manifest file; can someone suggest what is wrong here?
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kubdemo1api
  labels:
    name: kubdemo1api
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  minReadySeconds: 30
  selector:
    matchLabels:
      app: kubdemo1api
  template:
    metadata:
      labels:
        app: kubdemo1api
        version: "1.0"
        tier: backend
    spec:
      containers:
      - name: kubdemo1api
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 30
          timeoutSeconds: 10
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 30
          timeoutSeconds: 10
        image: my container registry image address
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 30
          timeoutSeconds: 10
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 30
          timeoutSeconds: 10
---
apiVersion: v1
kind: Service
metadata:
  name: azkubdemoapi1
spec:
  ports:
  - port: 80
  selector:
    app: kubdemo1api
  type: LoadBalancer
EDIT:
The output of kubectl describe pods is:
Normal Scheduled 2m default-scheduler Successfully assigned default/kubdemo1api-697d5655c-64fnj to aks-agentpool-87689508-0
Normal Pulling 37s (x4 over 2m) kubelet, aks-agentpool-87689508-0 pulling image "myacrurl/azkubdemo:v2"
Warning Failed 37s (x4 over 2m) kubelet, aks-agentpool-87689508-0 Failed to pull image "my acr url": [rpc error: code = Unknown desc = Error response from daemon: Get https://myacrurl/v2/azkubdemo/manifests/v2: unauthorized: authentication required, rpc error: code = Unknown desc = Error response from daemon: Get https://myacrurl/v2/azkubdemo/manifests/v2: unauthorized: authentication required]
Warning Failed 37s (x4 over 2m) kubelet, aks-agentpool-87689508-0 Error: ErrImagePull
Normal BackOff 23s (x6 over 2m) kubelet, aks-agentpool-87689508-0 Back-off pulling image "myacrlurl/azkubdemo:v2"
Warning Failed 11s (x7 over 2m) kubelet, aks-agentpool-87689508-0 Error: ImagePullBackOff
The error you provided shows that you have to authenticate to pull the image from the Azure Container Registry.
Actually, you just need permission to pull the image, and the AcrPull role is enough for that. There are two ways to achieve it.
One is to grant AKS access to the Azure Container Registry. It's the simplest option in my view: just create the role assignment for the service principal that AKS uses. See Grant AKS access to ACR for the whole steps.
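A sketch of that role assignment with the az CLI (resource names are placeholders):
# service principal used by the AKS cluster
CLIENT_ID=$(az aks show -g myResourceGroup -n myAKSCluster --query servicePrincipalProfile.clientId -o tsv)
# resource id of the container registry
ACR_ID=$(az acr show -g myResourceGroup -n myACR --query id -o tsv)
# grant the pull-only role
az role assignment create --assignee $CLIENT_ID --role acrpull --scope $ACR_ID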
The other is to use a Kubernetes secret. It's a little more complex than the first one: you need to create a new service principal, different from the one AKS uses, grant it access to the registry, and then create the Kubernetes secret with that service principal. See Access with Kubernetes secret for the whole steps.
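A sketch of that route (registry name and credentials are placeholders):
kubectl create secret docker-registry acr-secret \
  --docker-server=myacr.azurecr.io \
  --docker-username=<sp-client-id> \
  --docker-password=<sp-password>
and then reference the secret from the deployment's pod spec:
    spec:
      containers:
      - name: kubdemo1api
        image: myacr.azurecr.io/azkubdemo:v2
      imagePullSecrets:
      - name: acr-secret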
This YAML is wrong: the indentation is off and the probes are defined twice. Try the YAML below.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kubdemo1api
  labels:
    name: kubdemo1api
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  minReadySeconds: 30
  selector:
    matchLabels:
      app: kubdemo1api
  template:
    metadata:
      labels:
        app: kubdemo1api
        version: "1.0"
        tier: backend
    spec:
      containers:
      - name: kubdemo1api
        image: nginx
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
        livenessProbe:
          httpGet:
            path: /
            port: 80
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 30
          timeoutSeconds: 10
---
apiVersion: v1
kind: Service
metadata:
  name: azkubdemoapi1
spec:
  ports:
  - port: 80
  selector:
    app: kubdemo1api
  type: LoadBalancer
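Apply it and watch the rollout (the file name is whatever you saved it as):
kubectl apply -f deployment.yaml
kubectl get pods -w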
Kubernetes version --> 1.5.2
I am setting up DNS for Kubernetes services for the first time and I came across SkyDNS.
So, following the documentation, my skydns-svc.yaml file is:
apiVersion: v1
kind: Service
spec:
  clusterIP: 10.100.0.100
  ports:
  - name: dns
    port: 53
    protocol: UDP
    targetPort: 53
  - name: dns-tcp
    port: 53
    protocol: TCP
    targetPort: 53
  selector:
    k8s-app: kube-dns
  sessionAffinity: None
  type: ClusterIP
And my skydns-rc.yaml file is:
apiVersion: v1
kind: ReplicationController
spec:
  replicas: 1
  selector:
    k8s-app: kube-dns
    version: v18
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s-app: kube-dns
        kubernetes.io/cluster-service: "true"
        version: v18
    spec:
      containers:
      - args:
        - --domain=kube.local
        - --dns-port=10053
        image: gcr.io/google_containers/kubedns-amd64:1.6
        imagePullPolicy: IfNotPresent
        name: kubedns
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
        resources:
          limits:
            cpu: 100m
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        terminationMessagePath: /dev/termination-log
      - args:
        - --cache-size=1000
        - --no-resolv
        - --server=127.0.0.1#10053
        image: gcr.io/google_containers/kube-dnsmasq-amd64:1.3
        imagePullPolicy: IfNotPresent
        name: dnsmasq
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
      - args:
        - -cmd=nslookup kubernetes.default.svc.kube.local 127.0.0.1 >/dev/null &&
          nslookup kubernetes.default.svc.kube.local 127.0.0.1:10053 >/dev/null
        - -port=8080
        - -quiet
        image: gcr.io/google_containers/exechealthz-amd64:1.0
        imagePullPolicy: IfNotPresent
        name: healthz
        ports:
        - containerPort: 8080
          protocol: TCP
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
Also, on my minions I updated the /etc/systemd/system/multi-user.target.wants/kubelet.service file and added the following to the ExecStart section:
ExecStart=/usr/bin/kubelet \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBELET_API_SERVER \
$KUBELET_ADDRESS \
$KUBELET_PORT \
$KUBELET_HOSTNAME \
$KUBE_ALLOW_PRIV \
$KUBELET_POD_INFRA_CONTAINER \
$KUBELET_ARGS \
--cluster-dns=10.100.0.100 \
--cluster-domain=kubernetes \
Having done all of this and having successfully brought up the rc & svc:
[root@kubernetes-master DNS]# kubectl get po | grep dns
kube-dns-v18-hl8z6 3/3 Running 0 6s
[root@kubernetes-master DNS]# kubectl get svc | grep dns
kube-dns 10.100.0.100 <none> 53/UDP,53/TCP 20m
This is all that I have from a config standpoint. Now, in order to test my setup, I deployed busybox and tested an nslookup:
[root@kubernetes-master DNS]# kubectl get svc | grep kubernetes
kubernetes 10.100.0.1 <none> 443/TCP
[root@kubernetes-master DNS]# kubectl exec busybox -- nslookup kubernetes
nslookup: can't resolve 'kubernetes'
Server: 10.100.0.100
Address 1: 10.100.0.100
Is there something that I have missed ?
EDIT:
Going through the logs, I see something that might explain why this is not working :
kubectl logs $(kubectl get pods -l k8s-app=kube-dns -o name) -c kubedns
.
.
.
E1220 17:44:48.403976 1 reflector.go:216] pkg/dns/dns.go:154: Failed to list *api.Endpoints: Get https://10.100.0.1:443/api/v1/endpoints?resourceVersion=0: x509: failed to load system roots and no roots provided
E1220 17:44:48.487169 1 reflector.go:216] pkg/dns/dns.go:155: Failed to list *api.Service: Get https://10.100.0.1:443/api/v1/services?resourceVersion=0: x509: failed to load system roots and no roots provided
I1220 17:44:48.487716 1 dns.go:172] Ignoring error while waiting for service default/kubernetes: Get https://10.100.0.1:443/api/v1/namespaces/default/services/kubernetes: x509: failed to load system roots and no roots provided. Sleeping 1s before retrying.
E1220 17:44:49.410311 1 reflector.go:216] pkg/dns/dns.go:154: Failed to list *api.Endpoints: Get https://10.100.0.1:443/api/v1/endpoints?resourceVersion=0: x509: failed to load system roots and no roots provided
I1220 17:44:49.492338 1 dns.go:172] Ignoring error while waiting for service default/kubernetes: Get https://10.100.0.1:443/api/v1/namespaces/default/services/kubernetes: x509: failed to load system roots and no roots provided. Sleeping 1s before retrying.
E1220 17:44:49.493429 1 reflector.go:216] pkg/dns/dns.go:155: Failed to list *api.Service: Get https://10.100.0.1:443/api/v1/services?resourceVersion=0: x509: failed to load system roots and no roots provided
.
.
.
It looks like kubedns is unable to authenticate against the K8s master node. I even tried a manual call:
curl -k https://10.100.0.1:443/api/v1/endpoints?resourceVersion=0
Unauthorized
It looks like the kube-dns pod is not able to authenticate with the Kubernetes API server. I don't see any secret or serviceaccount in the YAML file for the kube-dns pod.
I suggest doing the following:
Create a k8s secret using kubectl create secret for the kube-dns pod, with the right certificate file ca.crt and token:
$ kubectl get secrets -n=kube-system | grep dns
kube-dns-token-66tfx kubernetes.io/service-account-token 3 1d
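If you need to create it by hand, a sketch, assuming the cluster CA and a valid token are on disk (paths are placeholders):
kubectl create secret generic kube-dns-token --namespace=kube-system \
  --from-file=ca.crt=/path/to/ca.crt \
  --from-file=token=/path/to/token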
Create a k8s serviceaccount using kubectl create serviceaccount for the kube-dns pod:
$ kubectl get serviceaccounts -n=kube-system | grep dns
kube-dns 1 1d
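For example:
kubectl create serviceaccount kube-dns --namespace=kube-system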
Mount the secret at /var/run/secrets/kubernetes.io/serviceaccount inside the kube-dns container in the YAML file:
...
kind: Pod
...
spec:
  ...
  containers:
  ...
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-dns-token-66tfx
      readOnly: true
  ...
  volumes:
  - name: kube-dns-token-66tfx
    secret:
      defaultMode: 420
      secretName: kube-dns-token-66tfx
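Once the token is mounted, you can check authentication from inside the container using the standard serviceaccount paths:
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -s --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer $TOKEN" \
  https://10.100.0.1:443/api/v1/endpoints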
Here are the links about creating serviceaccounts for pods:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
https://kubernetes.io/docs/admin/service-accounts-admin/