Kubernetes Ingress returns "Cannot GET /" - Azure

I'm trying to deploy a Node-RED pod on my cluster, and have created a service and ingress for it so it can be accessed the same way as the rest of my cluster, under the same domain. However, when I try to access it via host-name.com/nodered I receive "Cannot GET /nodered".
Below are the templates used and the kubectl describe output for all the involved components.
apiVersion: v1
kind: Service
metadata:
  name: nodered-app-service
  namespace: {{ kubernetes_namespace_name }}
spec:
  ports:
  - port: 1880
    targetPort: 1880
  selector:
    app: nodered-service-pod
I have also tried with port:80 for the service, to no avail.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodered-service-deployment
  namespace: {{ kubernetes_namespace_name }}
  labels:
    app: nodered-service-deployment
    name: nodered-service-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nodered-service-pod
  template:
    metadata:
      labels:
        app: nodered-service-pod
        target: gateway
        buildVersion: "{{ kubernetes_build_number }}"
    spec:
      terminationGracePeriodSeconds: 10
      serviceAccountName: nodered-service-account
      automountServiceAccountToken: false
      securityContext:
        runAsUser: 1000
        runAsGroup: 1000
        fsGroup: 1000
      containers:
      - name: nodered-service-statefulset
        image: nodered/node-red:{{ nodered_service_version }}
        imagePullPolicy: {{ kubernetes_image_pull_policy }}
        readinessProbe:
          httpGet:
            path: /
            port: 1880
          initialDelaySeconds: 30
          timeoutSeconds: 1
          periodSeconds: 10
          failureThreshold: 3
        livenessProbe:
          httpGet:
            path: /
            port: 1880
          initialDelaySeconds: 30
          timeoutSeconds: 1
          periodSeconds: 10
          failureThreshold: 3
        securityContext:
          allowPrivilegeEscalation: false
        resources:
          limits:
            memory: "2048M"
            cpu: "1000m"
          requests:
            memory: "500M"
            cpu: "100m"
        ports:
        - containerPort: 1880
          name: port-name
        envFrom:
        - configMapRef:
            name: nodered-service-configmap
        env:
        - name: BUILD_TIME
          value: "{{ kubernetes_build_time }}"
The target: gateway label refers to the ingress controller.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nodered-ingress
  namespace: {{ kubernetes_namespace_name }}
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: host-name.com
    http:
      paths:
      - path: /nodered(/|$)(.*)
        backend:
          serviceName: nodered-app-service
          servicePort: 1880
The following is what kubectl describe shows:
Name: nodered-app-service
Namespace: nodered
Labels: <none>
Annotations: <none>
Selector: app=nodered-service-pod
Type: ClusterIP
IP: 55.3.145.249
Port: <unset> 1880/TCP
TargetPort: port-name/TCP
Endpoints: 10.7.0.79:1880
Session Affinity: None
Events: <none>
Name: nodered-service-statefulset-6c678b7774-clx48
Namespace: nodered
Priority: 0
Node: aks-default-40441371-vmss000007/10.7.0.66
Start Time: Thu, 26 Aug 2021 14:23:33 +0200
Labels: app=nodered-service-pod
buildVersion=latest
pod-template-hash=6c678b7774
target=gateway
Annotations: <none>
Status: Running
IP: 10.7.0.79
IPs:
IP: 10.7.0.79
Controlled By: ReplicaSet/nodered-service-statefulset-6c678b7774
Containers:
nodered-service-statefulset:
Container ID: docker://a6f8c9d010feaee352bf219f85205222fa7070c72440c885b9cd52215c4c1042
Image: nodered/node-red:latest-12
Image ID: docker-pullable://nodered/node-red#sha256:f02ccb26aaca2b3ee9c8a452d9516c9546509690523627a33909af9cf1e93d1e
Port: 1880/TCP
Host Port: 0/TCP
State: Running
Started: Thu, 26 Aug 2021 14:23:36 +0200
Ready: True
Restart Count: 0
Limits:
cpu: 1
memory: 2048M
Requests:
cpu: 100m
memory: 500M
Liveness: http-get http://:1880/ delay=30s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:1880/ delay=30s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
nodered-service-configmap ConfigMap Optional: false
Environment:
BUILD_TIME: 2021-08-26T12:23:06.219818+0000
Mounts: <none>
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes: <none>
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
PS C:\Users\hid5tim> kubectl describe ingress -n nodered
Name: nodered-ingress
Namespace: nodered
Address: 10.7.31.254
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host           Path               Backends
----           ----               --------
host-name.com
               /nodered(/|$)(.*)  nodered-app-service:1880 (10.7.0.79:1880)
Annotations: kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/ssl-redirect: false
Events: <none>
The logs from the ingress controller are below. I've been on this issue for the last 24 hours or so and it's tearing me apart; the setup looks identical to other deployments I have that are functional. Could this be something wrong with the Node-RED image? I have checked and it does expose 1880.
194.xx.xxx.x - [194.xx.xxx.x] - - [26/Aug/2021:10:40:12 +0000] "GET /nodered HTTP/1.1" 404 146 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36" 871 0.008 [nodered-nodered-app-service-80] 10.7.0.68:1880 146 0.008 404 74887808fa2eb09fd4ed64061639991e

As the comment by Andrew points out, I was using the rewrite annotation wrong; once I removed the (/|$)(.*) from the path and specified the path type as Prefix, it worked.
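For reference, here is a minimal sketch of what the working Ingress could look like; this is my reconstruction based on the answer above (assuming the cluster supports the networking.k8s.io/v1 Ingress API), not the exact manifest that was applied:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nodered-ingress
  namespace: {{ kubernetes_namespace_name }}
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: host-name.com
    http:
      paths:
      - path: /nodered        # plain prefix path, no capture-group regex
        pathType: Prefix      # path type specified explicitly, as described in the answer
        backend:
          service:
            name: nodered-app-service
            port:
              number: 1880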

Related

Nginx Ingress Controller getting error in Node.js application with routes prefixed

I'm using the Ingress Nginx Controller in my Kubernetes cluster hosted on Google Kubernetes Engine. The application is a Node.js app.
When I integrated my app with Rollbar (a logging service) I started to notice a repetitive error roughly every 15 seconds (154K times in one week):
Error: Cannot GET /
(screenshot omitted)
I think the reason is that my Node.js application uses the /v1 prefix in its routes, i.e. the / route doesn't exist.
PS: Rollbar is linked in the Develop (local), Testing (Heroku) and Production (GKE) environments, and the error only occurs in production.
My ingress file:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-app
spec:
  rules:
  - host: api.company.com
    http:
      paths:
      - path: /v1
        backend:
          serviceName: company-prod-v1-service
          servicePort: 3000
The Ingress documentation says something about a / endpoint, but I don't understand it very well.
I need to remove this error. Can any Jedi masters help me fix it?
Thanks in advance
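For context, the remark about a / endpoint in the Ingress documentation refers to the default backend: any request that matches none of the rules (for example a plain GET /) is sent there rather than to your service. A hypothetical sketch of making that explicit (the default-http-backend name below is a placeholder of mine, not something from the original question):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-app
spec:
  # Requests that match no rule below fall through to this default backend
  # instead of reaching company-prod-v1-service.
  backend:
    serviceName: default-http-backend   # placeholder catch-all service
    servicePort: 80
  rules:
  - host: api.company.com
    http:
      paths:
      - path: /v1
        backend:
          serviceName: company-prod-v1-service
          servicePort: 3000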
[UPDATE]
Deployment File
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deploy-v1
  labels:
    app: app-v1
    version: 1.10.0-2
spec:
  selector:
    matchLabels:
      app: app-v1
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  minReadySeconds: 30
  progressDeadlineSeconds: 60
  template:
    metadata:
      labels:
        app: app-v1
        version: 1.10.0-2
    spec:
      serviceAccountName: gke-account
      containers:
      - name: app-container
        image: registry.gitlab.com/company/app-back:1.10.0
        lifecycle:
          postStart:
            exec:
              command:
              - "/bin/sh"
              - "-c"
              - >
                if [ -f "$SECRETS_FOLDER/$APPLE_NOTIFICATION_KEY" ]; then
                  cp $SECRETS_FOLDER/$APPLE_NOTIFICATION_KEY $APP_FOLDER;
                fi;
      - name: cloud-sql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.17
        command:
        - "/cloud_sql_proxy"
        - "-instances=thermal-petal-283313:us-east1:app-instance=tcp:5432"
        securityContext:
          runAsNonRoot: true
        lifecycle:
          preStop:
            exec:
              command: ["/bin/sleep", "300"]
      imagePullSecrets:
      - name: registry-credentials
Ingress controller pod file (auto-generated by the installation command)
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2021-06-16T03:08:53Z"
  generateName: ingress-nginx-controller
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  name: ingress-nginx-controller
  namespace: ingress-nginx
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: ingress-nginx-controller
  resourceVersion: "112544950"
  selfLink: /api/v1/namespaces/ingress-nginx/pods/ingress-nginx-controller
spec:
  containers:
  - image: k8s.gcr.io/ingress-nginx/controller:v0.46.0
    imagePullPolicy: IfNotPresent
    lifecycle:
      preStop:
        exec:
          command:
          - /wait-shutdown
    livenessProbe:
      failureThreshold: 5
      httpGet:
        path: /healthz
        port: 10254
        scheme: HTTP
      initialDelaySeconds: 10
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    name: controller
    ports:
    - containerPort: 80
      name: http
      protocol: TCP
    - containerPort: 443
      name: https
      protocol: TCP
    - containerPort: 8443
      name: webhook
      protocol: TCP
    readinessProbe:
      failureThreshold: 3
      httpGet:
        path: /healthz
        port: 10254
        scheme: HTTP
      initialDelaySeconds: 10
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1

Access Pod By Internet From AKS Private Cluster

I have a fully private AKS cluster set up on a private VNet, which I access through an Azure Bastion to run kubectl commands. I have also set up a DevOps pipeline which uses a self-hosted agent to run commands on the private cluster. All my pods and ingresses seem to be running fine. However, when I try to access my ingress using a hostname (by mapping the public IP), I get a 404 Not Found. When verifying against my public cluster setup, I don't see any issues. Can someone please shed some light on why I cannot access my pod, which appears to be running fine?
Also, it seems I cannot access the external IP of the ingress even from the virtual machine that is on the virtual network, yet I can run kubectl commands and access the Kubernetes dashboard.
---
apiVersion: v1
kind: Service
metadata:
  namespace: app-auth
  labels:
    environment: staging
  name: app-auth-staging # The name of the app
spec:
  type: ClusterIP
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: app-auth-staging
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-auth-staging
  namespace: app-auth
  labels:
    app: app-auth-staging
    environment: staging # The environment being used
    app-role: api # The application type
    tier: backend # The tier that this app represents
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-auth-staging
  template:
    metadata:
      labels:
        app: app-auth-staging
        environment: staging
        app-role: api
        tier: backend
      annotations:
        build: _{Tag}_
    spec:
      containers:
      - name: auth
        image: auth.azurecr.io/auth:_{Tag}_ # Note: Do not modify this field.
        imagePullPolicy: Always
        env:
        - name: ConnectionStrings__ZigzyAuth # Note: The appsettings value being replaced
          valueFrom:
            secretKeyRef:
              name: connectionstrings
              key: _{ConnectionString}_ # Note: This is an environmental variable, it is replaced accordingly in DevOps
        ports:
        - containerPort: 80
        readinessProbe:
          tcpSocket:
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10
        livenessProbe:
          tcpSocket:
            port: 80
          initialDelaySeconds: 15
          periodSeconds: 20
        volumeMounts:
        - name: secrets-store-inline
          mountPath: "/mnt/secrets-store"
          readOnly: true
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: agentpool
                operator: In
                values:
                - general
      volumes:
      - name: secrets-store-inline
        csi:
          driver: secrets-store.csi.k8s.io
          readOnly: true
          volumeAttributes:
            secretProviderClass: "aks-provider"
          nodePublishSecretRef:
            name: aks-prod-credstore
      imagePullSecrets:
      - name: aks-prod-acrps
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-auth-staging-ingress-main # The name of the ingress, ex: app-auth-ingress-main
  namespace: app-auth
  labels:
    environment: staging
  annotations:
    kubernetes.io/ingress.class: nginx
    # nginx.ingress.kubernetes.io/enable-cors: "true"
    # nginx.ingress.kubernetes.io/cors-allow-methods: "GET, POST, OPTIONS"
    # nginx.ingress.kubernetes.io/cors-allow-origin: "https://admin.example.com"
    # nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
spec:
  tls:
  - hosts:
    - stagingauth.app.com # Modify
    - frontend.21.72.207.63.nip.io
    - aksstagingauth.app.com
    secretName: zigzypfxtls
  rules:
  - host: stagingauth.app.com
    http:
      paths:
      - backend:
          serviceName: zigzy-auth-staging # Modify
          servicePort: 80
        path: /
  - host: frontend.21.72.207.63.nip.io
    http:
      paths:
      - backend:
          serviceName: app-auth-staging # Modify
          servicePort: 80
        path: /
  - host: aksstagingauth.app.com
    http:
      paths:
      - backend:
          serviceName: app-auth-staging # Modify
          servicePort: 80
        path: /
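Not part of the original question, but since the ingress IP is not reachable even from inside the VNet, one common pattern for fully private AKS clusters is to expose the ingress-nginx controller through an internal Azure load balancer. A minimal sketch, assuming a standard ingress-nginx installation whose controller Service can be replaced or patched along these lines:

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
  annotations:
    # Ask Azure for an internal (VNet-scoped) load balancer instead of a public IP.
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/component: controller
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: https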

NGINX Ingress external oauth with Azure Active Directory

I want to use Azure Active Directory as an external oauth2 provider to protect my services at the ingress level. In the past I used basic auth and everything worked as expected, but nginx also provides the external oauth method, which sounds much more comfortable!
For that I created an SP:
$ az ad sp create-for-rbac --skip-assignment --name test -o table
AppId              DisplayName    Name         Password                Tenant
<AZURE_CLIENT_ID>  test           http://test  <AZURE_CLIENT_SECRET>   <TENANT_ID>
My Ingress resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  namespace: ingress-nginx
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/auth-url: "https://\$host/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://\$host/oauth2/start?rd=$escaped_request_uri"
    # nginx.ingress.kubernetes.io/auth-type: basic
    # nginx.ingress.kubernetes.io/auth-secret: basic-auth
    # nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required'
And the external oauth deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: oauth2-proxy
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: oauth2-proxy
  template:
    metadata:
      labels:
        app: oauth2-proxy
    spec:
      containers:
      - args:
        - --provider=azure
        - --email-domain=microsoft.com
        - --upstream=file:///dev/null
        - --http-address=0.0.0.0:4180
        - --azure-tenant=$AZURE_TENANT_ID
        env:
        - name: OAUTH2_PROXY_CLIENT_ID
          value: $API_CLIENT_ID
        - name: OAUTH2_PROXY_CLIENT_SECRET
          value: $API_CLIENT_SECRET
        - name: OAUTH2_PROXY_COOKIE_SECRET
          value: $API_COOKIE_SECRET
          # created by docker run -ti --rm python:3-alpine python -c 'import secrets,base64; print(base64.b64encode(base64.b64encode(secrets.token_bytes(16))));
        image: docker.io/colemickens/oauth2_proxy:latest
        imagePullPolicy: Always
        name: oauth2-proxy
        ports:
        - containerPort: 4180
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: oauth2-proxy
  namespace: kube-system
spec:
  ports:
  - name: http
    port: 4180
    protocol: TCP
    targetPort: 4180
  selector:
    app: oauth2-proxy
It looks like something went wrong, but I have no idea what I missed.
When I try to open the page it loads for up to a minute and then ends in a '500 Internal Server Error'.
The logs of the ingress controller show an infinite loop of the following:
10.244.2.1 - - [16/Jan/2020:15:32:30 +0000] "GET /oauth2/auth HTTP/1.1" 499 0 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/xxxxx Safari/xxxx" 727 0.003 [upstream-default-backend] [] - - - - <AZURE_CLIENT_ID>
So you need another ingress for the oauth deployment as well. Here's how my setup looks:
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grafana-ingress-oauth
  namespace: grafana
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: xxx
    http:
      paths:
      - path: /oauth2
        backend:
          serviceName: oauth2-proxy
          servicePort: 4180
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grafana-ingress
  namespace: grafana
  annotations:
    kubernetes.io/ingress.class: "nginx"
    kubernetes.io/tls-acme: "true"
    certmanager.k8s.io/cluster-issuer: letsencrypt-production
    ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri"
spec:
  rules:
  - host: xxx
    http:
      paths:
      - path: /
        backend:
          serviceName: grafana
          servicePort: 80
This way the second ingress redirects to the first one, and the first one does the auth and redirects back.

Azure Kubernetes Service - Http Routing Giving 502

We are trying to host our API in AKS but we are hitting the same issue no matter what ingress option we use. We are running the latest version of Kubernetes (1.11.2) on AKS with HTTP application routing configured. All the services and pods are healthy according to the dashboard, and the DNS zone /healthz is returning 200, so that's working.
All of the API services are built using the latest version of .NET Core, with the / route configured to return status code 200.
Here are the services and deployments:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: accounts-api
spec:
  replicas: 3
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: accounts-api
    spec:
      containers:
      - name: accounts-api
        # image: mycompany.azurecr.io/accounts.api:#{Build.BuildId}#
        image: mycompany.azurecr.io/accounts.api:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: programs-api
spec:
  replicas: 3
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: programs-api
    spec:
      containers:
      - name: programs-api
        # image: mycompany.azurecr.io/programs.api:#{Build.BuildId}#
        image: mycompany.azurecr.io/programs.api:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: teams-api
spec:
  replicas: 3
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: teams-api
    spec:
      containers:
      - name: teams-api
        # image: mycompany.azurecr.io/teams.api:#{Build.BuildId}#
        image: mycompany.azurecr.io/teams.api:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: payments-api
spec:
  replicas: 3
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: payments-api
    spec:
      containers:
      - name: payments-api
        # image: mycompany.azurecr.io/payments.api:#{Build.BuildId}#
        image: mycompany.azurecr.io/payments.api:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: accounts-api-service
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: accounts-api
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: programs-api-service
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: programs-api
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: teams-api-service
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: teams-api
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: payments-api-service
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: payments-api
  type: ClusterIP
---
First, we tried to use path-based fanout, like so:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-api-ingress
  annotations:
    kubernetes.io/ingress.class: addon-http-application-routing
spec:
  rules:
  - host: mycompany-api.d6b1cf1ede294842b0ed.westeurope.aksapp.io
    http:
      paths:
      - path: /accounts-api
        backend:
          serviceName: accounts-api-service
          servicePort: 80
      - path: /programs-api
        backend:
          serviceName: programs-api-service
          servicePort: 80
      - path: /teams-api
        backend:
          serviceName: teams-api-service
          servicePort: 80
      - path: /workouts-api
        backend:
          serviceName: payments-api-service
          servicePort: 80
---
But we were hitting a 502 Bad Gateway for each path. We then tried aggregating the ingress rules and assigning a host per service:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-api-ingress
  annotations:
    kubernetes.io/ingress.class: addon-http-application-routing
spec:
  rules:
  - host: accounts-api.d6b1cf1ede294842b0ed.westeurope.aksapp.io
    http:
      paths:
      - path: /
        backend:
          serviceName: accounts-api-service
          servicePort: 80
  - host: programs-api.d6b1cf1ede294842b0ed.westeurope.aksapp.io
    http:
      paths:
      - path: /
        backend:
          serviceName: programs-api-service
          servicePort: 80
  - host: teams-api.d6b1cf1ede294842b0ed.westeurope.aksapp.io
    http:
      paths:
      - path: /
        backend:
          serviceName: teams-api-service
          servicePort: 80
  - host: payments-api.d6b1cf1ede494842b0ed.westeurope.aksapp.io
    http:
      paths:
      - path: /
        backend:
          serviceName: payments-api-service
          servicePort: 80
---
The Azure DNS zone is adding the correct TXT and A records for each of the services, but we are still hitting the 502.
From what we can tell from googling, it seems as though the wiring of the ingress to the services is screwy, but as far as we can see our deploy script looks OK. Ideally we would like to use the path-based fanout option, so what could the issue be? Base path configuration?
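One thing worth checking with the path-based fanout variant (my suggestion, not something stated in the post): the APIs serve their routes from /, so a request to /accounts-api/... has to be rewritten before it reaches the container. With the nginx-based routing add-on this is typically done with the rewrite-target annotation; a minimal sketch for two of the services, assuming the add-on's controller honours the standard nginx-ingress annotations:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-api-ingress
  annotations:
    kubernetes.io/ingress.class: addon-http-application-routing
    # Strip the per-service prefix, so e.g. /accounts-api/users reaches the
    # container as /users (behaviour of the older nginx-ingress releases
    # used by this add-on at the time).
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: mycompany-api.d6b1cf1ede294842b0ed.westeurope.aksapp.io
    http:
      paths:
      - path: /accounts-api
        backend:
          serviceName: accounts-api-service
          servicePort: 80
      - path: /programs-api
        backend:
          serviceName: programs-api-service
          servicePort: 80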

loadbalancer service won't redirect to desired pod

I'm playing around with Kubernetes and I've set up my environment with 4 deployments:
hello: basic "hello world" service
auth: provides authentication and encryption
frontend: an nginx reverse proxy which represents a single point of entry from the outside and routes to the appropriate pods internally
nodehello: basic "hello world" service, written in Node.js (this is what I contributed)
For the hello, auth, and nodehello deployments I've set up one internal service each.
For the frontend deployment I've set up a load-balancer service which is exposed to the outside world. It uses a ConfigMap nginx-frontend-conf to redirect to the appropriate pods and has the following contents:
upstream hello {
    server hello.default.svc.cluster.local;
}

upstream auth {
    server auth.default.svc.cluster.local;
}

upstream nodehello {
    server nodehello.default.svc.cluster.local;
}

server {
    listen 443;
    ssl on;
    ssl_certificate /etc/tls/cert.pem;
    ssl_certificate_key /etc/tls/key.pem;

    location / {
        proxy_pass http://hello;
    }
    location /login {
        proxy_pass http://auth;
    }
    location /nodehello {
        proxy_pass http://nodehello;
    }
}
When calling the frontend endpoint using curl -k https://<frontend-external-ip> I get routed to an available hello pod, which is the expected behavior.
When calling https://<frontend-external-ip>/nodehello, however, I don't get routed to a nodehello pod, but instead to a hello pod again.
I suspect the upstream nodehello configuration to be the failing part. I'm not sure how service discovery works here, i.e. how the DNS name nodehello.default.svc.cluster.local would be exposed. I'd appreciate an explanation of how it works and what I did wrong.
YAML files used:
deployments/hello.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: hello
        track: stable
    spec:
      containers:
      - name: hello
        image: "udacity/example-hello:1.0.0"
        ports:
        - name: http
          containerPort: 80
        - name: health
          containerPort: 81
        resources:
          limits:
            cpu: 0.2
            memory: "10Mi"
        livenessProbe:
          httpGet:
            path: /healthz
            port: 81
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 15
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 81
            scheme: HTTP
          initialDelaySeconds: 5
          timeoutSeconds: 1
deployments/auth.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: auth
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: auth
        track: stable
    spec:
      containers:
      - name: auth
        image: "udacity/example-auth:1.0.0"
        ports:
        - name: http
          containerPort: 80
        - name: health
          containerPort: 81
        resources:
          limits:
            cpu: 0.2
            memory: "10Mi"
        livenessProbe:
          httpGet:
            path: /healthz
            port: 81
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 15
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 81
            scheme: HTTP
          initialDelaySeconds: 5
          timeoutSeconds: 1
deployments/frontend.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: frontend
        track: stable
    spec:
      containers:
      - name: nginx
        image: "nginx:1.9.14"
        lifecycle:
          preStop:
            exec:
              command: ["/usr/sbin/nginx","-s","quit"]
        volumeMounts:
        - name: "nginx-frontend-conf"
          mountPath: "/etc/nginx/conf.d"
        - name: "tls-certs"
          mountPath: "/etc/tls"
      volumes:
      - name: "tls-certs"
        secret:
          secretName: "tls-certs"
      - name: "nginx-frontend-conf"
        configMap:
          name: "nginx-frontend-conf"
          items:
          - key: "frontend.conf"
            path: "frontend.conf"
deployments/nodehello.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nodehello
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nodehello
        track: stable
    spec:
      containers:
      - name: nodehello
        image: "thezebra/nodehello:0.0.2"
        ports:
        - name: http
          containerPort: 80
        resources:
          limits:
            cpu: 0.2
            memory: "10Mi"
services/hello.yaml
kind: Service
apiVersion: v1
metadata:
  name: "hello"
spec:
  selector:
    app: "hello"
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: 80
services/auth.yaml
kind: Service
apiVersion: v1
metadata:
  name: "auth"
spec:
  selector:
    app: "auth"
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: 80
services/frontend.yaml
kind: Service
apiVersion: v1
metadata:
  name: "frontend"
spec:
  selector:
    app: "frontend"
  ports:
  - protocol: "TCP"
    port: 443
    targetPort: 443
  type: LoadBalancer
services/nodehello.yaml
kind: Service
apiVersion: v1
metadata:
  name: "nodehello"
spec:
  selector:
    app: "nodehello"
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: 80
This works perfectly :-)
$ curl -s http://frontend/
{"message":"Hello"}
$ curl -s http://frontend/login
authorization failed
$ curl -s http://frontend/nodehello
Hello World!
I suspect you might have updated the nginx-frontend-conf ConfigMap when you added /nodehello but have not restarted nginx. Pods won't pick up changed ConfigMaps automatically. Try:
kubectl delete pod -l app=frontend
Until versioned ConfigMaps happen there isn't a nicer solution.
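A workaround that is often used when the manifests are generated by a templating or CI step (my addition, not from the original answer) is to embed a hash of the ConfigMap contents in the pod template, so every config change produces a new template and triggers a rolling update:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: frontend
        track: stable
      annotations:
        # Filled in by the templating/CI step; any change to frontend.conf
        # changes this value and therefore rolls the frontend pods.
        checksum/nginx-frontend-conf: "<sha256 of frontend.conf>"
    spec:
      # ... container and volume spec unchanged from deployments/frontend.yaml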
