Nginx Ingress Controller error in Node.js application with prefixed routes

I'm using Ingress Nginx Controller in my Kubernetes Cluster hosted on Google Kubernetes Engine. The application is a Node.js app.
When I integrated my app with Rollbar (a logging service) I started to notice the same error repeating every ~15 seconds (154K times in one week).
Error: Cannot GET /
Screenshot:
I think the reason is that my Node.js application uses the /v1 prefix in its routes, i.e. the / route doesn't exist.
PS: Rollbar is wired up in the Development (local), Testing (Heroku), and Production (GKE) environments, and the error only occurs in Production.
My ingress file:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-app
spec:
  rules:
  - host: api.company.com
    http:
      paths:
      - path: /v1
        backend:
          serviceName: company-prod-v1-service
          servicePort: 3000
The Ingress documentation says something about the / endpoint, but I don't understand it very well.
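If that is what the documentation means (a default or catch-all backend for requests that match no rule), a minimal sketch of it against my own manifest would be an extra / path reusing the existing service; purely illustrative, since whether the app should answer / at all is exactly what I'm unsure about:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-app
spec:
  rules:
  - host: api.company.com
    http:
      paths:
      - path: /v1
        backend:
          serviceName: company-prod-v1-service
          servicePort: 3000
      - path: /   # catch-all: requests to "/" would now reach the app instead of the default backend
        backend:
          serviceName: company-prod-v1-service
          servicePort: 3000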
I need to get rid of this error. Can any Jedi masters help me fix it?
Thanks in advance
[UPDATE]
Deployment File
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deploy-v1
  labels:
    app: app-v1
    version: 1.10.0-2
spec:
  selector:
    matchLabels:
      app: app-v1
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  minReadySeconds: 30
  progressDeadlineSeconds: 60
  template:
    metadata:
      labels:
        app: app-v1
        version: 1.10.0-2
    spec:
      serviceAccountName: gke-account
      containers:
      - name: app-container
        image: registry.gitlab.com/company/app-back:1.10.0
        lifecycle:
          postStart:
            exec:
              command:
              - "/bin/sh"
              - "-c"
              - >
                if [ -f "$SECRETS_FOLDER/$APPLE_NOTIFICATION_KEY" ]; then
                  cp $SECRETS_FOLDER/$APPLE_NOTIFICATION_KEY $APP_FOLDER;
                fi;
      - name: cloud-sql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.17
        command:
        - "/cloud_sql_proxy"
        - "-instances=thermal-petal-283313:us-east1:app-instance=tcp:5432"
        securityContext:
          runAsNonRoot: true
        lifecycle:
          preStop:
            exec:
              command: ["/bin/sleep", "300"]
      imagePullSecrets:
      - name: registry-credentials
Ingress controller Pod file (auto-generated by the installation command):
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2021-06-16T03:08:53Z"
  generateName: ingress-nginx-controller
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  name: ingress-nginx-controller
  namespace: ingress-nginx
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: ingress-nginx-controller
  resourceVersion: "112544950"
  selfLink: /api/v1/namespaces/ingress-nginx/pods/ingress-nginx-controller
spec:
  containers:
  - image: k8s.gcr.io/ingress-nginx/controller:v0.46.0
    imagePullPolicy: IfNotPresent
    lifecycle:
      preStop:
        exec:
          command:
          - /wait-shutdown
    livenessProbe:
      failureThreshold: 5
      httpGet:
        path: /healthz
        port: 10254
        scheme: HTTP
      initialDelaySeconds: 10
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    name: controller
    ports:
    - containerPort: 80
      name: http
      protocol: TCP
    - containerPort: 443
      name: https
      protocol: TCP
    - containerPort: 8443
      name: webhook
      protocol: TCP
    readinessProbe:
      failureThreshold: 3
      httpGet:
        path: /healthz
        port: 10254
        scheme: HTTP
      initialDelaySeconds: 10
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1

Related

Shiny proxy on AKS behind an Azure Application Gateway

I've been using ShinyProxy on AKS for the past couple of months and it's been fantastic, no problems at all. However, the need for a more secure setup has arisen, and I have to run it behind an Azure Application Gateway (v2) with WAF and TLS certificates (on the AGW).
The deployments happen with no problems whatsoever, but upon trying to access the application I always get a "404 Not Found", and the health probes always return "no Results". Has anyone been through this before?
Here is my Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{lvappname}}-proxy
  namespace: {{ns}}
  labels:
    app: {{lvappname}}
spec:
  selector:
    matchLabels:
      run: {{lvappname}}-proxy
  replicas: 1
  template:
    metadata:
      labels:
        run: {{lvappname}}-proxy
    spec:
      containers:
      - name: {{lvappname}}-proxy
        image: {{server}}/shiny-app/{{lvappname}}-shiny-proxy-application:{{TAG}}
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
      - name: kube-proxy-sidecar
        image: {{server}}/shiny-app/{{lvappname}}-kube-proxy-sidecar:{{TAG}}
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8001
And here is my Service and Ingress
kind: Service
apiVersion: v1
metadata:
  name: {{lvappname}}-proxy
  namespace: {{ns}}
  labels:
    app: {{lvappname}}
    tier: frontend
spec:
  selector:
    app: {{lvappname}}-proxy
    tier: frontend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{lvappname}}-proxy
  namespace: {{ns}}
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    cert-manager.io/cluster-issuer: letsencrypt-staging-application-gateway
    appgw.ingress.kubernetes.io/health-probe-status-codes: "200-399,404"
    appgw.ingress.kubernetes.io/health-probe-path: "/"
  labels:
    app: {{lvappname}}
spec:
  rules:
  - host: {{lvappname}}-{{lvstage}}.{{domain}}
    http:
      paths:
      - path: /
        backend:
          service:
            name: {{lvappname}}-proxy
            port:
              number: 8080
        pathType: Prefix
  tls:
  - hosts:
    - {{lvappname}}-{{lvstage}}.{{domain}}
    secretName: {{lvappname}}-{{lvstage}}.{{domain}}-secret-name
And here is my ShinyProxy configuration file:
proxy:
  port: 8080
  authentication: none
  landing-page: /app/{{lvappname}}
  hide-navbar: true
  container-backend: kubernetes
  kubernetes:
    namespace: {{ns}}
    image-pull-policy: IfNotPresent
    image-pull-secret: {{lvappname}}-secret
  specs:
  - id: {{lvappname}}
    display-name: {{lvappname}} application
    description: Application for {{lvappname}}
    container-cmd: ["R", "-e", "shiny::runApp('/app/Shiny')"]
    container-image: {{server}}/shiny-app/{{lvappname}}:{{TAG}}
server:
  servlet.session.timeout: 3600
spring:
  session:
    store-type: redis
  redis:
    host: redis-leader
Any help would be deeply appreciated.
Thank you all in advance.
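One thing that stands out when reading the manifests above (an observation, not a verified fix): the Deployment labels its pods with run: {{lvappname}}-proxy, while the Service selects on app: {{lvappname}}-proxy and tier: frontend, so the Service may end up with no endpoints, which would explain both the 404 and the empty health probes. A sketch of a selector aligned with the pod labels:
# Sketch only: Service selector matching the labels actually set on the pods above.
kind: Service
apiVersion: v1
metadata:
  name: {{lvappname}}-proxy
  namespace: {{ns}}
spec:
  selector:
    run: {{lvappname}}-proxy   # matches template.metadata.labels in the Deployment
  ports:
  - protocol: TCP
    port: 80          # the port the Ingress backend should reference
    targetPort: 8080  # the port ShinyProxy listens on inside the pod
Note also that the Ingress backend references port number 8080, while the Service exposes port 80 (targetPort 8080); the Ingress should point at the Service port.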

Regex based routing setup for ingress in AKS

I have a K8s cluster in Azure on which I want to host multiple web applications under a single host. Each application has its own Service and Deployment. How can I achieve something like the following routes?
MyApp.com
Partner1.MyApp.com
Partner2.MyApp.com
Here is what my YAML file currently looks like:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp # the label for the pods and the deployments
    spec:
      containers:
      - name: myapp
        image: myimagename
        imagePullPolicy: Always
        ports:
        - containerPort: 6666 # the application listens to this port
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
  - protocol: TCP
    port: 6666
    targetPort: 6666
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: partner1-myapp
spec:
  selector:
    matchLabels:
      app: partner1-myapp
  template:
    metadata:
      labels:
        app: partner1-myapp # the label for the pods and the deployments
    spec:
      containers:
      - name: partner1-myapp
        image: myimagename
        imagePullPolicy: Always
        ports:
        - containerPort: 6669 # the application listens to this port
---
apiVersion: v1
kind: Service
metadata:
  name: partner1-myapp
spec:
  selector:
    app: partner1-myapp
  ports:
  - protocol: TCP
    port: 6669
    targetPort: 6669
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: partner2-myapp
spec:
  selector:
    matchLabels:
      app: partner2-myapp
  template:
    metadata:
      labels:
        app: partner2-myapp # the label for the pods and the deployments
    spec:
      containers:
      - name: partner2-myapp
        image: myimagename
        imagePullPolicy: Always
        ports:
        - containerPort: 6672 # the application listens to this port
---
apiVersion: v1
kind: Service
metadata:
  name: partner2-myapp
spec:
  selector:
    app: partner2-myapp
  ports:
  - protocol: TCP
    port: 6672
    targetPort: 6672
  type: ClusterIP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ing
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: 70m
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - http:
      paths:
      - path: /(.*)
        pathType: Prefix
        backend:
          service:
            name: partner1-myapp
            port:
              number: 6669
      - path: /(.*)
        pathType: Prefix
        backend:
          service:
            name: partner2-myapp
            port:
              number: 6672
      - path: /(.*)
        pathType: Prefix
        backend:
          service:
            name: myapp
            port:
              number: 6666
---
What can I do to get the above routing?
Please run your two applications using kubectl
kubectl apply -f Partner1-MyApp.yaml --namespace ingress-basic
kubectl apply -f Partner2-MyApp.yaml --namespace ingress-basic
To set up the routing in your YAML file: when both applications are running in the Kubernetes cluster, traffic is routed to each application via EXTERNAL_IP/static. Create a file and add the YAML code below:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: partner-ingress-static
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /static/$2
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /static(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: partner1-myapp
            port:
              number: 6669
For more detail, please refer to this link:
Create an ingress controller in Azure Kubernetes Service (AKS)
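Since the question also asks for subdomain-style routing (MyApp.com, Partner1.MyApp.com, Partner2.MyApp.com), a hedged sketch of host-based rules for the same three Services could look like the following; it assumes DNS for these hostnames points at the NGINX ingress controller's public IP:
# Sketch: host-based routing instead of three identical regex paths (identical paths cannot route to different services).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ing-hosts
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  ingressClassName: nginx
  rules:
  - host: myapp.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp
            port:
              number: 6666
  - host: partner1.myapp.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: partner1-myapp
            port:
              number: 6669
  - host: partner2.myapp.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: partner2-myapp
            port:
              number: 6672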

Access Pod By Internet From AKS Private Cluster

I have a fully private AKS cluster set up on a private VNet, which I access through Azure Bastion to run kubectl commands. I have also set up a DevOps pipeline which uses a self-hosted agent to run commands against the private cluster. All my pods and ingresses seem to be running fine. However, when I try to access my ingress using a hostname (by mapping the public IP), I get a 404 Not Found. When verifying against my public cluster setup, I don't see any issues. Can someone please shed some light on why I cannot access my pod, which appears to be running fine?
Also, it seems I cannot access the external IP of the ingress even from a virtual machine on the same virtual network, although I can run kubectl commands and access the Kubernetes dashboard.
---
apiVersion: v1
kind: Service
metadata:
  namespace: app-auth
  labels:
    environment: staging
  name: app-auth-staging # The name of the app
spec:
  type: ClusterIP
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: app-auth-staging
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-auth-staging
  namespace: app-auth
  labels:
    app: app-auth-staging
    environment: staging # The environment being used
    app-role: api # The application type
    tier: backend # The tier that this app represents
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-auth-staging
  template:
    metadata:
      labels:
        app: app-auth-staging
        environment: staging
        app-role: api
        tier: backend
      annotations:
        build: _{Tag}_
    spec:
      containers:
      - name: auth
        image: auth.azurecr.io/auth:_{Tag}_ # Note: Do not modify this field.
        imagePullPolicy: Always
        env:
        - name: ConnectionStrings__ZigzyAuth # Note: The appsettings value being replaced
          valueFrom:
            secretKeyRef:
              name: connectionstrings
              key: _{ConnectionString}_ # Note: This is an environmental variable, it is replaced accordingly in DevOps
        ports:
        - containerPort: 80
        readinessProbe:
          tcpSocket:
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10
        livenessProbe:
          tcpSocket:
            port: 80
          initialDelaySeconds: 15
          periodSeconds: 20
        volumeMounts:
        - name: secrets-store-inline
          mountPath: "/mnt/secrets-store"
          readOnly: true
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: agentpool
                operator: In
                values:
                - general
      volumes:
      - name: secrets-store-inline
        csi:
          driver: secrets-store.csi.k8s.io
          readOnly: true
          volumeAttributes:
            secretProviderClass: "aks-provider"
          nodePublishSecretRef:
            name: aks-prod-credstore
      imagePullSecrets:
      - name: aks-prod-acrps
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-auth-staging-ingress-main # The name of the ingress, ex: app-auth-ingress-main
  namespace: app-auth
  labels:
    environment: staging
  annotations:
    kubernetes.io/ingress.class: nginx
    # nginx.ingress.kubernetes.io/enable-cors: "true"
    # nginx.ingress.kubernetes.io/cors-allow-methods: "GET, POST, OPTIONS"
    # nginx.ingress.kubernetes.io/cors-allow-origin: "https://admin.example.com"
    # nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
spec:
  tls:
  - hosts:
    - stagingauth.app.com # Modify
    - frontend.21.72.207.63.nip.io
    - aksstagingauth.app.com
    secretName: zigzypfxtls
  rules:
  - host: stagingauth.app.com
    http:
      paths:
      - backend:
          serviceName: zigzy-auth-staging # Modify
          servicePort: 80
        path: /
  - host: frontend.21.72.207.63.nip.io
    http:
      paths:
      - backend:
          serviceName: app-auth-staging # Modify
          servicePort: 80
        path: /
  - host: aksstagingauth.app.com
    http:
      paths:
      - backend:
          serviceName: app-auth-staging # Modify
          servicePort: 80
        path: /
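One detail worth double-checking in the manifests above (an observation, not a verified fix): the first rule's backend references serviceName: zigzy-auth-staging, while the Service defined earlier is named app-auth-staging, so that host would have no matching backend. A sketch with the name aligned:
# Sketch only: first host rule pointing at the Service actually defined above.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-auth-staging-ingress-main
  namespace: app-auth
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: stagingauth.app.com
    http:
      paths:
      - path: /
        backend:
          serviceName: app-auth-staging   # was zigzy-auth-staging, which does not match the Service above
          servicePort: 80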

Azure Kubernetes Service - Http Routing Giving 502

We are trying to host our API in AKS but we are hitting the same issue no matter what ingress option we use. We are running the latest version of Kubernetes (1.11.2) on AKS with HTTP application routing configured. All the services and pods are healthy according to the dashboard, and the DNS zone /healthz is returning 200, so that's working.
All of the API services are built with the latest version of .NET Core, with the / route configured to return status code 200.
Here's the services & deployments:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: accounts-api
spec:
  replicas: 3
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: accounts-api
    spec:
      containers:
      - name: accounts-api
        # image: mycompany.azurecr.io/accounts.api:#{Build.BuildId}#
        image: mycompany.azurecr.io/accounts.api:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: programs-api
spec:
  replicas: 3
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: programs-api
    spec:
      containers:
      - name: programs-api
        # image: mycompany.azurecr.io/programs.api:#{Build.BuildId}#
        image: mycompany.azurecr.io/programs.api:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: teams-api
spec:
  replicas: 3
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: teams-api
    spec:
      containers:
      - name: teams-api
        # image: mycompany.azurecr.io/teams.api:#{Build.BuildId}#
        image: mycompany.azurecr.io/teams.api:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: payments-api
spec:
  replicas: 3
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: payments-api
    spec:
      containers:
      - name: payments-api
        # image: mycompany.azurecr.io/payments.api:#{Build.BuildId}#
        image: mycompany.azurecr.io/payments.api:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: accounts-api-service
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: accounts-api
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: programs-api-service
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: programs-api
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: teams-api-service
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: teams-api
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: payments-api-service
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: payments-api
  type: ClusterIP
---
First, we tried path-based fanout, like so:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-api-ingress
  annotations:
    kubernetes.io/ingress.class: addon-http-application-routing
spec:
  rules:
  - host: mycompany-api.d6b1cf1ede294842b0ed.westeurope.aksapp.io
    http:
      paths:
      - path: /accounts-api
        backend:
          serviceName: accounts-api-service
          servicePort: 80
      - path: /programs-api
        backend:
          serviceName: programs-api-service
          servicePort: 80
      - path: /teams-api
        backend:
          serviceName: teams-api-service
          servicePort: 80
      - path: /workouts-api
        backend:
          serviceName: payments-api-service
          servicePort: 80
---
But we were hitting a 502 Bad Gateway for each path. We then tried aggregating the ingresses and assigning a host per service:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-api-ingress
  annotations:
    kubernetes.io/ingress.class: addon-http-application-routing
spec:
  rules:
  - host: accounts-api.d6b1cf1ede294842b0ed.westeurope.aksapp.io
    http:
      paths:
      - path: /
        backend:
          serviceName: accounts-api-service
          servicePort: 80
  - host: programs-api.d6b1cf1ede294842b0ed.westeurope.aksapp.io
    http:
      paths:
      - path: /
        backend:
          serviceName: programs-api-service
          servicePort: 80
  - host: teams-api.d6b1cf1ede294842b0ed.westeurope.aksapp.io
    http:
      paths:
      - path: /
        backend:
          serviceName: teams-api-service
          servicePort: 80
  - host: payments-api.d6b1cf1ede494842b0ed.westeurope.aksapp.io
    http:
      paths:
      - path: /
        backend:
          serviceName: payments-api-service
          servicePort: 80
---
The Azure DNS zone is getting the correct TXT and A records for each of the services, but we are still hitting the 502.
From what we can tell from Googling, it seems as though the wiring of the Ingress to the Services is off, but as far as we can see our deploy script looks OK. Ideally we would like to use the path-based fanout option, so what could the issue be? Base path configuration?
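One thing to be aware of with the path-based fanout (a sketch under the assumption that the addon's controller is NGINX-based and honors the standard rewrite annotation): requests keep their /accounts-api prefix when forwarded, so unless each API is configured with a matching base path, a rewrite is needed. Whether that alone explains a 502 rather than a 404 is uncertain; a 502 usually means the controller could not get a response from the backend at all, so the Service targetPort versus the port the container actually listens on is also worth checking.
# Sketch: path-based fanout with a rewrite so /accounts-api/foo reaches the API as /foo.
# Assumes an older nginx-based controller; newer versions need capture-group syntax
# such as path: /accounts-api(/|$)(.*) with rewrite-target: /$2.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-api-ingress
  annotations:
    kubernetes.io/ingress.class: addon-http-application-routing
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: mycompany-api.d6b1cf1ede294842b0ed.westeurope.aksapp.io
    http:
      paths:
      - path: /accounts-api
        backend:
          serviceName: accounts-api-service
          servicePort: 80
      - path: /programs-api
        backend:
          serviceName: programs-api-service
          servicePort: 80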

loadbalancer service won't redirect to desired pod

I'm playing around with kubernetes and I've set up my environment with 4 deployments:
hello: basic "hello world" service
auth: provides authentication and encryption
frontend: an nginx reverse proxy which represents a single-point-of-entry from the outside and routes to the accurate pods internally
nodehello: basic "hello world" service, written in nodejs (this is what I contributed)
For the hello, auth, and nodehello deployments I've set up one internal service each.
For the frontend deployment I've set up a load-balancer service which would be exposed to the outside world. It uses a config map nginx-frontend-conf to redirect to the appropriate pods and has the following contents:
upstream hello {
    server hello.default.svc.cluster.local;
}
upstream auth {
    server auth.default.svc.cluster.local;
}
upstream nodehello {
    server nodehello.default.svc.cluster.local;
}
server {
    listen 443;
    ssl on;
    ssl_certificate /etc/tls/cert.pem;
    ssl_certificate_key /etc/tls/key.pem;
    location / {
        proxy_pass http://hello;
    }
    location /login {
        proxy_pass http://auth;
    }
    location /nodehello {
        proxy_pass http://nodehello;
    }
}
When calling the frontend endpoint using curl -k https://<frontend-external-ip> I get routed to an available hello pod which is the expected behavior.
When calling https://<frontend-external-ip>/nodehello, however, I don't get routed to a nodehello pod but instead to a hello pod again.
I suspect the upstream nodehello configuration is the failing part. I'm not sure how service discovery works here, i.e. how the DNS name nodehello.default.svc.cluster.local would be exposed. I'd appreciate an explanation of how it works and what I did wrong.
YAML files used:
deployments/hello.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: hello
        track: stable
    spec:
      containers:
      - name: hello
        image: "udacity/example-hello:1.0.0"
        ports:
        - name: http
          containerPort: 80
        - name: health
          containerPort: 81
        resources:
          limits:
            cpu: 0.2
            memory: "10Mi"
        livenessProbe:
          httpGet:
            path: /healthz
            port: 81
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 15
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 81
            scheme: HTTP
          initialDelaySeconds: 5
          timeoutSeconds: 1
deployments/auth.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: auth
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: auth
        track: stable
    spec:
      containers:
      - name: auth
        image: "udacity/example-auth:1.0.0"
        ports:
        - name: http
          containerPort: 80
        - name: health
          containerPort: 81
        resources:
          limits:
            cpu: 0.2
            memory: "10Mi"
        livenessProbe:
          httpGet:
            path: /healthz
            port: 81
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 15
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 81
            scheme: HTTP
          initialDelaySeconds: 5
          timeoutSeconds: 1
deployments/frontend.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: frontend
        track: stable
    spec:
      containers:
      - name: nginx
        image: "nginx:1.9.14"
        lifecycle:
          preStop:
            exec:
              command: ["/usr/sbin/nginx","-s","quit"]
        volumeMounts:
        - name: "nginx-frontend-conf"
          mountPath: "/etc/nginx/conf.d"
        - name: "tls-certs"
          mountPath: "/etc/tls"
      volumes:
      - name: "tls-certs"
        secret:
          secretName: "tls-certs"
      - name: "nginx-frontend-conf"
        configMap:
          name: "nginx-frontend-conf"
          items:
          - key: "frontend.conf"
            path: "frontend.conf"
deployments/nodehello.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nodehello
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nodehello
        track: stable
    spec:
      containers:
      - name: nodehello
        image: "thezebra/nodehello:0.0.2"
        ports:
        - name: http
          containerPort: 80
        resources:
          limits:
            cpu: 0.2
            memory: "10Mi"
services/hello.yaml
kind: Service
apiVersion: v1
metadata:
  name: "hello"
spec:
  selector:
    app: "hello"
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: 80
services/auth.yaml
kind: Service
apiVersion: v1
metadata:
  name: "auth"
spec:
  selector:
    app: "auth"
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: 80
services/frontend.yaml
kind: Service
apiVersion: v1
metadata:
  name: "frontend"
spec:
  selector:
    app: "frontend"
  ports:
  - protocol: "TCP"
    port: 443
    targetPort: 443
  type: LoadBalancer
services/nodehello.yaml
kind: Service
apiVersion: v1
metadata:
  name: "nodehello"
spec:
  selector:
    app: "nodehello"
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: 80
This works perfectly :-)
$ curl -s http://frontend/
{"message":"Hello"}
$ curl -s http://frontend/login
authorization failed
$ curl -s http://frontend/nodehello
Hello World!
I suspect you might have updated the nginx-frontend-conf ConfigMap when you added /nodehello but not restarted nginx. Pods won't pick up changed ConfigMaps automatically. Try:
kubectl delete pod -l app=frontend
Until versioned ConfigMaps happen there isn't a nicer solution.
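A common workaround (not part of this answer's original suggestion, just a widely used pattern) is to keep a version marker or checksum of the ConfigMap in the pod template annotations, so that changing the config also changes the template and triggers a rolling update:
# Sketch: bump this annotation whenever frontend.conf changes so the Deployment rolls out new pods.
# The value is illustrative; tooling such as Helm usually fills in a real checksum.
# (Volumes and volumeMounts omitted; same as the frontend Deployment above.)
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: frontend
        track: stable
      annotations:
        checksum/nginx-frontend-conf: "manually-bumped-v2"
    spec:
      containers:
      - name: nginx
        image: "nginx:1.9.14"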
