Nginx-Ingress for TCP entrypoint not working - logstash

I am using NGINX as the Kubernetes Ingress controller. After following this simple example I was able to get it working.
Now I am trying to set up a TCP entrypoint for Logstash with the following config.
Logstash:
apiVersion: v1
kind: Secret
metadata:
  name: logstash-secret
  namespace: kube-logging
type: Opaque
data:
  tls.crt: "<base64 encoded>" #For logstash.test.domain.com
  tls.key: "<base64 encoded>" #For logstash.test.domain.com
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-config
  namespace: kube-logging
  labels:
    app: logstash
data:
  syslog.conf: |-
    input {
      tcp {
        port => 5050
        type => syslog
      }
    }
    filter {
      grok {
        match => {"message" => "%{SYSLOGLINE}"}
      }
    }
    output {
      elasticsearch {
        hosts => ["http://elasticsearch:9200"] #elasticsearch running in same namespace (kube-logging)
        index => "syslog-%{+YYYY.MM.dd}"
      }
      stdout { codec => rubydebug }
    }
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: logstash
  namespace: kube-logging
  labels:
    app: logstash
spec:
  replicas: 1
  selector:
    matchLabels:
      app: logstash
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: logstash
    spec:
      #serviceAccountName: logstash
      containers:
      - name: logstash
        image: docker.elastic.co/logstash/logstash:7.2.1
        imagePullPolicy: Always
        env:
        - name: ELASTICSEARCH_HOST
          value: elasticsearch
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTICSEARCH_USERNAME
          value: elastic
        - name: ELASTICSEARCH_PASSWORD
          value: changeme
        - name: ELASTIC_CLOUD_ID
          value:
        - name: ELASTIC_CLOUD_AUTH
          value:
        ports:
        - name: logstash
          containerPort: 5050
          protocol: TCP
        securityContext:
          runAsUser: 0
        volumeMounts:
        - name: config
          mountPath: /usr/share/logstash/pipeline/syslog.conf
          readOnly: true
          subPath: syslog.conf
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: logstash-config
---
kind: Service
apiVersion: v1
metadata:
  name: logstash
  namespace: kube-logging
  labels:
    app: logstash
spec:
  ports:
  - name: tcp-port
    protocol: TCP
    port: 5050
    targetPort: 5050
  selector:
    app: logstash
Nginx-Ingress:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress
  namespace: kube-ingress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      serviceAccountName: nginx-ingress
      containers:
      - image: nginx/nginx-ingress:1.5.7
        imagePullPolicy: Always
        name: nginx-ingress
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        - name: tcp5050
          containerPort: 5050
        securityContext:
          allowPrivilegeEscalation: true
          runAsUser: 101 #nginx
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        args:
        - -nginx-configmaps=$(POD_NAMESPACE)/nginx-config
        - -default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret
        - -v=3 # Enables extensive logging. Useful for troubleshooting.
        #- -report-ingress-status
        #- -external-service=nginx-ingress
        #- -enable-leader-election
        #- -enable-prometheus-metrics
        #- -enable-custom-resources
LoadBalancer:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-external
  namespace: kube-ingress
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  - name: https
    protocol: TCP
    port: 443
    targetPort: 443
  - name: tcp5050
    protocol: TCP
    port: 5050
    targetPort: 5050
  selector:
    app: nginx-ingress
Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: logstash-ingress
  namespace: kube-logging
spec:
  tls:
  - hosts:
    - logstash.test.domain.com
    secretName: logstash-secret #This has self-signed cert for logstash.test.domain.com
  rules:
  - host: logstash.test.domain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: logstash
          servicePort: 5050
With this config the ingress shows the following:
NAME               HOSTS                      ADDRESS   PORTS     AGE
logstash-ingress   logstash.test.domain.com             80, 443   79m
Why is port 5050 not listed here?
I just want to expose the logstash service through a public endpoint. When I run
openssl s_client -connect logstash.kube-logging.svc.cluster.local:5050 from within the cluster I get
$ openssl s_client -connect logstash.kube-logging.svc.cluster.local:5050
CONNECTED(00000005)
But from outside the cluster, running openssl s_client -connect logstash.test.domain.com:5050 I get
$ openssl s_client -connect logstash.test.domain.com:5050
connect: Connection refused
connect:errno=61
and
$ openssl s_client -cert logstash_test_domain_com.crt -key logstash_test_domain_com.key -servername logstash.test.domain.com:5050
connect: Connection refused
connect:errno=61
What do I need to do to get this working?

It seems like there is some confusion here, so let's start by sorting out your Services and Ingress.
First, there are three main types of Service in Kubernetes. ClusterIP exposes your deployment internally within the cluster. NodePort is the same as ClusterIP but additionally exposes your deployment on every node's external IP at a port in the ~30000-32767 range. Finally, LoadBalancer builds on NodePort and additionally exposes your app on a specific external IP address assigned by the cloud provider's load balancer.
The NodePort service you created will make logstash accessible through every node's external IP on a port in the 30000-32767 range; find that port by running kubectl get services | grep nginx-ingress and checking the last column. To get your nodes' external IP addresses, run kubectl get node -o wide. The LoadBalancer service you created will make logstash accessible through an external IP address on port 5050; to find that IP, run kubectl get services | grep nginx-ingress-external. Finally, you have also created an Ingress resource to reach logstash: you have defined a host which, given TLS, will be reachable on port 443, and inbound traffic there will be forwarded to logstash's ClusterIP service on port 5050. So there you go, you have three ways to reach logstash. I would go for the LoadBalancer, given that it uses a specific, predictable port.
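For illustration only, here is a minimal sketch of how the type field selects between these behaviours. The name logstash-nodeport is purely hypothetical and not part of your manifests; only the type line (and the optional nodePort) differs between the three variants:
kind: Service
apiVersion: v1
metadata:
  name: logstash-nodeport   # illustrative name, not in the original manifests
  namespace: kube-logging
spec:
  type: NodePort            # ClusterIP (the default) / NodePort / LoadBalancer
  selector:
    app: logstash
  ports:
  - name: tcp-port
    protocol: TCP
    port: 5050
    targetPort: 5050
    # nodePort: 30050       # optional; if omitted, a port in ~30000-32767 is assigned
A ClusterIP is always allocated for the same selector; NodePort and LoadBalancer just layer additional entry points on top of it.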

Related

Shiny proxy on AKS behind an Azure Application Gateway

I've been using ShinyProxy on AKS for the past couple of months and it's been fantastic, no problems at all. However, the need for a more secure setup has arisen, and I have to run it behind an Azure Application Gateway (v2) with WAF and TLS certificates (on the AGW).
The deployments go through with no problems whatsoever, but upon trying to access the application I always get a "404 Not Found", and the health probes always return "no results". Has anyone been through this before?
Here is my Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{lvappname}}-proxy
  namespace: {{ns}}
  labels:
    app: {{lvappname}}
spec:
  selector:
    matchLabels:
      run: {{lvappname}}-proxy
  replicas: 1
  template:
    metadata:
      labels:
        run: {{lvappname}}-proxy
    spec:
      containers:
      - name: {{lvappname}}-proxy
        image: {{server}}/shiny-app/{{lvappname}}-shiny-proxy-application:{{TAG}}
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
      - name: kube-proxy-sidecar
        image: {{server}}/shiny-app/{{lvappname}}-kube-proxy-sidecar:{{TAG}}
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8001
And here is my Service and Ingress
kind: Service
apiVersion: v1
metadata:
  name: {{lvappname}}-proxy
  namespace: {{ns}}
  labels:
    app: {{lvappname}}
    tier: frontend
spec:
  selector:
    app: {{lvappname}}-proxy
    tier: frontend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{lvappname}}-proxy
  namespace: {{ns}}
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    cert-manager.io/cluster-issuer: letsencrypt-staging-application-gateway
    appgw.ingress.kubernetes.io/health-probe-status-codes: "200-399,404"
    appgw.ingress.kubernetes.io/health-probe-path: "/"
  labels:
    app: {{lvappname}}
spec:
  rules:
  - host: {{lvappname}}-{{lvstage}}.{{domain}}
    http:
      paths:
      - path: /
        backend:
          service:
            name: {{lvappname}}-proxy
            port:
              number: 8080
        pathType: Prefix
  tls:
  - hosts:
    - {{lvappname}}-{{lvstage}}.{{domain}}
    secretName: {{lvappname}}-{{lvstage}}.{{domain}}-secret-name
And here is my ShinyProxy configuration file:
proxy:
  port: 8080
  authentication: none
  landing-page: /app/{{lvappname}}
  hide-navbar: true
  container-backend: kubernetes
  kubernetes:
    namespace: {{ns}}
    image-pull-policy: IfNotPresent
    image-pull-secret: {{lvappname}}-secret
  specs:
  - id: {{lvappname}}
    display-name: {{lvappname}} application
    description: Application for {{lvappname}}
    container-cmd: ["R", "-e", "shiny::runApp('/app/Shiny')"]
    container-image: {{server}}/shiny-app/{{lvappname}}:{{TAG}}
server:
  servlet.session.timeout: 3600
spring:
  session:
    store-type: redis
  redis:
    host: redis-leader
Any help would be deeply appreciated.
Thank you all in advance.

Regex based routing setup for ingress in AKS

I have a K8s cluster in Azure, in which I want to host multiple web applications under a single host. Each application has its own service and deployment. How can I achieve something like the following routes?
MyApp.com
Partner1.MyApp.com
Partner2.MyApp.com
Here is what my yml file looks like currently:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp # the label for the pods and the deployments
    spec:
      containers:
      - name: myapp
        image: myimagename
        imagePullPolicy: Always
        ports:
        - containerPort: 6666 # the application listens to this port
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
  - protocol: TCP
    port: 6666
    targetPort: 6666
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: partner1-myapp
spec:
  selector:
    matchLabels:
      app: partner1-myapp
  template:
    metadata:
      labels:
        app: partner1-myapp # the label for the pods and the deployments
    spec:
      containers:
      - name: partner1-myapp
        image: myimagename
        imagePullPolicy: Always
        ports:
        - containerPort: 6669 # the application listens to this port
---
apiVersion: v1
kind: Service
metadata:
  name: partner1-myapp
spec:
  selector:
    app: partner1-myapp
  ports:
  - protocol: TCP
    port: 6669
    targetPort: 6669
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: partner2-myapp
spec:
  selector:
    matchLabels:
      app: partner2-myapp
  template:
    metadata:
      labels:
        app: partner2-myapp # the label for the pods and the deployments
    spec:
      containers:
      - name: partner2-myapp
        image: myimagename
        imagePullPolicy: Always
        ports:
        - containerPort: 6672 # the application listens to this port
---
apiVersion: v1
kind: Service
metadata:
  name: partner2-myapp
spec:
  selector:
    app: partner2-myapp
  ports:
  - protocol: TCP
    port: 6672
    targetPort: 6672
  type: ClusterIP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ing
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: 70m
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - http:
      paths:
      - path: /(.*)
        pathType: Prefix
        backend:
          service:
            name: partner1-myapp
            port:
              number: 6669
      - path: /(.*)
        pathType: Prefix
        backend:
          service:
            name: partner2-myapp
            port:
              number: 6672
      - path: /(.*)
        pathType: Prefix
        backend:
          service:
            name: myapp
            port:
              number: 6666
---
What can I do to get the above routing?
First, deploy your two partner applications using kubectl:
kubectl apply -f Partner1-MyApp.yaml --namespace ingress-basic
kubectl apply -f Partner2-MyApp.yaml --namespace ingress-basic
To set up the routing in your YAML: if both applications are running in the Kubernetes cluster, traffic can be routed to each one via EXTERNAL_IP/static. Create a file and add the YAML code below:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: partner-ingress-static
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /static/$2
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /static(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: partner1-myapp
            port:
              number: 6669
For more detail, please refer to this link:
Create an ingress controller in Azure Kubernetes Service (AKS)
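For the hostname-based layout asked about (MyApp.com, Partner1.MyApp.com, Partner2.MyApp.com), a hedged sketch of host rules could look like the following, assuming the three Services defined above and DNS records for the subdomains pointing at the ingress controller's external IP (the ingress name here is illustrative):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress-hosts   # illustrative name
spec:
  ingressClassName: nginx
  rules:
  - host: myapp.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp
            port:
              number: 6666
  - host: partner1.myapp.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: partner1-myapp
            port:
              number: 6669
  - host: partner2.myapp.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: partner2-myapp
            port:
              number: 6672
Each host block routes the whole subdomain to its own Service, so no regex rewrite is needed for that part.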

Access Pod By Internet From AKS Private Cluster

I have a fully private AKS cluster set up on a private VNET, which I access through an Azure Bastion to run kubectl commands. I have also set up a DevOps pipeline which uses a self-hosted agent to run commands on the private cluster. All my pods and ingresses seem to be running fine. However, when I try to access my ingress using a hostname (by mapping the public IP), I get a 404 Not Found. When verifying against my public cluster setup, I don't see any issues. Can someone please shed some light on why I cannot access my pod, which appears to be running fine?
Also, it seems I cannot access the external IP of the ingress even from the virtual machine that is on the virtual network. But I can run the kubectl commands and access the Kubernetes dashboard.
---
apiVersion: v1
kind: Service
metadata:
  namespace: app-auth
  labels:
    environment: staging
  name: app-auth-staging # The name of the app
spec:
  type: ClusterIP
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: app-auth-staging
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-auth-staging
  namespace: app-auth
  labels:
    app: app-auth-staging
    environment: staging # The environment being used
    app-role: api # The application type
    tier: backend # The tier that this app represents
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-auth-staging
  template:
    metadata:
      labels:
        app: app-auth-staging
        environment: staging
        app-role: api
        tier: backend
      annotations:
        build: _{Tag}_
    spec:
      containers:
      - name: auth
        image: auth.azurecr.io/auth:_{Tag}_ # Note: Do not modify this field.
        imagePullPolicy: Always
        env:
        - name: ConnectionStrings__ZigzyAuth # Note: The appsettings value being replaced
          valueFrom:
            secretKeyRef:
              name: connectionstrings
              key: _{ConnectionString}_ # Note: This is an environmental variable, it is replaced accordingly in DevOps
        ports:
        - containerPort: 80
        readinessProbe:
          tcpSocket:
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10
        livenessProbe:
          tcpSocket:
            port: 80
          initialDelaySeconds: 15
          periodSeconds: 20
        volumeMounts:
        - name: secrets-store-inline
          mountPath: "/mnt/secrets-store"
          readOnly: true
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: agentpool
                operator: In
                values:
                - general
      volumes:
      - name: secrets-store-inline
        csi:
          driver: secrets-store.csi.k8s.io
          readOnly: true
          volumeAttributes:
            secretProviderClass: "aks-provider"
          nodePublishSecretRef:
            name: aks-prod-credstore
      imagePullSecrets:
      - name: aks-prod-acrps
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-auth-staging-ingress-main # The name of the ingress, ex: app-auth-ingress-main
  namespace: app-auth
  labels:
    environment: staging
  annotations:
    kubernetes.io/ingress.class: nginx
    # nginx.ingress.kubernetes.io/enable-cors: "true"
    # nginx.ingress.kubernetes.io/cors-allow-methods: "GET, POST, OPTIONS"
    # nginx.ingress.kubernetes.io/cors-allow-origin: "https://admin.example.com"
    # nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
spec:
  tls:
  - hosts:
    - stagingauth.app.com # Modify
    - frontend.21.72.207.63.nip.io
    - aksstagingauth.app.com
    secretName: zigzypfxtls
  rules:
  - host: stagingauth.app.com
    http:
      paths:
      - backend:
          serviceName: zigzy-auth-staging # Modify
          servicePort: 80
        path: /
  - host: frontend.21.72.207.63.nip.io
    http:
      paths:
      - backend:
          serviceName: app-auth-staging # Modify
          servicePort: 80
        path: /
  - host: aksstagingauth.app.com
    http:
      paths:
      - backend:
          serviceName: app-auth-staging # Modify
          servicePort: 80
        path: /

Why am I getting a SQSError: 404 when trying to connect using Boto to an ElasticMQ service that is behind Ambassador?

I have an elasticmq docker container which is deployed as a service using Kubernetes. Furthermore, this service is exposed to external users by way of Ambassador.
Here is the yaml file for the service.
---
kind: Service
apiVersion: v1
metadata:
  name: elasticmq
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v1
      kind: Mapping
      name: elasticmq
      prefix: /
      host: elasticmq.localhost.com
      service: elasticmq:9324
spec:
  selector:
    app: elasticmq
  ports:
  - port: 9324
    protocol: TCP
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: elasticmq
  labels:
    app: elasticmq
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elasticmq
  template:
    metadata:
      labels:
        app: elasticmq
    spec:
      containers:
      - name: elasticmq
        image: elasticmq
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9324
        livenessProbe:
          exec:
            command:
            - /bin/bash
            - -c
            - nc -zv localhost 9324 -w 1
          initialDelaySeconds: 60
          periodSeconds: 5
        volumeMounts:
        - name: elastimq-conf-volume
          mountPath: /elasticmq/elasticmq.conf
      volumes:
      - name: elastimq-conf-volume
        hostPath:
          path: /path/elasticmq.conf
Now I can check that the elasticmq container is healthy and that Ambassador works by doing a curl:
$ curl elasticmq.localhost.com?Action=ListQueues&Version=2012-11-05
[1] 10355
<ListQueuesResponse xmlns="http://queue.amazonaws.com/doc/2012-11-05/">
<ListQueuesResult>
</ListQueuesResult>
<ResponseMetadata>
<RequestId>00000000-0000-0000-0000-000000000000</RequestId>
</ResponseMetadata>
</ListQueuesResponse>[1]+ Done
On the other hand, when I try to do the same thing using boto, I get a SQSError: 404 not found.
This is my Python script:
import boto.sqs.connection

conn = boto.sqs.connection
# Connect via the Ambassador hostname as a proxy on port 80
c = conn.SQSConnection(proxy_port=80, proxy='elasticmq.localhost.com', is_secure=False,
                       aws_access_key_id='x', aws_secret_access_key='x')
c.get_all_queues('')
I thought it had to do with the outside host specified in elasticmq.conf, so I changed that to this:
include classpath("application.conf")

// What is the outside visible address of this ElasticMQ node (used by rest-sqs)
node-address {
  protocol = http
  host = elasticmq.localhost.com
  port = 80
  context-path = ""
}

rest-sqs {
  enabled = true
  bind-port = 9324
  bind-hostname = "0.0.0.0"
  // Possible values: relaxed, strict
  sqs-limits = relaxed
}
I thought changing the elasticmq conf would work, but it doesn't. How can I get this to work?

loadbalancer service won't redirect to desired pod

I'm playing around with kubernetes and I've set up my environment with 4 deployments:
hello: basic "hello world" service
auth: provides authentication and encryption
frontend: an nginx reverse proxy which represents a single-point-of-entry from the outside and routes to the accurate pods internally
nodehello: basic "hello world" service, written in nodejs (this is what I contributed)
For the hello, auth and nodehello deployments I've set up one internal service each.
For the frontend deployment I've set up a load-balancer service which is exposed to the outside world. It uses a config map nginx-frontend-conf to redirect to the appropriate pods and has the following contents:
upstream hello {
    server hello.default.svc.cluster.local;
}
upstream auth {
    server auth.default.svc.cluster.local;
}
upstream nodehello {
    server nodehello.default.svc.cluster.local;
}
server {
    listen 443;
    ssl on;
    ssl_certificate /etc/tls/cert.pem;
    ssl_certificate_key /etc/tls/key.pem;
    location / {
        proxy_pass http://hello;
    }
    location /login {
        proxy_pass http://auth;
    }
    location /nodehello {
        proxy_pass http://nodehello;
    }
}
When calling the frontend endpoint using curl -k https://<frontend-external-ip> I get routed to an available hello pod which is the expected behavior.
When calling https://<frontend-external-ip>/nodehello, however, I don't get routed to a nodehello pod, but instead to a hello pod again.
I suspect the upstream nodehello configuration to be the failing part. I'm not sure how service discovery works here, i.e. how the DNS name nodehello.default.svc.cluster.local would be exposed. I'd appreciate an explanation of how it works and what I did wrong.
YAML files used
deployments/hello.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: hello
        track: stable
    spec:
      containers:
      - name: hello
        image: "udacity/example-hello:1.0.0"
        ports:
        - name: http
          containerPort: 80
        - name: health
          containerPort: 81
        resources:
          limits:
            cpu: 0.2
            memory: "10Mi"
        livenessProbe:
          httpGet:
            path: /healthz
            port: 81
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 15
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 81
            scheme: HTTP
          initialDelaySeconds: 5
          timeoutSeconds: 1
deployments/auth.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: auth
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: auth
        track: stable
    spec:
      containers:
      - name: auth
        image: "udacity/example-auth:1.0.0"
        ports:
        - name: http
          containerPort: 80
        - name: health
          containerPort: 81
        resources:
          limits:
            cpu: 0.2
            memory: "10Mi"
        livenessProbe:
          httpGet:
            path: /healthz
            port: 81
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 15
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 81
            scheme: HTTP
          initialDelaySeconds: 5
          timeoutSeconds: 1
deployments/frontend.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: frontend
        track: stable
    spec:
      containers:
      - name: nginx
        image: "nginx:1.9.14"
        lifecycle:
          preStop:
            exec:
              command: ["/usr/sbin/nginx","-s","quit"]
        volumeMounts:
        - name: "nginx-frontend-conf"
          mountPath: "/etc/nginx/conf.d"
        - name: "tls-certs"
          mountPath: "/etc/tls"
      volumes:
      - name: "tls-certs"
        secret:
          secretName: "tls-certs"
      - name: "nginx-frontend-conf"
        configMap:
          name: "nginx-frontend-conf"
          items:
          - key: "frontend.conf"
            path: "frontend.conf"
deployments/nodehello.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nodehello
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nodehello
        track: stable
    spec:
      containers:
      - name: nodehello
        image: "thezebra/nodehello:0.0.2"
        ports:
        - name: http
          containerPort: 80
        resources:
          limits:
            cpu: 0.2
            memory: "10Mi"
services/hello.yaml
kind: Service
apiVersion: v1
metadata:
  name: "hello"
spec:
  selector:
    app: "hello"
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: 80
services/auth.yaml
kind: Service
apiVersion: v1
metadata:
  name: "auth"
spec:
  selector:
    app: "auth"
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: 80
services/frontend.yaml
kind: Service
apiVersion: v1
metadata:
  name: "frontend"
spec:
  selector:
    app: "frontend"
  ports:
  - protocol: "TCP"
    port: 443
    targetPort: 443
  type: LoadBalancer
services/nodehello.yaml
kind: Service
apiVersion: v1
metadata:
  name: "nodehello"
spec:
  selector:
    app: "nodehello"
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: 80
This works perfectly :-)
$ curl -s http://frontend/
{"message":"Hello"}
$ curl -s http://frontend/login
authorization failed
$ curl -s http://frontend/nodehello
Hello World!
I suspect you might have updated nginx-frontend-conf when you added /nodehello but have not restarted nginx. Pods won't pick up changed ConfigMaps automatically. Try:
kubectl delete pod -l app=frontend
Until versioned ConfigMaps happen there isn't a nicer solution.
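One common workaround, sketched here as an assumption rather than something from the original answer, is to stamp the pod template with a checksum of the config, so that re-applying the Deployment after editing nginx-frontend-conf rolls the frontend pods:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: frontend
        track: stable
      annotations:
        # Hypothetical annotation: recompute this value (e.g. sha256sum of frontend.conf)
        # whenever the ConfigMap content changes, so the pod template differs and
        # kubectl apply triggers a rolling restart.
        config-checksum: "<sha256 of frontend.conf>"
    spec:
      containers:
      - name: nginx
        image: "nginx:1.9.14"
This still redeploys the pods; it just makes the restart happen automatically whenever the config and Deployment are applied together.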
