Kubernetes node is not accessible on ports 80 and 443 - Linux

I deployed a bunch of services and with all of them I have the same problem: the defined ports (e.g. 80 and 443) are not accessible, but the automatically assigned node ports are.
The following service definition is exported from the first service:
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "traefik",
    "namespace": "kube-system",
    "selfLink": "/api/v1/namespaces/kube-system/services/traefik",
    "uid": "70df3a55-422c-11e8-b7c0-b827eb28c626",
    "resourceVersion": "1531399",
    "creationTimestamp": "2018-04-17T10:45:27Z",
    "labels": {
      "app": "traefik",
      "chart": "traefik-1.28.1",
      "heritage": "Tiller",
      "release": "traefik"
    }
  },
  "spec": {
    "ports": [
      {
        "name": "http",
        "protocol": "TCP",
        "port": 80,
        "targetPort": "http",
        "nodePort": 31822
      },
      {
        "name": "https",
        "protocol": "TCP",
        "port": 443,
        "targetPort": "httpn",
        "nodePort": 32638
      }
    ],
    "selector": {
      "app": "traefik",
      "release": "traefik"
    },
    "clusterIP": "10.109.80.108",
    "type": "LoadBalancer",
    "sessionAffinity": "None",
    "externalTrafficPolicy": "Cluster"
  },
  "status": {
    "loadBalancer": {}
  }
}
Any idea how I can reach this service with http://node-ip-addr:80 and the other service with http://node-ip-addr:443?

The ports that you defined for your services (in this case 80 and 443) are only reachable from within the cluster. You can try to call your service from another pod (one running busybox, for example) with curl http://traefik.kube-system.svc.cluster.local or via the cluster IP (10.109.80.108) directly.
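As a quick check (a minimal sketch; the pod name is arbitrary and busybox ships wget rather than curl), you could run a throwaway pod and call the service by its cluster-internal DNS name:
# start a one-off pod, run the request, and clean the pod up afterwards
$ kubectl run tmp --rm -it --restart=Never --image=busybox -- wget -qO- http://traefik.kube-system.svc.cluster.local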
If you want to access your services from outside the cluster (which is your use case), you need to expose your service as one of the following:
NodePort
LoadBalancer
ExternalName
Your service is of type LoadBalancer, which also allocates node ports. That means every node of the cluster listens for requests on a specific port (in your case 31822 for http and 32638 for https) and delegates them to your service. This is why http://node-ip-addr:31822 should work for the service config you provided.
To adapt your configuration to your requirements you would have to set "nodePort": 80, which would reserve port 80 on every cluster node and delegate it to your service (this also requires the API server's service-node-port-range to include port 80). This is generally not the best idea. Rather keep the ports as currently defined and add a proxy server or a load balancer in front of your cluster which listens on port 80 and forwards to port 31822 on one of the nodes.
For more information on publishing services, please refer to the Kubernetes documentation.
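As an illustration of the front-proxy approach (a minimal sketch only; the upstream name and node IPs are placeholders, and you would list all of your nodes), an external nginx instance could listen on port 80 and forward to the node port:
# /etc/nginx/conf.d/traefik.conf on a host in front of the cluster
upstream traefik_nodeport {
    server 10.0.0.11:31822;   # node 1 (placeholder IP)
    server 10.0.0.12:31822;   # node 2 (placeholder IP)
}
server {
    listen 80;
    location / {
        proxy_pass http://traefik_nodeport;
    }
}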

Check the following working example.
Note:
The container listens at port 4000 which is specified as containerPort in the Deployment
The Service maps the container port 4000 (targetPort) to port 80
The Ingress is now pointing to servicePort 80
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: testui-deploy
spec:
  replicas: 1
  revisionHistoryLimit: 1
  selector:
    matchLabels:
      app: testui
  template:
    metadata:
      labels:
        app: testui
    spec:
      containers:
      - name: testui
        image: gcr.io/test2018/testui:latest
        ports:
        - containerPort: 4000
---
apiVersion: v1
kind: Service
metadata:
  name: testui-svc
  labels:
    app: testui-svc
spec:
  type: NodePort
  selector:
    app: testui
  ports:
  - protocol: TCP
    port: 80
    targetPort: 4000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ing
  annotations:
    kubernetes.io/ingress.global-static-ip-name: test-ip
spec:
  backend:
    serviceName: testui-svc
    servicePort: 80
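To see what this produces (a sketch; the manifest file name is arbitrary), apply the manifests and look up the node port that was assigned to the Service:
$ kubectl apply -f testui.yaml
$ kubectl get service testui-svc   # the PORT(S) column shows e.g. 80:3XXXX/TCP
$ kubectl get ingress test-ing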

Related

Azure Kubernetes Kong Ingress Timeout Issue

I have an API application in Azure Kubernetes Service. Kong Gateway is used for the API. My problem is that one endpoint may take more than one minute to respond, but after one minute it throws the error message below:
{
  "message": "The upstream server is timing out"
}
I use GitHub Actions for deployment. Below is my YAML for the Kong part:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: docuploadapi
  namespace: ocr
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod-kong
    kubernetes.io/ingress.class: kong
    konghq.com/protocols: https
    konghq.com/https-redirect-status-code: "301"
    konghq.com/plugins: upstream-timeout-example
    # konghq.com/strip-path: "true"
    nginx.org/location-snippets: |
      add_header X-Forwarded-Proto https;
spec:
  rules:
  - host: domain
    http:
      paths:
      - path: /
        pathType: ImplementationSpecific
        backend:
          service:
            name: docuploadapi
            port:
              number: 80
      - path: /api
        pathType: ImplementationSpecific
        backend:
          service:
            name: docuploadapi-kong
            port:
              number: 80
      - path: /api/Admin
        pathType: ImplementationSpecific
        backend:
          service:
            name: docuploadapi-kong-admin
            port:
              number: 80
  tls:
  - hosts:
    - domain
    secretName: docupload.dev-secret
---
#network policies
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: docupload-allowingress
  namespace: ocr
spec:
  podSelector:
    matchLabels:
      app: docuploadapi
  ingress:
  - ports:
    - port: 80
    from:
    - podSelector: {}
      # matchLabels:
      #   app.kubernetes.io/instance: nginx-ingress
    - namespaceSelector: {}
      # matchLabels:
      #   name: ingress
---
kind: KongPlugin
apiVersion: configuration.konghq.com/v1
metadata:
  name: upstream-timeout-example
config:
  connect_timeout: 4000
  send_timeout: 5000
  read_timeout: 5000
I read this document, but when I send a request it still gives the same error message.
How can I increase this timeout to 2 minutes?
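One commonly documented approach with the Kong Ingress Controller (a sketch only, not verified against this Kong version; the resource name here is made up) is to put the timeouts, which Kong expresses in milliseconds, into a KongIngress resource and attach it via the konghq.com/override annotation:
apiVersion: configuration.konghq.com/v1
kind: KongIngress
metadata:
  name: two-minute-timeouts   # hypothetical name
  namespace: ocr
proxy:
  # Kong timeouts are in milliseconds, so 2 minutes = 120000
  connect_timeout: 120000
  read_timeout: 120000
  write_timeout: 120000
Then reference it from the Ingress metadata with the annotation konghq.com/override: two-minute-timeouts.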

EKS ALB Ingress address is given, but loading address does not work

Hi, I've successfully set up an ALB controller and an ingress to one of my containers.
But when I route to my address in a browser, it gives me an error.
Is there something more that I need to do to connect to this? I was reading some AWS guides, and I think that I should be able to route to this address without doing anything in Route53?
Below are the values for my Helm chart:
datahub-frontend:
  enabled: true
  image:
    repository: linkedin/datahub-frontend-react
    tag: "v0.8.31"
  ingress:
    enabled: true
    annotations:
      kubernetes.io/ingress.class: alb
      alb.ingress.kubernetes.io/scheme: internet-facing
      alb.ingress.kubernetes.io/target-type: instance
      alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:ap-southeast-2:601628467906:certificate/4a862e82-d098-4e27-9eb7-c8221df9e0cd
      alb.ingress.kubernetes.io/inbound-cidrs: 0.0.0.0/0
      alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
      alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
    hosts:
      - host: datahub.hughnguyen.link
        redirectPaths:
          - path: /*
            name: ssl-redirect
            port: use-annotation
        paths:
          - /*
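One thing worth checking (a sketch; the exact record type depends on your setup): the ALB gets a generated DNS name, and your own hostname only works if a DNS record points at it, e.g. a Route53 alias or CNAME.
$ kubectl get ingress -A                 # the ADDRESS column shows the ALB's DNS name (k8s-...elb.amazonaws.com)
$ dig +short datahub.hughnguyen.link     # check whether the host currently resolves to that ALB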

Nginx-Ingress for TCP entrypoint not working

I am using NGINX as the Kubernetes Ingress controller. I was able to set it up by following this simple example.
Now I am trying to set up a TCP entrypoint for Logstash with the following config.
Logstash:
apiVersion: v1
kind: Secret
metadata:
  name: logstash-secret
  namespace: kube-logging
type: Opaque
data:
  tls.crt: "<base64 encoded>" #For logstash.test.domain.com
  tls.key: "<base64 encoded>" #For logstash.test.domain.com
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-config
  namespace: kube-logging
  labels:
    app: logstash
data:
  syslog.conf: |-
    input {
      tcp {
        port => 5050
        type => syslog
      }
    }
    filter {
      grok {
        match => {"message" => "%{SYSLOGLINE}"}
      }
    }
    output {
      elasticsearch {
        hosts => ["http://elasticsearch:9200"] #elasticsearch running in same namespace (kube-logging)
        index => "syslog-%{+YYYY.MM.dd}"
      }
      stdout { codec => rubydebug }
    }
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: logstash
  namespace: kube-logging
  labels:
    app: logstash
spec:
  replicas: 1
  selector:
    matchLabels:
      app: logstash
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: logstash
    spec:
      #serviceAccountName: logstash
      containers:
      - name: logstash
        image: docker.elastic.co/logstash/logstash:7.2.1
        imagePullPolicy: Always
        env:
        - name: ELASTICSEARCH_HOST
          value: elasticsearch
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTICSEARCH_USERNAME
          value: elastic
        - name: ELASTICSEARCH_PASSWORD
          value: changeme
        - name: ELASTIC_CLOUD_ID
          value:
        - name: ELASTIC_CLOUD_AUTH
          value:
        ports:
        - name: logstash
          containerPort: 5050
          protocol: TCP
        securityContext:
          runAsUser: 0
        volumeMounts:
        - name: config
          mountPath: /usr/share/logstash/pipeline/syslog.conf
          readOnly: true
          subPath: syslog.conf
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: logstash-config
---
kind: Service
apiVersion: v1
metadata:
  name: logstash
  namespace: kube-logging
  labels:
    app: logstash
spec:
  ports:
  - name: tcp-port
    protocol: TCP
    port: 5050
    targetPort: 5050
  selector:
    app: logstash
Nginx-Ingress:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress
  namespace: kube-ingress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      serviceAccountName: nginx-ingress
      containers:
      - image: nginx/nginx-ingress:1.5.7
        imagePullPolicy: Always
        name: nginx-ingress
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        - name: tcp5050
          containerPort: 5050
        securityContext:
          allowPrivilegeEscalation: true
          runAsUser: 101 #nginx
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        args:
        - -nginx-configmaps=$(POD_NAMESPACE)/nginx-config
        - -default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret
        - -v=3 # Enables extensive logging. Useful for troubleshooting.
        #- -report-ingress-status
        #- -external-service=nginx-ingress
        #- -enable-leader-election
        #- -enable-prometheus-metrics
        #- -enable-custom-resources
LoadBalancer:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-external
  namespace: kube-ingress
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  - name: https
    protocol: TCP
    port: 443
    targetPort: 443
  - name: tcp5050
    protocol: TCP
    port: 5050
    targetPort: 5050
  selector:
    app: nginx-ingress
Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: logstash-ingress
  namespace: kube-logging
spec:
  tls:
  - hosts:
    - logstash.test.domain.com
    secretName: logstash-secret #This has self-signed cert for logstash.test.domain.com
  rules:
  - host: logstash.test.domain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: logstash
          servicePort: 5050
With this config it shows the following:
NAME HOSTS ADDRESS PORTS AGE
logstash-ingress logstash.test.domain.com 80, 443 79m
Why is port 5050 not listed here?
I just want to expose the logstash service through a public endpoint. When I use
openssl s_client -connect logstash.kube-logging.svc.cluster.local:5050 within the cluster I get
$ openssl s_client -connect logstash.kube-logging.svc.cluster.local:5050
CONNECTED(00000005)
But from outside of the cluster openssl s_client -connect logstash.test.domain.com:5050 I get
$ openssl s_client -connect logstash.test.domain.com:5050
connect: Connection refused
connect:errno=61
and
$ openssl s_client -cert logstash_test_domain_com.crt -key logstash_test_domain_com.key -servername logstash.test.domain.com:5050
connect: Connection refused
connect:errno=61
What do I need to do to get this working?
It seems like you are a bit confused, so let's start by sorting out your services and ingress.
First, there are three main types of services in Kubernetes:
ClusterIP, which exposes your deployments internally in the cluster.
NodePort, which is the same as ClusterIP but also exposes your deployment on every node's external IP at a port in the range ~30000-32767.
LoadBalancer, which is the same as ClusterIP but also exposes your app on a specific external IP address assigned by the cloud provider's load balancer.
The NodePort behavior will make logstash accessible through every node's external IP at a random port in the 30000-32767 range; find the port by running kubectl get services | grep nginx-ingress and checking the last column. To get your nodes' external IP addresses, run kubectl get node -o wide.
The LoadBalancer service you created will make logstash accessible through an external IP address on port 5050. To find that IP, run kubectl get services | grep nginx-ingress-external.
Finally, you have also created an Ingress resource to reach logstash. For that you defined a host which, given the TLS section, will be accessible on port 443, and inbound traffic there will be redirected to logstash's ClusterIP service on port 5050.
So there you go: you have three ways to reach logstash. I would go for the LoadBalancer, given that it uses a specific port.
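Following that recommendation, a quick way to test the LoadBalancer path (a sketch; the external address is whatever your cloud provider assigned) is:
$ kubectl get service nginx-ingress-external -n kube-ingress   # note the EXTERNAL-IP / hostname
$ openssl s_client -connect <EXTERNAL-IP-OR-HOSTNAME>:5050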

How to start kubernetes service on NodePort outside service-node-port-range default range?

I've been trying to start kubernetes-dashboard (and eventually other services) on a NodePort outside the default port range, with little success.
here is my setup:
Cloud provider: Azure (Not azure container service)
OS: CentOS 7
here is what I have tried:
Update the host
$ yum update
Install kubeadm
$ cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
$ setenforce 0
$ yum install -y docker kubelet kubeadm kubectl kubernetes-cni
$ systemctl enable docker && systemctl start docker
$ systemctl enable kubelet && systemctl start kubelet
Start the cluster with kubeadm
$ kubeadm init
Allow running containers on the master node, because we have a single-node cluster
$ kubectl taint nodes --all dedicated-
Install a pod network
$ kubectl apply -f https://git.io/weave-kube
Our kubernetes-dashboard Deployment (~/kubernetes-dashboard.yaml):
# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Configuration to deploy release version of the Dashboard UI.
#
# Example usage: kubectl create -f <this_file>
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubernetes-dashboard
  template:
    metadata:
      labels:
        app: kubernetes-dashboard
      # Comment the following annotation if Dashboard must not be deployed on master
      annotations:
        scheduler.alpha.kubernetes.io/tolerations: |
          [
            {
              "key": "dedicated",
              "operator": "Equal",
              "value": "master",
              "effect": "NoSchedule"
            }
          ]
    spec:
      containers:
      - name: kubernetes-dashboard
        image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.5.1
        imagePullPolicy: Always
        ports:
        - containerPort: 9090
          protocol: TCP
        args:
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 8880
    targetPort: 9090
    nodePort: 8880
  selector:
    app: kubernetes-dashboard
Create our Deployment
$ kubectl create -f ~/kubernetes-dashboard.yaml
deployment "kubernetes-dashboard" created
The Service "kubernetes-dashboard" is invalid: spec.ports[0].nodePort: Invalid value: 8880: provided port is not in the valid range. The range of valid ports is 30000-32767
I found out that to change the range of valid ports I could set service-node-port-range option on kube-apiserver to allow a different port range,
so I tried this:
$ kubectl get po --namespace=kube-system
NAME READY STATUS RESTARTS AGE
dummy-2088944543-lr2zb 1/1 Running 0 31m
etcd-test2-highr 1/1 Running 0 31m
kube-apiserver-test2-highr 1/1 Running 0 31m
kube-controller-manager-test2-highr 1/1 Running 2 31m
kube-discovery-1769846148-wmbhb 1/1 Running 0 31m
kube-dns-2924299975-8vwjm 4/4 Running 0 31m
kube-proxy-0ls9c 1/1 Running 0 31m
kube-scheduler-test2-highr 1/1 Running 2 31m
kubernetes-dashboard-3203831700-qrvdn 1/1 Running 0 22s
weave-net-m9rxh 2/2 Running 0 31m
Add "--service-node-port-range=8880-8880" to kube-apiserver-test2-highr
$ kubectl edit po kube-apiserver-test2-highr --namespace=kube-system
{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "kube-apiserver",
    "namespace": "kube-system",
    "creationTimestamp": null,
    "labels": {
      "component": "kube-apiserver",
      "tier": "control-plane"
    }
  },
  "spec": {
    "volumes": [
      {
        "name": "k8s",
        "hostPath": {
          "path": "/etc/kubernetes"
        }
      },
      {
        "name": "certs",
        "hostPath": {
          "path": "/etc/ssl/certs"
        }
      },
      {
        "name": "pki",
        "hostPath": {
          "path": "/etc/pki"
        }
      }
    ],
    "containers": [
      {
        "name": "kube-apiserver",
        "image": "gcr.io/google_containers/kube-apiserver-amd64:v1.5.3",
        "command": [
          "kube-apiserver",
          "--insecure-bind-address=127.0.0.1",
          "--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota",
          "--service-cluster-ip-range=10.96.0.0/12",
          "--service-node-port-range=8880-8880",
          "--service-account-key-file=/etc/kubernetes/pki/apiserver-key.pem",
          "--client-ca-file=/etc/kubernetes/pki/ca.pem",
          "--tls-cert-file=/etc/kubernetes/pki/apiserver.pem",
          "--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem",
          "--token-auth-file=/etc/kubernetes/pki/tokens.csv",
          "--secure-port=6443",
          "--allow-privileged",
          "--advertise-address=100.112.226.5",
          "--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname",
          "--anonymous-auth=false",
          "--etcd-servers=http://127.0.0.1:2379"
        ],
        "resources": {
          "requests": {
            "cpu": "250m"
          }
        },
        "volumeMounts": [
          {
            "name": "k8s",
            "readOnly": true,
            "mountPath": "/etc/kubernetes/"
          },
          {
            "name": "certs",
            "mountPath": "/etc/ssl/certs"
          },
          {
            "name": "pki",
            "mountPath": "/etc/pki"
          }
        ],
        "livenessProbe": {
          "httpGet": {
            "path": "/healthz",
            "port": 8080,
            "host": "127.0.0.1"
          },
          "initialDelaySeconds": 15,
          "timeoutSeconds": 15,
          "failureThreshold": 8
        }
      }
    ],
    "hostNetwork": true
  },
  "status": {}
}
$ :wq
The following is the truncated response
# pods "kube-apiserver-test2-highr" was not valid:
# * spec: Forbidden: pod updates may not change fields other than `containers[*].image` or `spec.activeDeadlineSeconds`
So I tried a different approach, I edited the deployment file for kube-apiserver with the same change described above
and ran the following:
$ kubectl apply -f /etc/kubernetes/manifests/kube-apiserver.json --namespace=kube-system
And got this response:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
So now I'm stuck: how can I change the range of valid ports?
You are specifying --service-node-port-range=8880-8880 incorrectly: you set it to a single port only; set it to a range instead.
Second problem: you are setting the service to use 9090, and that is not in the range.
ports:
- port: 80
  targetPort: 9090
  nodePort: 9090
The API server has its own manifest too; try editing the port range in that manifest and delete the API server pod so it gets recreated with the new config.
The Service node ports range is set to infrequently-used ports for a reason. Why do you want to publish this on every node? Do you really want that?
An alternative is to expose it on a semi-random nodeport, then use a proxy pod on a known node or set of nodes to access it via hostport.
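To illustrate that alternative (a minimal sketch; the image, names and target port are assumptions and not part of the original setup), a proxy pod pinned to a known node could publish the dashboard on a host port:
apiVersion: v1
kind: Pod
metadata:
  name: dashboard-hostport-proxy   # hypothetical name
  namespace: kube-system
spec:
  nodeName: test2-highr            # pin to a known node
  containers:
  - name: proxy
    image: alpine/socat            # assumed image whose entrypoint is socat
    # forward the host's port 8880 to the dashboard Service (assuming it keeps port 8880 with an in-range nodePort)
    args: ["TCP-LISTEN:8880,fork", "TCP:kubernetes-dashboard.kube-system.svc.cluster.local:8880"]
    ports:
    - containerPort: 8880
      hostPort: 8880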
This issue:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
was caused by my port range excluding 8080, which kube-apiserver was serving on, so I could not send any updates with kubectl.
I fixed it by changing the port range to 8080-8881 and restarting the kubelet service like so:
$ service kubelet restart
Everything works as expected now.
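For reference, one way to make that change (a sketch based on the steps described above; the file path matches the JSON manifest this kubeadm version generated):
# on the master: widen the range in the kube-apiserver static pod manifest so it includes 8080
$ sed -i 's/--service-node-port-range=8880-8880/--service-node-port-range=8080-8881/' /etc/kubernetes/manifests/kube-apiserver.json
# the kubelet recreates the static pod; restart it to force the reload
$ service kubelet restart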

loadbalancer service won't redirect to desired pod

I'm playing around with kubernetes and I've set up my environment with 4 deployments:
hello: basic "hello world" service
auth: provides authentication and encryption
frontend: an nginx reverse proxy which represents a single-point-of-entry from the outside and routes to the accurate pods internally
nodehello: basic "hello world" service, written in nodejs (this is what I contributed)
For the hello, auth and nodehello deployments I've set up each one internal service.
For the frontend deployment I've set up a load-balancer service which would be exposed to the outside world. It uses a config map nginx-frontend-conf to redirect to the appropriate pods and has the following contents:
upstream hello {
    server hello.default.svc.cluster.local;
}
upstream auth {
    server auth.default.svc.cluster.local;
}
upstream nodehello {
    server nodehello.default.svc.cluster.local;
}
server {
    listen 443;
    ssl on;
    ssl_certificate /etc/tls/cert.pem;
    ssl_certificate_key /etc/tls/key.pem;
    location / {
        proxy_pass http://hello;
    }
    location /login {
        proxy_pass http://auth;
    }
    location /nodehello {
        proxy_pass http://nodehello;
    }
}
When calling the frontend endpoint using curl -k https://<frontend-external-ip> I get routed to an available hello pod which is the expected behavior.
When calling https://<frontend-external-ip>/nodehello, however, I don't get routed to a nodehello pod, but instead to a hello pod again.
I suspect the upstream nodehello configuration to be the failing part. I'm not sure how service discovery works here, i.e. how the dns name nodehello.default.svc.cluster.local would be exposed. I'd appreciate an explanation on how it works and what I did wrong.
YAML files used
deployments/hello.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: hello
        track: stable
    spec:
      containers:
      - name: hello
        image: "udacity/example-hello:1.0.0"
        ports:
        - name: http
          containerPort: 80
        - name: health
          containerPort: 81
        resources:
          limits:
            cpu: 0.2
            memory: "10Mi"
        livenessProbe:
          httpGet:
            path: /healthz
            port: 81
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 15
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 81
            scheme: HTTP
          initialDelaySeconds: 5
          timeoutSeconds: 1
deployments/auth.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: auth
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: auth
        track: stable
    spec:
      containers:
      - name: auth
        image: "udacity/example-auth:1.0.0"
        ports:
        - name: http
          containerPort: 80
        - name: health
          containerPort: 81
        resources:
          limits:
            cpu: 0.2
            memory: "10Mi"
        livenessProbe:
          httpGet:
            path: /healthz
            port: 81
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 15
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 81
            scheme: HTTP
          initialDelaySeconds: 5
          timeoutSeconds: 1
deployments/frontend.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: frontend
        track: stable
    spec:
      containers:
      - name: nginx
        image: "nginx:1.9.14"
        lifecycle:
          preStop:
            exec:
              command: ["/usr/sbin/nginx","-s","quit"]
        volumeMounts:
        - name: "nginx-frontend-conf"
          mountPath: "/etc/nginx/conf.d"
        - name: "tls-certs"
          mountPath: "/etc/tls"
      volumes:
      - name: "tls-certs"
        secret:
          secretName: "tls-certs"
      - name: "nginx-frontend-conf"
        configMap:
          name: "nginx-frontend-conf"
          items:
          - key: "frontend.conf"
            path: "frontend.conf"
deployments/nodehello.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nodehello
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nodehello
        track: stable
    spec:
      containers:
      - name: nodehello
        image: "thezebra/nodehello:0.0.2"
        ports:
        - name: http
          containerPort: 80
        resources:
          limits:
            cpu: 0.2
            memory: "10Mi"
services/hello.yaml
kind: Service
apiVersion: v1
metadata:
  name: "hello"
spec:
  selector:
    app: "hello"
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: 80
services/auth.yaml
kind: Service
apiVersion: v1
metadata:
  name: "auth"
spec:
  selector:
    app: "auth"
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: 80
services/frontend.yaml
kind: Service
apiVersion: v1
metadata:
  name: "frontend"
spec:
  selector:
    app: "frontend"
  ports:
  - protocol: "TCP"
    port: 443
    targetPort: 443
  type: LoadBalancer
services/nodehello.yaml
kind: Service
apiVersion: v1
metadata:
  name: "nodehello"
spec:
  selector:
    app: "nodehello"
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: 80
This works perfectly :-)
$ curl -s http://frontend/
{"message":"Hello"}
$ curl -s http://frontend/login
authorization failed
$ curl -s http://frontend/nodehello
Hello World!
I suspect you might have updated the nginx-frontend-conf when you added /nodehello but have not restarted nginx. Pods won't pick up changed ConfigMaps automatically. Try:
kubectl delete pod -l app=frontend
Until versioned ConfigMaps happen there isn't a nicer solution.
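A common workaround (a sketch, not part of the original answer) is to put a checksum of the config into the pod template as an annotation, so that any ConfigMap change also changes the template and triggers a rolling update:
# deployments/frontend.yaml (excerpt)
  template:
    metadata:
      labels:
        app: frontend
        track: stable
      annotations:
        # any change to frontend.conf changes this value and forces new pods
        config-checksum: "<sha256 of the rendered frontend.conf>"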
