Kubernetes - Ingress / Service / LB - node.js

I am new to K8s and this is my first time trying to get to grips with it. I am trying to set up a basic Nodejs Express API using this deployment.yml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - image: registry.gitlab.com/<project>/<app>:<TAG>
        imagePullPolicy: Always
        name: api
        env:
        - name: PORT
          value: "8080"
        ports:
        - containerPort: 8080
          hostPort: 80
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 30
          timeoutSeconds: 1
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 30
          timeoutSeconds: 1
      imagePullSecrets:
      - name: registry.gitlab.com
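For context, the Express app itself just reads the PORT environment variable and answers the probe path; roughly along these lines (a trimmed-down sketch, not the actual app code):

// server.js - minimal sketch; the real handlers are omitted here.
const express = require('express');
const app = express();

// The deployment injects PORT=8080; fall back to 8080 if it is missing.
const port = process.env.PORT || 8080;

// Target of the livenessProbe/readinessProbe defined above.
app.get('/healthz', (req, res) => res.status(200).send('ok'));

app.listen(port, () => console.log(`API listening on ${port}`));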
The deployment is done via GitLab CI and is working, and I have set up a service to expose it:
apiVersion: v1
kind: Service
metadata:
  name: api-svc
  labels:
    app: api-svc
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    app: api
  type: LoadBalancer
But I have been looking into Ingress to have a single point of entry for possibly multiple services. I have been reading through the Kubernetes guides and this Kubernetes Ingress Example, and this is the ingress.yml I created:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
spec:
  backend:
    serviceName: api-svc
    servicePort: 80
But this did not work: when I visited the external IP address that was generated for the ingress, I just got 502 error pages.
Could anyone point me in the right direction? What am I doing wrong, or what am I missing? I see in the example linked above that there is an nginx-rc.yml, which I deployed exactly as in the example and it was created, but I still got nothing from the endpoint. The API was accessible from the Service's external IP, though.
Many Thanks

I have looked into it again and think I figured it out.
In order for Ingress to work on GCE you need to define your backend service as a NodePort, not as ClusterIP or LoadBalancer.
You also need to make sure the HTTP health check to / works (you'll see the Google L7 load balancer hitting your service quite a lot on that URL), and then it's available.
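In Express terms that boils down to something like this (a sketch with placeholder handlers, assuming the app from the question): the root path has to answer 200 for the GCE L7 health check, alongside whatever the kubelet probes use.

// Sketch only (placeholder handlers): GET / must return 200 quickly, because the
// GCE L7 load balancer health-checks that path; /healthz stays for the kubelet probes.
const express = require('express');
const app = express();

app.get('/', (req, res) => res.status(200).send('ok'));        // GCE ingress health check
app.get('/healthz', (req, res) => res.status(200).send('ok')); // liveness/readiness probes

app.listen(process.env.PORT || 8080);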

Thought I would post my working deployment/service/ingress.
After much effort, here is what I used to get it working:
Deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: backend-api-v2
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: backend-api-v2
    spec:
      containers:
      - image: registry.gitlab.com/<project>/<app>:<TAG>
        imagePullPolicy: Always
        name: backend-api-v2
        env:
        - name: PORT
          value: "8080"
        ports:
        - containerPort: 8080
        livenessProbe:
          httpGet:
            # Path to probe; should be cheap, but representative of typical behavior
            path: /healthz
            port: 8080
          initialDelaySeconds: 30
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 30
          timeoutSeconds: 5
      imagePullSecrets:
      - name: registry.gitlab.com
Service
apiVersion: v1
kind: Service
metadata:
  name: api-svc-v2
  labels:
    app: api-svc-v2
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 31810
    protocol: TCP
    name: http
  selector:
    app: backend-api-v2
Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-ingress
spec:
  rules:
  - host: api.foo.com
    http:
      paths:
      - path: /v1/*
        backend:
          serviceName: api-svc
          servicePort: 80
      - path: /v2/*
        backend:
          serviceName: api-svc-v2
          servicePort: 80
The important bit to notice, as @Tigraine pointed out, is that the service uses type: NodePort and not LoadBalancer. I have also defined a nodePort, but I believe one will be assigned automatically if you leave it out.
Any route that doesn't match the rules falls through to the default-http-backend, a default container that GKE runs in the kube-system namespace. So if I visit http://api.foo.com/bob I get the default response of default backend - 404.
Hope this helps

Looks like you're exposing your service on port 80 with targetPort 80, but your container listens on 8080, so any request to the service is going to fail.
Also, have a look at the sample ingress resource (https://github.com/nginxinc/kubernetes-ingress/blob/master/examples/complete-example/cafe-ingress.yaml): you also need to define which hosts/paths route to which service when the ingress controller is hit (i.e. example.foo.com --> api-svc).

Related

Azure Kubernetes Ingress : 502 Bad Gateway while using Path inside ingress configuration

I am facing a 502 Bad Gateway issue with my Application Gateway.
I am using Azure Kubernetes Service to deploy my cluster, which is connected to the Application Gateway ingress.
Configuration Files:
kube-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myApp
  namespace: en02
  labels:
    app: myApp
spec:
  selector:
    matchLabels:
      app: myApp
  replicas: 1
  template:
    metadata:
      labels:
        app: myApp
    spec:
      containers:
      - name: myApp
        image: somecr.azurecr.io/myApp:1.0.0.30
        resources:
          limits:
            memory: "64Mi"
            cpu: "100m"
        ports:
        - containerPort: 5100
        env:
        - name: ASPNETCORE_HOSTINGSTARTUPASSEMBLIES
          value: "Microsoft.AspNetCore.ApplicationInsights.HostingStartup"
        - name: "ApplicationInsights__ConnectionString"
          value: "myKey"
---
apiVersion: v1
kind: Service
metadata:
  namespace: en02
  name: myApp
spec:
  selector:
    app: myApp
  ports:
  - port: 30153
    targetPort: 5100
    protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: en02
  name: etopia
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/health-probe-path: "/api/home"
spec:
  rules:
  - http:
      paths:
      - path: /myApp/
        backend:
          service:
            name: myApp
            port:
              number: 30153
        pathType: Exact
Result of
kubectl describe ingress -n en02
Name:             ingress
Labels:           <none>
Namespace:        en02
Address:          public-ip
Ingress Class:    <none>
Default backend:  <default>
Rules:
  Host        Path      Backends
  ----        ----      --------
  *
              /myApp/   myApp:30153 (10.0.0.106:5100)
Annotations:      appgw.ingress.kubernetes.io/health-probe-path: /api/home
                  kubernetes.io/ingress.class: azure/application-gateway
Events:           <none>
I am getting the expected results from 10.0.0.106:5100/api/home and the Application Gateway health status is 200.
No matter what I do, I always get a Bad Gateway error. I was able to access a sample app on port 80 (where the ingress path was /), but if I specify anything in the ingress path (/cashify/) it always gives me a bad gateway.
I tried adding a readinessProbe to the container but it doesn't help (however, I am already getting 200 under the Application Gateway health status).
Please help.
Please check whether the below works around the issue.
Try updating the Ingress YAML to use a wildcard path specification so that APIs with different paths can be reached.
ingress.yml
apiVersion: extensions/v1beta1
kind: Ingress
....
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
  - host: xxx
    http:
      paths:
      - path: /api/*   # wildcard path
        backend:
          serviceName: apiservice
          servicePort: 80
      - backend:
          .....
          servicePort: 80
Note from the MS docs: If you want Application Gateway to probe on a different protocol, host name, or path, and to recognize a different status code as Healthy, configure a custom probe and associate it with the HTTP settings.
As you said you have defined a readinessProbe, please check whether the path of those probes is correct.
Check the same for both 1. livenessProbe and 2. readinessProbe.
Also note that readinessProbe and livenessProbe are supported when configured with httpGet.
References:
bad request - path based routing · kubernetes-ingress · GitHub
application-gateway-troubleshooting-502

NodeJS Service is not reachable on Kubernetes (GCP)

I have hit a real roadblock here and have not found any solution so far. Ultimately, my deployed NodeJS + Express server is not reachable when deployed to a Kubernetes cluster on GCP. I followed the guide and example, and nothing seems to work.
The cluster, node and service are running just fine and don't have any issues. Furthermore, it works just fine locally when running it with Docker.
Here's my Node YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: 2019-08-06T04:13:29Z
  generation: 1
  labels:
    run: nodejsapp
  name: nodejsapp
  namespace: default
  resourceVersion: "23861"
  selfLink: /apis/apps/v1/namespaces/default/deployments/nodejsapp
  uid: 8b6b7ac5-b800-11e9-816e-42010a9600de
spec:
  progressDeadlineSeconds: 2147483647
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      run: nodejsapp
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: nodejsapp
    spec:
      containers:
      - image: gcr.io/${project}/nodejsapp:latest
        imagePullPolicy: Always
        name: nodejsapp
        ports:
        - containerPort: 5000
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: 2019-08-06T04:13:29Z
    lastUpdateTime: 2019-08-06T04:13:29Z
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
Service YAML:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2019-08-06T04:13:34Z
  labels:
    run: nodejsapp
  name: nodejsapp
  namespace: default
  resourceVersion: "25444"
  selfLink: /api/v1/namespaces/default/services/nodejsapp
  uid: 8ef81536-b800-11e9-816e-42010a9600de
spec:
  clusterIP: XXX.XXX.XXX.XXX
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 32393
    port: 80
    protocol: TCP
    targetPort: 5000
  selector:
    run: nodejsapp
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: XXX.XXX.XXX.XXX
The NodeJS server is configured to run on port 5000. I also tried it without any port-forwarding, but it made no difference.
Any help is much appreciated.
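For reference, the server itself is a plain Express app listening on port 5000; a rough sketch (the real routes are omitted, so treat the handler as an assumption), with the usual note that inside a container it has to bind to all interfaces rather than 127.0.0.1:

// Sketch of the server setup (real routes omitted). Binding to 0.0.0.0 matters:
// a server bound only to 127.0.0.1 is unreachable through the Service/NodePort.
const express = require('express');
const app = express();

app.get('/', (req, res) => res.send('hello'));

app.listen(5000, '0.0.0.0', () => console.log('listening on 0.0.0.0:5000'));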
UPDATE:
I used this guide and followed the instructions: https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app
UPDATE 2:
FINALLY - figured it out. I'm not sure why this is not mentioned anywhere but you have to create an Ingress that routes the traffic to the pod accordingly.
Here's the example config:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/backends: '{"k8s-be-32064--abfe1f07378017e9":"HEALTHY"}'
    ingress.kubernetes.io/forwarding-rule: k8s-fw-default-nodejsapp--abfe1f07378017e9
    ingress.kubernetes.io/target-proxy: k8s-tp-default-nodejsapp--abfe1f07378017e9
    ingress.kubernetes.io/url-map: k8s-um-default-nodejsapp--abfe1f07378017e9
  creationTimestamp: 2019-08-06T18:59:15Z
  generation: 1
  name: nodejsapp
  namespace: default
  resourceVersion: "171168"
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/versapay-api
  uid: 491cd248-b87c-11e9-816e-42010a9600de
spec:
  backend:
    serviceName: nodejsapp
    servicePort: 80
status:
  loadBalancer:
    ingress:
    - ip: XXX.XXX.XXX
Adding this as an answer since I need to include an image (but it's not necessarily an answer):
As shown in the image, a green tick should be visible next to your backend service.
Probable solution:
In your NodeJS app, please add the following base URLs, i.e.:
When the application is started locally, http://localhost:5000/ should return a 200 status code (ideally with Server is running... or some similar message).
Also, if path-based routing is enabled, another base URL is required:
http://localhost:5000/<nodeJsAppUrl>/ should also return a 200 status code.
The above URLs are required for the health checks of both the LoadBalancer and the backend service; redeploy the service afterwards.
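As a rough Express sketch of what that means (the /myApp/ prefix below is only an example taken from the ingress above; substitute your app's real base path):

// Sketch only - route names are placeholders. Both the root path and the routed
// base path should answer 200 so the gateway/backend health checks pass.
const express = require('express');
const app = express();

app.get('/', (req, res) => res.status(200).send('Server is running...'));

// If path-based routing is enabled, the prefixed base URL must answer as well.
app.get('/myApp/', (req, res) => res.status(200).send('Server is running...'));

app.listen(process.env.PORT || 5000);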
Please let me know if the above solution doesn't fix the said issue.
You need an intermediate service to internally expose your deployment.
Right now, you have a set of pods grouped in a deployment and a load balancer exposed in your cluster, but you need to link them with an additional service.
You can try using a NodePort like the following:
apiVersion: v1
kind: Service
metadata:
  name: nodejsapp-nodeport
spec:
  selector:
    run: nodejsapp
  ports:
  - name: default
    protocol: TCP
    port: 32393
    targetPort: 5000
  type: NodePort
This NodePort service sits between your load balancer and the pods in your deployment, targeting them on port 5000 and exposing port 32393 (as per the settings in the original question; you can change it).
From here, you can redeploy your load balancer to target this NodePort. This way, you can reach your NodeJS app via port 80 from your load balancer's public address.
apiVersion: v1
kind: Service
metadata:
  name: nodejs-lb
spec:
  selector:
    run: nodejsapp
  ports:
  - name: default
    protocol: TCP
    port: 80
    targetPort: 32393
  type: LoadBalancer
The whole scenario would look like this:
publicly exposed address --> LoadBalancer --> NodePort --> Deployment --> Pods

Azure Kubernetes Services 502 Bad Gateway

I have an AKS cluster up and running, and under heavy user load I get some 502 Bad Gateway responses. This only happens when the request load is high. I used Azure DevOps load testing to achieve this behavior. I believe it has something to do with the load balancer timeouts, but I am not too sure how to go about debugging this. Perhaps I should be checking logs somewhere? Searching around Google tells me I should be checking the nginx logs, but I'm not sure where to find those. Sorry, I am a newbie in the Kubernetes world.
These are all the pods in the cluster; the apserver-api-... pods are my actual apps that serve the requests:
The YAML file used to generate this:
# DS for AP
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: apserver-api
spec:
  updateStrategy:
    type: RollingUpdate
  selector:
  template:
    metadata:
      labels:
        app: apserver-api
    spec:
      containers:
      - name: apserver-api
        image: IMAGE
        env:
        - name: APP_SVC
          value: apserver-api
        ports:
        - containerPort: 80
        imagePullPolicy: IfNotPresent
# Service for AP
kind: Service
apiVersion: v1
metadata:
  labels:
    app: apserver-api
  name: apserver-api
spec:
  type: ClusterIP
  ports:
  - name: http
    port: 80
  - name: https
    port: 443
    targetPort: 80
  selector:
    app: apserver-api
  type: "LoadBalancer"
And a screenshot of the load test:

kubernetes service discovery with specific endpoint (DNS)

I am using an AKS cluster on Azure. I am trying to discover the service using DNS (http://my-api.default.svc.cluster.local:3000/) but it's not working ("This site can't be reached"). With the service IP endpoint everything works fine.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api
  labels:
    app: my-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
      - name: my-api
        image: test.azurecr.io/my-api:latest
        ports:
        - containerPort: 3000
      imagePullSecrets:
      - name: testsecret
---
apiVersion: v1
kind: Service
metadata:
  name: my-api
spec:
  selector:
    app: my-api
  ports:
  - protocol: TCP
    port: 3000
    targetPort: 3000
kubectl describe services kube-dns --namespace kube-system
Name:              kube-dns
Namespace:         kube-system
Labels:            addonmanager.kubernetes.io/mode=Reconcile
                   k8s-app=kube-dns
                   kubernetes.io/cluster-service=true
                   kubernetes.io/name=KubeDNS
Annotations:       kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kube-dns","kubernet...
Selector:          k8s-app=kube-dns
Type:              ClusterIP
IP:                10.10.110.110
Port:              dns  53/UDP
TargetPort:        53/UDP
Endpoints:         10.10.100.54:53,10.10.100.64:53
Port:              dns-tcp  53/TCP
TargetPort:        53/TCP
Endpoints:         10.10.100.54:53,10.10.100.64:53
Session Affinity:  None
Events:            <none>
kubectl describe svc my-api
Name:              my-api
Namespace:         default
Labels:            <none>
Annotations:       kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"my-api","namespace":"default"},"spec":{"ports":[{"port":3000,"protocol":...
Selector:          app=my-api
Type:              ClusterIP
IP:                10.10.110.104
Port:              <unset>  3000/TCP
TargetPort:        3000/TCP
Endpoints:         10.10.100.42:3000
Session Affinity:  None
Events:            <none>
From the second pod:
kubectl exec -it second-pod /bin/bash
curl my-api.default.svc.cluster.local:3000
Response: {"value":"Hello world2"}
The website running in the second pod uses the same endpoint, but it is not connecting to the service.
Fixing the indentation of your yaml file, I was able to launch the deployment and service successfully. Also the DNS resolution worked fine.
Differences:
Fixed indentation
Used the test1 namespace instead of default
Used containerPort 80 instead of 3000
Used my image
Deployment:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  labels:
    app: my-api
  name: my-api
  namespace: test1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
      - image: leodotcloud/swiss-army-knife
        name: my-api
        ports:
        - containerPort: 80
          protocol: TCP
Service:
apiVersion: v1
kind: Service
metadata:
  name: my-api
  namespace: test1
spec:
  ports:
  - port: 3000
    protocol: TCP
    targetPort: 80
  selector:
    app: my-api
  type: ClusterIP
Debugging steps:
Install tcpdump inside both of the kube-dns containers and start capturing DNS traffic (with filters for the second pod's IP).
From inside the second pod, run a curl or dig command using the FQDN (or use the Node sketch after this list).
Check whether the DNS query packets are reaching the kube-dns containers.
If not, check for networking issues.
If the DNS resolution is working, start tcpdump inside your application container and check whether the curl packets are reaching the container.
Check the source and destination IP addresses of the packets.
Check the iptables rules on the hosts.
Check the sysctl settings.
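If it's easier to test from Node than with dig or nslookup, a quick resolution check might look like this (a sketch using Node's built-in dns module and the service FQDN from the question):

// Quick DNS check from inside a pod, as an alternative to dig/nslookup.
const dns = require('dns');

dns.lookup('my-api.default.svc.cluster.local', (err, address) => {
  if (err) return console.error('lookup failed:', err.code);
  console.log('resolved to', address);
});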
If you use a Deployment to deploy your application onto a cluster where it will be consumed via a Service, you should have no need at all to manually set Endpoints. Just rely on Kubernetes and define a normal selector in your Service object.
Other than that, when it makes sense (an external service consumed from within the cluster), you need to make sure your Endpoints port definition fully matches the one on the Service (including protocol and potentially name). Incomplete matching is the most common reason for endpoints not being visible as part of the service.
From the above discussion, what I understood is that you want to expose a service, but without using the IP address.
A Service can be exposed in many ways; you should look at the Service type LoadBalancer.
Try modifying your service as follows:
apiVersion: v1
kind: Service
metadata:
  name: my-api
spec:
  type: LoadBalancer
  selector:
    app: my-api
  ports:
  - protocol: TCP
    port: 3000
    targetPort: 3000
This will create a load balancer and map your service to it.
Later you can add this load balancer to the DNS mapping service provided by Azure to give it the domain name you like, e.g. http://my-api.example.com:3000.
I would also like to add that if you define your ports as follows:
ports:
- name: http
  port: 80
  targetPort: 3000
this will redirect traffic coming in on port 80 to 3000, and your service calls will look much cleaner, e.g. http://my-api.example.com
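For example, an in-cluster caller would then look roughly like this (a sketch using Node's built-in http module; the path / is just an example):

// Sketch of an in-cluster call once the Service maps port 80 to targetPort 3000:
// no port number is needed in the URL any more.
const http = require('http');

http.get('http://my-api.default.svc.cluster.local/', (res) => {
  let body = '';
  res.on('data', (chunk) => (body += chunk));
  res.on('end', () => console.log(res.statusCode, body));
}).on('error', (err) => console.error('request failed:', err.message));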

Pods do not resolve the domain names of a service through ingress

I have a problem: my pods in the minikube cluster are not able to see the service through its domain name.
To run minikube I use the following commands (running on Windows 10):
minikube start --vm-driver hyperv;
minikube addons enable kube-dns;
minikube addons enable ingress;
This is my deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: hello-world
  name: hello-world
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      run: hello-world
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: hello-world
    spec:
      containers:
      - image: karthequian/helloworld:latest
        imagePullPolicy: Always
        name: hello-world
        ports:
        - containerPort: 80
          protocol: TCP
        resources: {}
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
this is the service.yaml:
apiVersion: v1
kind: Service
metadata:
  labels:
    run: hello-world
  name: hello-world
  namespace: default
  selfLink: /api/v1/namespaces/default/services/hello-world
spec:
  ports:
  - nodePort: 31595
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: hello-world
  sessionAffinity: None
  type: ExternalName
  externalName: minikube.local.com
status:
  loadBalancer: {}
this is my ingress.yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: minikube-local-ingress
spec:
  rules:
  - host: minikube.local.com
    http:
      paths:
      - path: /
        backend:
          serviceName: hello-world
          servicePort: 80
So, if I go inside the hello-world pod and, from /bin/bash, run curl minikube.local.com or nslookup minikube.local.com, the name does not resolve.
So how can I make sure that the pods can resolve the DNS name of the service?
I know I can specify hostAlias in the deployment definition, but is there an automatic way that will keep the Kubernetes DNS updated?
So, you want to expose your app on Minikube? I've just tried it using the default ClusterIP service type (essentially, removing the ExternalName stuff you had), and with the YAML below I can see your service on https://192.168.99.100, where the Ingress controller lives.
The service now looks like this:
apiVersion: v1
kind: Service
metadata:
  labels:
    run: hello-world
  name: hello-world
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    run: hello-world
And the ingress is:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: minikube-local-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host:
    http:
      paths:
      - path: /
        backend:
          serviceName: hello-world
          servicePort: 80
Note: Within the cluster your service is now available via hello-world.default (that's the DNS name assigned by Kubernetes within the cluster) and from the outside you'd need to map, say hello-world.local to 192.168.99.100 in your /etc/hosts file on your host machine.
Alternatively, if you change the Ingress resource to - host: hello-world.local then you can (from the host) reach your service using this FQDN like so: curl -H "Host: hello-world.local" 192.168.99.100.
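The same check from Node, if that's more convenient (a sketch using the built-in http module; 192.168.99.100 is the Minikube IP mentioned above):

// Hit the Minikube ingress IP and set the Host header so the rule for
// hello-world.local matches - the Node equivalent of the curl above.
const http = require('http');

http.get(
  { host: '192.168.99.100', port: 80, path: '/', headers: { Host: 'hello-world.local' } },
  (res) => {
    let body = '';
    res.on('data', (chunk) => (body += chunk));
    res.on('end', () => console.log(res.statusCode, body));
  }
).on('error', (err) => console.error('request failed:', err.message));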
