Kubernetes - SSL for a Dockerized Node.js app

I am new to K8s and this is my first time trying to get to grips with it. I am trying to set up a basic Node.js Express API using this deployment.yml:
kind: Service
apiVersion: v1
metadata:
  name: ${GCP_PROJECT_NAME}
spec:
  selector:
    app: ${GCP_PROJECT_NAME}
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
  type: LoadBalancer
  loadBalancerIP: ${STATIC_IP_ADDRESS}
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: ${GCP_PROJECT_NAME}
  labels:
    app: ${GCP_PROJECT_NAME}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ${GCP_PROJECT_NAME}
  template:
    metadata:
      labels:
        app: ${GCP_PROJECT_NAME}
    spec:
      containers:
        - name: ${GCP_PROJECT_NAME}
          image: gcr.io/${GCP_PROJECT_ID}/${GCP_PROJECT_NAME}:${CIRCLE_SHA1}
          ports:
            - name: http
              containerPort: 3000
              protocol: TCP
          env:
            - name: MONGO_URL_PROD
              value: $MONGO_URL_PROD
Everything works great with this setup, and it deploys to Kubernetes. When I hit my endpoint, i.e. http://123.345.333.123, there is, as expected, no SSL.
I generated my SSL certificates and tried to follow this tutorial [https://vorozhko.net/kubernetes-sidecar-pattern-nginx-ssl-proxy-for-nodejs], but I wasn't able to get it working. Could anyone point me in the right direction? What am I doing wrong, or what am I missing?

You can use the NGINX ingress controller to handle all of your SSL setup. The following is a step-by-step guide:
https://dgkanatsios.com/2017/07/07/using-ssl-for-a-service-hosted-on-a-kubernetes-cluster/
Hope this helps.
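For orientation, here is a minimal sketch of the TLS pieces the guide walks through (the secret name, hostname, and service name below are placeholders, and the Ingress apiVersion depends on your cluster version):

# Create a TLS secret from your generated certificate and key
$ kubectl create secret tls my-tls-secret --cert=tls.crt --key=tls.key

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
spec:
  tls:
    - hosts:
        - example.com
      secretName: my-tls-secret   # TLS terminates at the ingress controller
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app      # your existing Service name
                port:
                  number: 80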

This approach didn't work for me. The Ingress was not able to get the cluster IP; it shows <none>.

Related

I am trying to deploy my Docker image from ACR to AKS. The pods are created properly, but I get ERR_CONNECTION_TIMED_OUT through the external IP

The same deployment and service YAML files work properly when I use a standard image from Docker like nginx and set its containerPort to nginx's default port, i.e. 80, but when I change its container port to 8080 I get the same issue again.
My deployment.yaml file -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-test-deployment
  labels:
    app: my-test-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-test-app
  template:
    metadata:
      labels:
        app: my-test-app
    spec:
      containers:
        - name: my-test-container
          image: javapoccr.azurecr.io/sushant-saurav/my-nest-app-with-docker
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
      imagePullSecrets:
        - name: acr-details
My service.yaml -
apiVersion: v1
kind: Service
metadata:
  name: my-test-service
  labels:
    app: my-test-app
spec:
  selector:
    app: my-test-app
  type: LoadBalancer
  ports:
    - port: 80
      protocol: TCP
      targetPort: 8080
There are two quick things that I would check/verify:
Is the test app configured to listen on 8080? The containerPort/targetPort should match the port the app actually listens on.
Ensure that you are running the most recent image. Without a tag you are implicitly using :latest, and with imagePullPolicy: IfNotPresent the node will not pull a newer image if it already has one cached. I'd recommend changing the imagePullPolicy to Always (see the sketch below).
-Dave
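To illustrate both points, a hedged sketch of the container section (the v2 tag is a placeholder for whatever tag your build pipeline produces):

containers:
  - name: my-test-container
    # An explicit tag instead of the implicit :latest makes image updates visible
    image: javapoccr.azurecr.io/sushant-saurav/my-nest-app-with-docker:v2
    # Always pull, so a stale cached image on the node is never reused
    imagePullPolicy: Always
    ports:
      # Must match the port the app is actually configured to listen on
      - containerPort: 8080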

How to deploy .NET Core web and worker projects to Kubernetes in a single deployment?

I am relatively new to Docker and Kubernetes technologies. My requirement is to deploy one web project and one worker (.NET background service) project in a single deployment.
This is what my deployment.yml file looks like:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: worker
spec:
  progressDeadlineSeconds: 3600
  replicas: 1
  selector:
    matchLabels:
      app: worker
  template:
    metadata:
      labels:
        app: worker
    spec:
      containers:
        - name: worker
          image: xxxxx.azurecr.io/worker:#{Build.BuildId}#
          #image: xxxxx.azurecr.io/web
          imagePullPolicy: Always
          #ports:
          #- containerPort: 80
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  progressDeadlineSeconds: 3600
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: xxxxx.azurecr.io/web:#{Build.BuildId}#
          #image: xxxxx.azurecr.io/web
          imagePullPolicy: Always
          ports:
            - containerPort: 80
This is what my service.yml file looks like:
apiVersion: v1
kind: Service
metadata:
  name: worker
spec:
  type: LoadBalancer
  ports:
    - port: 80
  selector:
    app: worker
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  ports:
    - port: 80
  selector:
    app: web
What I have found is that if I keep both in the service.yml file, then only one is deployed to Kubernetes, whereas if I comment one out and apply them one by one, then both deploy.
Is there any rule that we can't have both in a single file? Any reason why it doesn't work together but works individually?
One more question: is there any way to look into the worker service pod, something like remoting into it to see what exactly is going on there? Even if it's a console application, is there any way to read what it prints to the console after deployment?
This issue was resolved in the comments section and I decided to provide a Community Wiki answer just for better visibility to other community members.
It is possible to group multiple Kubernetes resources in the same file, but it is important to separate them using three dashes (“---”).
It's also worth mentioning that resources will be created in the order they appear in the file.
For more information, see the Organizing resource configurations documentation.
I've created an example to demonstrate how we can create a simple app-1 application (Deployment + Service) using a single manifest file:
$ cat app-1.yml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: app-1
  name: app-1
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: app-1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: app-1
  name: app-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-1
  template:
    metadata:
      labels:
        app: app-1
    spec:
      containers:
        - image: nginx
          name: nginx
NOTE: Resources are created in the order they appear in the file:
$ kubectl apply -f app-1.yml
service/app-1 created
deployment.apps/app-1 created
$ kubectl get deploy,svc
NAME                    READY   UP-TO-DATE
deployment.apps/app-1   1/1     1

NAME            TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)
service/app-1   ClusterIP   10.8.14.179   <none>        80/TCP
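Regarding the second question (looking inside the worker pod): although this was not part of the comment discussion, kubectl can stream a container's console output or open a shell inside it. A sketch, where <worker-pod-name> is a placeholder for a real pod name from kubectl get pods:

# Follow the worker's stdout/stderr (works for console applications too)
$ kubectl logs -f deployment/worker
# Open an interactive shell inside the pod, if the image ships one
$ kubectl exec -it <worker-pod-name> -- sh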

ERR_NAME_NOT_RESOLVED: Angular pod not communicating with Python backend in Kubernetes

I have deployed an Angular frontend and a Python backend in Kubernetes via microk8s as separate pods, and they are running. I have set the backend URL to 'http://backend-service.default.svc.cluster.local:30007' in my Angular file in order to link the frontend with the backend, but this raises ERR_NAME_NOT_RESOLVED. Can someone help me understand the issue?
Also, I have a config file which specifies the IPs, ports, and other configuration for my backend. Do I need to make any changes (the database host? the Flask host? the ports?) to that file before deploying it to Kubernetes?
Shown below are my deployment and service files for Angular and the backend.
apiVersion: v1
kind: Service
metadata:
  name: angular-service
spec:
  type: NodePort
  selector:
    app: angular
  ports:
    - protocol: TCP
      nodePort: 30042
      targetPort: 4200
      port: 4200
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: angular-deployment
  labels:
    name: angular
spec:
  replicas: 1
  selector:
    matchLabels:
      name: angular
  template:
    metadata:
      labels:
        name: angular
    spec:
      containers:
        - name: angular
          image: angular:local
          ports:
            - containerPort: 4200
apiVersion: v1
kind: Service
metadata:
  name: backend-service
spec:
  type: ClusterIP
  selector:
    name: backend
  ports:
    - protocol: TCP
      targetPort: 7000
      port: 7000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-deployment
  labels:
    name: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      name: backend
  template:
    metadata:
      labels:
        name: backend
    spec:
      containers:
        - name: backend
          image: flask:local
          ports:
            - containerPort: 7000
Is your cluster in a healthy state? DNS is resolved by the coredns objects in the kube-system namespace.
Classically, your Angular app runs in the browser, so any API URL it calls must be exposed and public; a cluster-internal DNS name will not resolve from the browser. That does not seem to be your setup, and I have serious doubts about it. Could you describe your app architecture?
Moreover, if you expose your service through a NodePort, you must not use that port for internal access, because you never know which node you will hit.
When a service is exposed, apps inside the cluster need to use the port attribute (not the nodePort) to reach the pods behind it; see the sketch below.
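For illustration, a sketch of an in-cluster test against the backend-service above (ClusterIP, port 7000). Note that this URL only resolves from inside the cluster, e.g. from another pod, not from a browser:

# Run a throwaway pod and curl the backend through its service port
$ kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
    curl -s http://backend-service.default.svc.cluster.local:7000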

Node.js service is not reachable on Kubernetes (GCP)

I have hit a true roadblock here and have not found any solution so far. Ultimately, my deployed Node.js + Express server is not reachable when deployed to a Kubernetes cluster on GCP. I followed the guide & example; nothing seems to work.
The cluster, node, and service are running just fine and don't have any issues. Furthermore, it all works just fine locally when running it with Docker.
Here's my Node YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: 2019-08-06T04:13:29Z
  generation: 1
  labels:
    run: nodejsapp
  name: nodejsapp
  namespace: default
  resourceVersion: "23861"
  selfLink: /apis/apps/v1/namespaces/default/deployments/nodejsapp
  uid: 8b6b7ac5-b800-11e9-816e-42010a9600de
spec:
  progressDeadlineSeconds: 2147483647
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      run: nodejsapp
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: nodejsapp
    spec:
      containers:
      - image: gcr.io/${project}/nodejsapp:latest
        imagePullPolicy: Always
        name: nodejsapp
        ports:
        - containerPort: 5000
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: 2019-08-06T04:13:29Z
    lastUpdateTime: 2019-08-06T04:13:29Z
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
Service YAML:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2019-08-06T04:13:34Z
  labels:
    run: nodejsapp
  name: nodejsapp
  namespace: default
  resourceVersion: "25444"
  selfLink: /api/v1/namespaces/default/services/nodejsapp
  uid: 8ef81536-b800-11e9-816e-42010a9600de
spec:
  clusterIP: XXX.XXX.XXX.XXX
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 32393
    port: 80
    protocol: TCP
    targetPort: 5000
  selector:
    run: nodejsapp
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: XXX.XXX.XXX.XXX
The Node.js server is configured to run on port 5000. I also tried it without port-forwarding, but that made no difference to the result.
Any help is much appreciated.
UPDATE:
I used this guide and followed the instructions: https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app
UPDATE 2:
FINALLY - figured it out. I'm not sure why this isn't mentioned anywhere, but you have to create an Ingress that routes the traffic to the pod accordingly.
Here's the example config:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/backends: '{"k8s-be-32064--abfe1f07378017e9":"HEALTHY"}'
    ingress.kubernetes.io/forwarding-rule: k8s-fw-default-nodejsapp--abfe1f07378017e9
    ingress.kubernetes.io/target-proxy: k8s-tp-default-nodejsapp--abfe1f07378017e9
    ingress.kubernetes.io/url-map: k8s-um-default-nodejsapp--abfe1f07378017e9
  creationTimestamp: 2019-08-06T18:59:15Z
  generation: 1
  name: nodejsapp
  namespace: default
  resourceVersion: "171168"
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/versapay-api
  uid: 491cd248-b87c-11e9-816e-42010a9600de
spec:
  backend:
    serviceName: nodejsapp
    servicePort: 80
status:
  loadBalancer:
    ingress:
    - ip: XXX.XXX.XXX
Adding this as an answer since I need to include an image (but it is not necessarily a complete answer):
As shown in the image, a green tick should be visible beside your backend service.
Probable solution:
In your NodeJsApp, please add the following base URLs, i.e.:
When the application is started locally, http://localhost:5000/ should return a 200 status code (ideally with a "Server is running..." or similar message).
Also, if path-based routing is enabled, another base URL is required:
http://localhost:5000/<nodeJsAppUrl>/ should also return a 200 status code.
The above URLs are required for the health checks of both the LoadBalancer and the Backend Service; once they respond, redeploy the service.
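As a quick local check (a sketch; port 5000 as in the question, and <nodeJsAppUrl> is a placeholder for your app's base path):

$ curl -i http://localhost:5000/                  # should return HTTP/1.1 200
$ curl -i http://localhost:5000/<nodeJsAppUrl>/   # should also return 200 if path-based routing is enabled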
Please let me know if the above solution doesn't fix the said issue.
You need an intermediate service to internally expose your deployment.
Right now, you have a set of pods grouped in a deployment and a load balancer exposed in your cluster but you need to link them with an additional service.
You can try using a NodePort like the following:
apiVersion: v1
kind: Service
metadata:
  name: nodejsapp-nodeport
spec:
  selector:
    run: nodejsapp
  ports:
    - name: default
      protocol: TCP
      port: 32393
      targetPort: 5000
  type: NodePort
This NodePort service sits between your load balancer and the pods in your deployment, targeting them on port 5000 and exposing port 32393 (as per the settings in your original question; you can change it).
From here, you can redeploy your load balancer to target the NodePort service above. This way, you can reach your Node.js app via port 80 from your load balancer's public address.
apiVersion: v1
kind: Service
metadata:
  name: nodejs-lb
spec:
  selector:
    run: nodejsapp
  ports:
    - name: default
      protocol: TCP
      port: 80
      targetPort: 32393
  type: LoadBalancer
The whole scenario would look like this:
publicly exposed address --> LoadBalancer --> NodePort --> Deployment --> Pods
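To verify the chain once both services are applied (a sketch; the external IP comes from the LoadBalancer service's status):

$ kubectl get svc nodejsapp-nodeport nodejs-lb
# Then, from outside the cluster:
$ curl -i http://<EXTERNAL-IP>/   # should reach the Node.js app on port 5000 via the chain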

Azure Kubernetes Services 502 Bad Gateway

I have an AKS cluster up and running, and under heavy user load I get some 502 Bad Gateway responses. This only happens when the request load is high; I used Azure DevOps load testing to reproduce this behavior. I believe it has something to do with load balancer timeouts, but I am not too sure how to go about debugging it. Perhaps I should be checking logs somewhere? Searching around Google tells me I should check the nginx logs, but I'm not sure where to find those. Sorry, I am a newbie in the Kubernetes world.
These are all the pods in the cluster; the apserver-api-... pods are my actual apps that serve the requests: [screenshot of pod list omitted]
The YAML file used to generate this:
# DS for AP
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: apserver-api
spec:
  updateStrategy:
    type: RollingUpdate
  selector:
  template:
    metadata:
      labels:
        app: apserver-api
    spec:
      containers:
        - name: apserver-api
          image: IMAGE
          env:
            - name: APP_SVC
              value: apserver-api
          ports:
            - containerPort: 80
          imagePullPolicy: IfNotPresent
# Service for AP
kind: Service
apiVersion: v1
metadata:
  labels:
    app: apserver-api
  name: apserver-api
spec:
  type: ClusterIP
  ports:
    - name: http
      port: 80
    - name: https
      port: 443
      targetPort: 80
  selector:
    app: apserver-api
  type: "LoadBalancer"
And a screenshot of the load test: [screenshot of load test results omitted]
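On the question of where to find those nginx logs: if an NGINX ingress controller is deployed in the cluster, its pods and logs can usually be located with kubectl (a sketch; the namespace and label below are the ingress-nginx defaults and depend on how the controller was installed):

$ kubectl get pods --all-namespaces | grep -i ingress
$ kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx --tail=100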
