How to deploy a Node.js app with Redis on Kubernetes?

I have a very simple node.js application (HTTP service), which "talks" to redis. I want to create a deployment and run it with minikube.
From my understanding, I need a kubernetes Pod for my app, based on the docker image. Here's my Dockerfile:
FROM node:8.9.1
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD ["npm", "start"]
I build the docker image with docker build -t my-app .
Next, I created a Pod definition for my app's Pod:
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: my-app:latest
    imagePullPolicy: Never
    ports:
    - containerPort: 8080
So far, so good. But from now on, I have no clear idea how to proceed with redis:
should redis be another Pod, or a Service (in terms of Kubernetes kind)?
How do I reference redis from inside my app? Based on whether redis will be defined as a Pod/Service, how do I obtain a connection URL and port? I read about environment variables being created by Kubernetes, but I am not sure whether these work for Pods or Services.
How do I aggregate both (my app & redis) under a single configuration? How do I make sure that redis starts first, then my app (which requires a running redis instance)? And how do I expose my HTTP endpoints to the "outside world"? I read about Deployments, but I am not sure how to connect these pieces together.
Ideally, I would like to have all configurations inside YAML files, so that at the end of the day the whole infrastructure could be started with a single command.

I think I figured out a solution (using a Deployment and a Service).
For my deployment, I used two containers (webapp + redis) within one Pod, since it doesn't make sense for the webapp to run without an active redis instance, and additionally it connects to redis upon application start. I could be wrong in this reasoning, so feel free to correct me if you think otherwise.
Here's my deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-deployment
spec:
  selector:
    matchLabels:
      app: my-app-deployment
  template:
    metadata:
      labels:
        app: my-app-deployment
    spec:
      containers:
      - name: redis
        image: redis:latest
        ports:
        - containerPort: 6379
        volumeMounts:
        - mountPath: /srv/www
          name: redis-storage
      - name: my-app
        image: my-app:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
      volumes:
      - name: redis-storage
        emptyDir: {}
And here's the Service definition:
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  ports:
  - port: 8080
    protocol: TCP
  type: NodePort
  selector:
    app: my-app-deployment
I create the deployment with:
kubectl create -f deployment.yaml
Then, I create the service with kubectl create -f service.yaml
I read the IP with minikube ip and extract the port from the output of kubectl describe service my-app-service.
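A shorter route that prints the full URL directly (assuming minikube and the service name above):
# prints http://<minikube-ip>:<node-port> for the NodePort service
minikube service my-app-service --url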

I agree with all of the previous answers. I'm just trying to make things simpler by executing a single command.
First, create the necessary manifests for redis in a file, say redis.yaml, including a service to expose it outside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: redis
  labels:
    app: node-redis
spec:
  ports:
  - name: redis
    port: 6379
    targetPort: 6379
  type: NodePort
  selector:
    app: node-redis
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  selector:
    matchLabels:
      app: node-redis
  replicas: 1
  template:
    metadata:
      labels:
        app: node-redis
    spec:
      containers:
      - name: redis
        image: redis:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 6379
        # data volume where redis writes data
        volumeMounts:
        - name: data
          mountPath: /data
          readOnly: false
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: redis-data
---
# data volume
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-data
  labels:
    app: node-redis
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
Next, put the manifests for your app in another file, say my-app.yaml. Here I put in the volume field so that your app can use the data stored by redis.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  labels:
    app: node-redis
spec:
  containers:
  - name: my-app
    image: my-app:latest
    ports:
    - containerPort: 8080
    # data volume from which my-app reads the data written by redis
    volumeMounts:
    - name: data
      mountPath: /data
      readOnly: false
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: redis-data
Now we can use the following bash file my-app.sh.
#!/bin/bash
kubectl create -f redis.yaml
# the redis pod is named redis-<hash>, so match on "redis"
pod_name=$(kubectl get po -l app=node-redis | grep redis | awk '{print $1}')
# check whether the redis server is ready or not
while true; do
  pong=$(kubectl exec -it $pod_name -c redis -- redis-cli ping)
  if [[ "$pong" == *"PONG"* ]]; then
    echo ok
    break
  fi
done
kubectl create -f my-app.yaml
Just run chmod +x my-app.sh; ./my-app.sh to deploy. To get the URL, run minikube service redis --url. You can get the URL for your app similarly. The only thing is that your app needs a NodePort-type service to be accessible from outside the cluster (see the sketch below).
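For reference, a minimal NodePort service for the app could look like the sketch below; note that the selector assumes you give the app pod its own label (e.g. app: my-app), since in the manifests above it shares app: node-redis with redis:
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 8080
    targetPort: 8080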
So, everything is in your hand now.

I would run redis in a separate pod (i.e., so your web app doesn't take down the redis server if it crashes).
Here is your redis deployment & service:
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  selector:
    matchLabels:
      app: redis
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
    spec:
      volumes:
      - name: host-sys
        hostPath:
          path: /sys
      initContainers:
      - name: disable-thp
        image: redis:4.0-alpine
        volumeMounts:
        - name: host-sys
          mountPath: /host-sys
        command: ["sh", "-c", "echo never > /host-sys/kernel/mm/transparent_hugepage/enabled"]
      containers:
      - name: redis
        image: redis:4.0-alpine
        imagePullPolicy: IfNotPresent
        resources:
          requests:
            cpu: 350m
            memory: 1024Mi
        ports:
        - containerPort: 6379
service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: redis
  labels:
    app: redis
spec:
  ports:
  - port: 6379
    name: redis
  selector:
    app: redis
Since we've exposed a Kubernetes Service, you can then access your redis instance by hostname, or its "service name", which is redis.
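From the Node.js side, connecting by that service name could look like the following sketch (assuming the redis npm package, node-redis v4; the hostname redis is resolved by cluster DNS):
// sketch assuming the `redis` npm package (node-redis v4)
const { createClient } = require('redis');

// 'redis' is the Kubernetes Service name, resolved by cluster DNS
const client = createClient({ url: 'redis://redis:6379' });
client.on('error', (err) => console.error('redis error', err));

(async () => {
  await client.connect();
  await client.set('hello', 'world');
  console.log(await client.get('hello')); // -> world
})();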
You can check out my kubernetes redis repository at https://github.com/mateothegreat/k8-byexamples-redis. You can simply run make install if you want the easier route.
Good luck and if you're still stuck please reach out!

Yes, you need a separate Deployment and Service for redis.
Use Kubernetes service discovery; it is built in (KubeDNS, CoreDNS).
Use readiness and liveness probes (see the probe sketch after this answer).
Yes, you can write a single big YAML file to describe all the deployments and services, then:
kubectl apply -f yourfile.yml
Or you can place the YAML in separate files and then do:
kubectl apply -f dir/
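For the probes bullet, here is a sketch of what readiness and liveness probes could look like on a redis container (the timing values are illustrative, not prescriptive):
containers:
- name: redis
  image: redis:latest
  ports:
  - containerPort: 6379
  # readiness: don't route Service traffic until redis answers on its port
  readinessProbe:
    tcpSocket:
      port: 6379
    initialDelaySeconds: 5
    periodSeconds: 10
  # liveness: restart the container if redis stops responding
  livenessProbe:
    exec:
      command: ["redis-cli", "ping"]
    initialDelaySeconds: 15
    periodSeconds: 20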

I recommend you read the k8s docs further, but in general, regarding your questions raised above:
Yes: another Pod (with the relevant configuration) and an additional Service, depending on your use case; check this great example: https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/
Using Services; read more here: https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/
There are several ways to manage dependencies (search for deployment dependencies), but in general you can append them in the same file with a readiness endpoint and expose them using a Service - read more in the link in bullet 2. One common pattern is sketched below.
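One common way to express such a dependency is an initContainer that blocks until the dependency is reachable. A sketch, assuming a redis Service on port 6379 and the busybox image:
spec:
  initContainers:
  # polls the redis Service until its port accepts connections
  - name: wait-for-redis
    image: busybox:1.36
    command: ['sh', '-c', 'until nc -z redis 6379; do echo waiting for redis; sleep 2; done']
  containers:
  - name: my-app
    image: my-app:latest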

Related

How can I deploy an app made using MERN technology on Kubernetes?

I have a MERN app running on my localhost. How can I deploy it on Kubernetes, given that the MongoDB URL is set in the backend, and how does the frontend interact with the Node.js API?
I am trying to connect the MongoDB container's service to Node.js, and the Node.js service to the React containers.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: todo-server-app-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: todo-server-app
  template:
    metadata:
      labels:
        app: todo-server-app
    spec:
      containers:
      - image: summer07/backend1:1.0
        name: container1
        ports:
        - containerPort: 5000
        imagePullPolicy: IfNotPresent
---
apiVersion: v1
kind: Service
metadata:
  name: todo-server-app
spec:
  selector:
    app: backend
  ports:
  - name: http
    port: 80
    targetPort: 5000
  externalTrafficPolicy: Cluster
This is my backend yml.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo
spec:
  selector:
    matchLabels:
      app: mongo
  replicas: 1
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
      - image: mongo:4.0.9-xenial
        name: container1
        ports:
        - containerPort: 27017
        command:
        - mongod
        - "--bind_ip"
        - "0.0.0.0"
        imagePullPolicy: Always
        volumeMounts:
        - mountPath: /data/db
          name: todo-mongo-vol
      volumes:
      - name: todo-mongo-vol
        persistentVolumeClaim:
          claimName: todo-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    app: mongo
spec:
  selector:
    app: mongo
  ports:
  - port: 27017
    targetPort: 27017
This is my mongo yml.
In the logs I see that the mongo pod is connected to the nodejs pod, but when I use the minikube IP and the nodejs service port in the browser, I am not able to get a response.
In the nodejs backend I connect to mongo using the mongo service name and port.
If you deploy the application to Kubernetes, then for the MongoDB database you can use an operator to manage the pods and deployment.
In Kubernetes, services connect to each other internally using the service name. A Service forwards traffic to the Deployment, which is backed by the pods.
So if you deployed the MongoDB database, there will be a svc:
apiVersion: v1
kind: Service
metadata:
  name: mongo
so the connection string will be something like
const url = `mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@mongo:${MONGO_PORT}/${MONGO_DB}?authSource=admin`;
The same goes from React to the Node.js service; the service name is used internally as DNS:
curl http://node-service:8080
You can pass the Node.js service name as the host in your code accordingly.
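Putting the mongo URL above to use from the Node.js backend could look like this sketch (assuming the mongoose package; the database name todos is illustrative):
// sketch assuming the mongoose package; 'mongo' is the Service name
const mongoose = require('mongoose');

// cluster DNS resolves the Service name 'mongo' to the mongo pod
mongoose.connect('mongodb://mongo:27017/todos')
  .then(() => console.log('connected to mongo'))
  .catch((err) => console.error('mongo connection failed', err));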

Kubernetes Crashloopbackoff With Minikube

So I am learning about Kubernetes with a guide, and I am trying to deploy a MongoDB Pod with 1 replica. This is the deployment config file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-deployment
  labels:
    app: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
      - name: mongodb
        image: mongo
        ports:
        - containerPort: 27017
        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          valueFrom:
            secretKeyRef:
              name: mongodb-secret
              key: mongo-root-username
        - name: MONGO_INITDB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongodb-secret
              key: mongo-root-password
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  selector:
    app: mongodb
  ports:
  - protocol: TCP
    port: 27017
    targetPort: 27017
I also tried deploying a Mongo-Express Pod with almost the same config file, but I keep getting CrashLoopBackOff for both Pods. From the little understanding I have, this is caused by the container failing and restarting in a cycle. I went through the events with kubectl get events and I see that a warning with the message Back-off restarting failed container keeps occurring. I also did a little digging around and came across a solution that says to add
command: ['sleep']
args: ['infinity']
That fixed the CrashLoopBackOff issue, but when I try to get the logs for the Pod, nothing is displayed on the terminal. I need some help and a possible explanation of how the command and args seem to fix it, and how I can stop this crash from happening to my Pods. Thank you very much.
My advice is to deploy MongoDB as a StatefulSet on Kubernetes.
In a stateful application, the N replicas of master nodes manage several worker nodes within a cluster, so if any master node goes down, the other ordinal instances remain active to execute the workflow. Each instance of a StatefulSet is identified by a unique, stable ordinal number.
See more: mongodb-sts, mongodb-on-kubernetes.
Also use a headless Service to manage the domain of a Pod. With a headless Service there is no need for a LoadBalancer or for kube-proxy to reach the Pods through a single Service IP; clients interact with the Pods directly, so the cluster IP is set to None.
In your case:
apiVersion: v1
kind: Service
metadata:
  name: mongodb
spec:
  clusterIP: None
  selector:
    app: mongodb
  ports:
  - port: 27017
Also, the error:
uncaught exception: Error: couldn't add user: Error preflighting normalization: U_STRINGPREP_PROHIBITED_ERROR _getErrorWithCode@src/mongo/shell/utils.js:25:13
indicates that the secret may be missing. Take a look: mongodb-initializating.
In your case the secret should look similar to:
apiVersion: v1
kind: Secret
metadata:
  name: mongodb-secret
type: Opaque
data:
  mongo-root-username: YWRtaW4=
  mongo-root-password: MWYyZDFlMmU2N2Rm
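The data values are base64-encoded; the samples above decode to admin and 1f2d1e2e67df. To produce your own values:
# -n keeps a trailing newline out of the encoded value
echo -n 'admin' | base64          # YWRtaW4=
echo -n '1f2d1e2e67df' | base64   # MWYyZDFlMmU2N2Rm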
Remember also to configure a volume for your pods - follow the tutorials I have linked above.
Deploy MongoDB as a StatefulSet, not as a Deployment.
Example:
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
  labels:
    name: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  clusterIP: None
  selector:
    role: mongo
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongod
spec:
  serviceName: mongodb-service
  replicas: 3
  template:
    metadata:
      labels:
        role: mongo
        environment: test
        replicaset: MainRepSet
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: replicaset
                  operator: In
                  values:
                  - MainRepSet
              topologyKey: kubernetes.io/hostname
      terminationGracePeriodSeconds: 10
      volumes:
      - name: secrets-volume
        secret:
          secretName: shared-bootstrap-data
          defaultMode: 256
      containers:
      - name: mongod-container
        #image: pkdone/mongo-ent:3.4
        image: mongo
        command:
        - "numactl"
        - "--interleave=all"
        - "mongod"
        - "--wiredTigerCacheSizeGB"
        - "0.1"
        - "--bind_ip"
        - "0.0.0.0"
        - "--replSet"
        - "MainRepSet"
        - "--auth"
        - "--clusterAuthMode"
        - "keyFile"
        - "--keyFile"
        - "/etc/secrets-volume/internal-auth-mongodb-keyfile"
        - "--setParameter"
        - "authenticationMechanisms=SCRAM-SHA-1"
        resources:
          requests:
            cpu: 0.2
            memory: 200Mi
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: secrets-volume
          readOnly: true
          mountPath: /etc/secrets-volume
        - name: mongodb-persistent-storage-claim
          mountPath: /data/db
  volumeClaimTemplates:
  - metadata:
      name: mongodb-persistent-storage-claim
      annotations:
        volume.beta.kubernetes.io/storage-class: "standard"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi

Azure kubernetes service loadbalancer external IP not accessible

I am new to the world of Kubernetes and was testing a sample Django "Hello world" app deployment. Using docker-compose I was able to access the hello world page in a browser, but I need to use Kubernetes. So I tested two options and neither of them worked.
1) I created an Azure CICD pipeline to build and push the image to ACR using the following Dockerfile,
FROM python:3.8
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN mkdir /hello_world
WORKDIR /hello_world
COPY . /hello_world/
RUN pip install -r requirements.txt
CMD [ "python", "manage.py", "runserver", "0.0.0.0:8000" ]
The pipeline completes successfully and uploads the image in the repository.
Now I use kubectl to deploy using the deployment file,
apiVersion: apps/v1
kind: Deployment
metadata:
  name: django-helloworld
spec:
  replicas: 3
  selector:
    matchLabels:
      app: django-helloworld
  template:
    metadata:
      labels:
        app: django-helloworld
    spec:
      containers:
      - name: django-helloworld
        image: acrshgpdev1.azurecr.io/django-helloworld:194
        #imagePullPolicy: Always
        ports:
        - containerPort: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: django-helloworld-service
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: django-helloworld
The deployment and service are created but when I try to access the external IP of the LB service through a browser the page is inaccessible. I used the external ip:port and it didn't work.
Any thoughts why would this be happening?
2) I used the same Dockerfile but a different deployment file (changed the image to the locally created image & removed the LB service) to deploy the app to my local Kubernetes. The deployment file was as follows,
apiVersion: v1
kind: Service
metadata:
  name: django-helloworld-service
spec:
  selector:
    app: django-helloworld
  ports:
  - protocol: TCP
    port: 80
    targetPort: 30800
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: django-helloworld
spec:
  replicas: 3
  selector:
    matchLabels:
      app: django-helloworld
  template:
    metadata:
      labels:
        app: django-helloworld
    spec:
      containers:
      - name: django-helloworld
        image: django-helloworld:1.0
        #imagePullPolicy: Always
        ports:
        - containerPort: 8000
It creates the deployment and service, but doesn't assign an external IP to the NodePort service, so I am not able to figure out which service type I should choose to test that the app works. I know I can't choose a LoadBalancer, as it doesn't work locally; I would need to deploy using a cloud service.
just configure your service to be of type LoadBalancer and do a proper port mapping:
apiVersion: v1
kind: Service
metadata:
  name: django-helloworld-service
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8000
  selector:
    app: django-helloworld
https://kubernetes.io/docs/concepts/services-networking/service/
Make sure the deployment has associated healthy pods too (they show as Running and with 1/1 next to their name). If there aren't any, make sure your cluster can successfully pull from the acrshgpdev1.azurecr.io registry; you can integrate an AKS cluster directly with an ACR registry following this article:
az aks update -n myAKSCluster -g myResourceGroup --attach-acr acrshgpdev1.azurecr.io
or by adding the SP of the AKS cluster manually to the Reader role on the ACR.
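The manual route boils down to something like the sketch below (the service principal id and ACR resource id are placeholders you would look up first):
az role assignment create \
  --assignee <aks-service-principal-id> \
  --scope <acr-resource-id> \
  --role Reader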

Setting up a React application and NodeJS backend in Kubernetes?

I am trying to setup a sample React application wired to a NodeJS backend as two pods in Kubernetes. This is (mostly) the default CRA and NodeJS application with Express, i.e. npx create-react-app my_app.
Both applications run fine locally through yarn start and node app.js respectively. The React application uses a proxy defined in package.json to communicate with the NodeJS back-end.
React package.json
...
"proxy": "http://localhost:3001/"
...
React Dockerfile
FROM node:10
WORKDIR /usr/src/app
COPY package*.json ./
RUN yarn
COPY . .
CMD [ "yarn", "start" ]
NodeJS Dockerfile
FROM node:10
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3001
CMD [ "node", "app.js" ]
ui-deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: sample-ui
  namespace: my_namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my_namespace
      component: sample-ui
  template:
    metadata:
      labels:
        app: my_namespace
        component: sample-ui
    spec:
      containers:
      - name: sample-ui
        image: xxx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 3000
          name: http
          protocol: TCP
        resources:
          limits:
            cpu: 100m
            memory: 128Mi
          requests:
            cpu: 100m
            memory: 128Mi
server-deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: sample-server
  namespace: my_namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my_namespace
      component: sample-server
  template:
    metadata:
      labels:
        app: my_namespace
        component: sample-server
    spec:
      containers:
      - name: sample-server
        image: xxx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 3001
          name: http
          protocol: TCP
        resources:
          limits:
            cpu: 100m
            memory: 128Mi
          requests:
            cpu: 100m
            memory: 128Mi
ui-service
apiVersion: v1
kind: Service
metadata:
  name: sample-ui
  namespace: my_namespace
  labels: {app: sample-ui}
spec:
  type: LoadBalancer
  selector:
    component: sample-ui
  ports:
  - name: listen
    protocol: TCP
    port: 3000
server-service
apiVersion: v1
kind: Service
metadata:
  name: sample-server
  namespace: my_namespace
  labels: {app: sample-server}
spec:
  selector:
    component: sample-server
  ports:
  - name: listen
    protocol: TCP
    port: 3001
Both services run fine on my system.
get svc
sample-server ClusterIP 10.19.255.171 <none> 3001/TCP 26m
sample-ui LoadBalancer 10.19.242.42 34.82.235.125 3000:31074/TCP 26m
However, my deployment for the CRA crashes multiple times despite indicating it is still running.
get pods
sample-server-598776c5fc-55jsz 1/1 Running 0 42m
sample-ui-c75ccb746-qppk2 1/1 Running 4 2m38s
I suspect that my React Dockerfile is improperly configured, but I'm not sure how to write it to work with a NodeJS backend in Kubernetes.
a) How can I setup my Dockerfile for my CRA such that it will run in a pod?
b) How can I setup my docker services and pods such that they communicate?
You will have to use some API gateway in front of your server, or you can use Ambassador from Kubernetes.
Then you can get your client connected to the server.
a) How can I setup my Dockerfile for my CRA such that it will run in a pod?
Your React Dockerfile looks good; you need to check why the pod's container is failing.
Use kubectl describe pod <pod name>, or dig deeper into the logs using the command kubectl logs <pod name>.
b) How can I setup my docker services and pods such that they communicate?
For this you are on the right track: in Kubernetes, the server and frontend communicate using the service name.
This might seem weird at first, but Kubernetes DNS takes care of it.
If you have two services, frontend (sample-ui) and backend (sample-server), sample-ui will send requests to sample-server, and that is how they get connected.
You can also try this by going inside the sample-ui pod (container):
kubectl exec -it sample-ui-c75ccb746-qppk2 -- /bin/bash
Now you are inside the sample-ui container; let's send a request to sample-server from here.
If curl doesn't exist, you can install it using apk add curl, apt-get install curl, or yum install curl.
curl http://sample-server:3001
Magic: you should see a response from the server.
So the whole flow goes like this: the user comes in through the frontend LoadBalancer service > which routes to sample-ui > and internally, inside the Kubernetes cluster, sample-ui calls sample-server.
Every service that you create inside K8s is accessible by its name.
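Within the same namespace the short name is enough; across namespaces you can use the fully qualified form (a sketch; <namespace> is a placeholder):
# short form, from a pod in the same namespace
curl http://sample-server:3001
# fully qualified form, from any namespace
curl http://sample-server.<namespace>.svc.cluster.local:3001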

Getting a validation error when trying to apply a YAML file in AKS

I'm following along with this tutorial. I'm at the stage where I deploy using the command:
kubectl apply -f azure-vote-all-in-one-redis.yaml
The YAML file looks like this:
version: '3'
services:
  azure-vote-back:
    image: redis
    container_name: azure-vote-back
    ports:
    - "6379:6379"
  azure-vote-front:
    build: ./azure-vote
    image: azure-vote-front
    container_name: azure-vote-front
    environment:
      REDIS: azure-vote-back
    ports:
    - "8080:80"
However, I'm getting the error:
error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
If I add an apiVersion and a Kind, like this:
apiVersion: v1
kind: Pod
Then I get the error:
error validating data: ValidationError(Pod): unknown field "services" in io.k8s.api.core.v1.Pod
Am I missing something here?
It looks like you're trying to apply a Docker Swarm/Compose YAML file to your Kubernetes cluster. This will not work directly without a conversion.
Using a tool like Kompose to convert your Docker YAML into k8s YAML is a useful step in migrating from one to the other.
For more information see https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/
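The conversion itself is a one-liner (a sketch, assuming the compose file above is saved as docker-compose.yaml):
# emits Kubernetes manifests (deployments/services) from the compose file
kompose convert -f docker-compose.yaml
You can then kubectl apply the generated files.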
So first of all, every YAML definition should follow the AKMS structure: apiVersion, kind, metadata, spec. Also, you should avoid bare Pods and use Deployments instead, because Deployments handle Pods on their own.
Here's a sample vote-back/front definition:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: azure-vote-back
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: azure-vote-back
    spec:
      containers:
      - name: azure-vote-back
        image: redis
        ports:
        - containerPort: 6379
          name: redis
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-back
spec:
  ports:
  - port: 6379
  selector:
    app: azure-vote-back
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: azure-vote-front
spec:
  replicas: 3
  strategy:
    rollingUpdate:
      maxSurge: 60%
      maxUnavailable: 60%
  template:
    metadata:
      labels:
        app: azure-vote-front
    spec:
      containers:
      - name: azure-vote-front
        image: aksrg.azurecr.io/azure-vote-front:voting-dev
        ports:
        - containerPort: 80
        env:
        - name: REDIS
          value: "azure-vote-back"
        - name: MY_POD_NAMESPACE
          valueFrom: {fieldRef: {fieldPath: metadata.namespace}}
      imagePullSecrets:
      - name: k8s
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-front
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: azure-vote-front
In my case, I am deploying my project on GKE via Travis. In my travis file, I am calling a shell file (deploy.sh).
In the deploy.sh file, I have written all the steps to create kubernetes resources:
### Deploy
# Apply k8s config
kubectl apply -f .
So here, I replaced kubectl apply -f . with the individual file names as follows:
### Deploy
# Apply k8s config
kubectl apply -f namespace.yaml
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
kubectl apply -f ingress.yaml
And then, the error is fixed!
