applying changes to pod code source realtime - npm - node.js

I have a React.js app running in a pod, and I have mounted the source code from the host machine into the pod. It works fine, and when I change my code on the host machine the pod's source code also changes, but when I run the site the change has not affected the application. Here is my manifest; what am I doing wrong?
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 1
  minReadySeconds: 15
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: webapp
        tier: frontend
        phase: development
    spec:
      containers:
      - name: webapp
        image: xxxxxx
        command:
        - npm
        args:
        - run
        - dev
        env:
        - name: environment
          value: dev
        - name: AUTHOR
          value: webapp
        ports:
        - containerPort: 3000
        volumeMounts:
        - mountPath: /code
          name: code
      imagePullSecrets:
      - name: regcred
      volumes:
      - name: code
        hostPath:
          path: /hosthome/xxxx/development/react-app/src
And I know for a fact that npm is not watching my changes. How can I resolve this in pods?

Basically, you need to reload your application every time you change your code, and your pods don't reload or restart when you change the code under the /code directory. You will have to re-create your pod. Since you are using a Deployment, you can either:
kubectl delete pod <pod-where-your-app-is-running>
or
export PATCH='{"spec":{"template":{"metadata":{"annotations":{"timestamp":"'$(date)'"}}}}}'
kubectl patch deployment webapp -p "$PATCH"
Your pods should restart after that.
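As a side note (not part of the original answer): on kubectl 1.15 and later, the same pod re-creation can be triggered with the built-in rollout restart:
kubectl rollout restart deployment webapp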

What Rico has mentioned is correct: you need to patch or rebuild with every change. You can avoid that by running minikube without a VM driver. Here is the command to run minikube without a VM driver (it only works on Linux); by doing this you can mount a host path into the pod. Hope this will help:
sudo minikube start --bootstrapper=localkube --vm-driver=none --apiserver-ips 127.0.0.1 --apiserver-name localhost -v=1
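If you do end up on a VM-based driver, another option (an assumption on my part, not part of this answer) is minikube's host folder mount, which exposes a host directory inside the minikube node so that a hostPath volume can reference it:
minikube mount /hosthome/xxxx/development/react-app/src:/code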

Related

Running Testcafe in Docker Containers in Kubernetes - 1337 Port is Already in Use - Error

I have multiple Testcafe scripts (script1.js, script2.js) that are working fine. I have Dockerized this code with a Dockerfile, and it works fine when I run the Docker image. Next, I want to invoke this Docker image as a CronJob in Kubernetes. Given below is my manifest.yaml file.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: application-automation-framework
  namespace: development
  labels:
    team: development
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    metadata:
      labels:
        team: development
    spec:
      ttlSecondsAfterFinished: 120
      backoffLimit: 3
      template:
        metadata:
          labels:
            team: development
        spec:
          containers:
          - name: script1-job
            image: testcafe-minikube
            imagePullPolicy: Never
            args: ["chromium:headless", "script1.js"]
          - name: script2-job
            image: testcafe-minikube
            imagePullPolicy: Never
            args: ["chromium:headless", "script2.js"]
          restartPolicy: OnFailure
As seen above, this manifest has two containers running. When I apply this manifest to Kubernetes, the first container (script1-job), runs well. But the second container (script2-job) gives me the following error.
ERROR The specified 1337 port is already in use by another program.
If I run this with one container, it works perfectly. I also tried changing the args of the containers to the following.
args: ["chromium:headless", "script1.js", "--ports 12345,12346"]
args: ["chromium:headless", "script2.js", "--ports 1234,1235"]
Still, I get the same error saying port 1337 is already in use. (I wonder whether the --ports argument works at all in Docker.)
This is my Dockerfile for reference.
FROM testcafe/testcafe
COPY . ./
USER root
RUN npm install
Could someone please help me with this? I want to run multiple containers as a CronJob in Kubernetes, so that I can run multiple Testcafe scripts in each job invocation.
Adding the containerPort configuration to your Kubernetes resource should do the trick.
For example:
spec:
  containers:
  - name: script1-job
    image: testcafe-minikube
    imagePullPolicy: Never
    args: ["chromium:headless", "script1.js", "--ports 12345,12346"]
    ports:
    - containerPort: 12346
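Worth noting: both containers run in the same pod and therefore share one network namespace, so by default both TestCafe instances try to bind port 1337. Giving each container its own non-overlapping --ports pair, with matching containerPort entries, avoids the clash. A sketch of what the second container might look like (the port numbers are just examples):
  - name: script2-job
    image: testcafe-minikube
    imagePullPolicy: Never
    args: ["chromium:headless", "script2.js", "--ports 12347,12348"]
    ports:
    - containerPort: 12348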

Kubernetes - "Mount Volume Failed" when trying to deploy

I deployed my first container and got the info:
deployment.apps/frontarena-ads-deployment created
but then I saw that my container creation is stuck in the Waiting status.
Then I checked the events with kubectl describe pod frontarena-ads-deployment-5b475667dd-gzmlp and saw a MountVolume error that I cannot figure out:
Warning  FailedMount  9m24s  kubelet  MountVolume.SetUp failed for volume "ads-filesharevolume" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/85aa3bfa-341a-4da1-b3de-fb1979420028/volumes/kubernetes.io~azure-file/ads-filesharevolume --scope -- mount -t cifs -o username=frontarenastorage,password=mypassword,file_mode=0777,dir_mode=0777,vers=3.0 //frontarenastorage.file.core.windows.net/azurecontainershare /var/lib/kubelet/pods/85aa3bfa-341a-4da1-b3de-fb1979420028/volumes/kubernetes.io~azure-file/ads-filesharevolume
Output: Running scope as unit run-rf54d5b5f84854777956ae0e25810bb94.scope.
mount error(115): Operation now in progress
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
Before I run the deployment I created a secret in Azure, using the already created azure file share, which I referenced within the YAML.
$AKS_PERS_STORAGE_ACCOUNT_NAME="frontarenastorage"
$STORAGE_KEY="mypassword"
kubectl create secret generic fa-fileshare-secret --from-literal=azurestorageaccountname=$AKS_PERS_STORAGE_ACCOUNT_NAME --from-literal=azurestorageaccountkey=$STORAGE_KEY
In that file share I have folders and files which I need to mount and I reference azurecontainershare in YAML:
My YAML looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontarena-ads-deployment
  labels:
    app: frontarena-ads-deployment
spec:
  replicas: 1
  template:
    metadata:
      name: frontarena-ads-aks-test
      labels:
        app: frontarena-ads-aks-test
    spec:
      containers:
      - name: frontarena-ads-aks-test
        image: faselect-docker.dev/frontarena/ads:test1
        imagePullPolicy: Always
        ports:
        - containerPort: 9000
        volumeMounts:
        - name: ads-filesharevolume
          mountPath: /opt/front/arena/host
      volumes:
      - name: ads-filesharevolume
        azureFile:
          secretName: fa-fileshare-secret
          shareName: azurecontainershare
          readOnly: false
      imagePullSecrets:
      - name: fa-repo-secret
  selector:
    matchLabels:
      app: frontarena-ads-aks-test
The issue was caused by the AKS cluster and the Azure File Share being deployed in different Azure regions. If they are in the same region you will not have this issue.
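If you want to confirm this before redeploying, the regions of both resources can be compared with the Azure CLI (the resource group and cluster names below are placeholders):
az storage account show --name frontarenastorage --query primaryLocation -o tsv
az aks show --resource-group <your-rg> --name <your-aks-cluster> --query location -o tsv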

Is there a way to install a kernel module on an AKS node in Azure Kubernetes?

I am trying to install the nfs-kernel-server package on all nodes in my AKS cluster. The kernel module for NFS is not installed by default in AKS Ubuntu 16.04. I am following the guide here: https://medium.com/@patnaikshekhar/initialize-your-aks-nodes-with-daemonsets-679fa81fd20e.
my daemonset:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: installer
  namespace: node-installer
spec:
  selector:
    matchLabels:
      job: installer
  template:
    metadata:
      labels:
        job: installer
    spec:
      hostPID: true
      restartPolicy: Always
      containers:
      - image: patnaikshekhar/node-installer:1.3
        name: installer
        securityContext:
          privileged: true
        volumeMounts:
        - name: install-script
          mountPath: /tmp
        - name: host-mount
          mountPath: /host
      volumes:
      - name: install-script
        configMap:
          name: sample-installer-config
      - name: host-mount
        hostPath:
          path: /tmp/install
and this is my configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: sample-installer-config
  namespace: node-installer
data:
  install.sh: |
    #!/bin/bash
    # Update and install packages
    apt-get update
    apt-get install nfs-kernel-server -y
Letting the pods build and complete (x3), when I inspect their logs, only the first pod's log shows the package being installed on the node; the rest of the pods have no logs at all. Is there a way to reliably accomplish this?
The way to accomplish this is with a DaemonSet and a ConfigMap, running the container as privileged with a host path mount. That way the container can install the required packages onto the host node, and the DaemonSet will apply the configuration to all new nodes in the cluster.
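For illustration, here is a rough sketch of what the installer container can do with that privileged, hostPID setup: run the mounted ConfigMap script against the host's namespaces and then sleep so the DaemonSet pod stays Running (the image and the nsenter invocation below are assumptions, not the contents of patnaikshekhar/node-installer):
containers:
- name: installer
  image: ubuntu:16.04            # assumption: any image that ships bash and nsenter
  securityContext:
    privileged: true             # needed to enter the host's namespaces
  command: ["/bin/bash", "-c"]
  args:
  # run the ConfigMap script in the host's namespaces (requires hostPID: true),
  # then keep the pod alive so it is not restarted in a loop
  - nsenter -t 1 -m -u -i -n -p -- bash -s < /tmp/install.sh && sleep infinity
  volumeMounts:
  - name: install-script
    mountPath: /tmp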

Setting up a React application and NodeJS backend in Kubernetes?

I am trying to set up a sample React application wired to a NodeJS backend as two pods in Kubernetes. This is (mostly) the default CRA and NodeJS application with Express, i.e. npx create-react-app my_app.
Both applications run fine locally through yarn start and node app.js respectively. The React application uses a proxy defined in package.json to communicate with the NodeJS back-end.
React package.json
...
"proxy": "http://localhost:3001/"
...
React Dockerfile
FROM node:10
WORKDIR /usr/src/app
COPY package*.json ./
RUN yarn
COPY . .
CMD [ "yarn", "start" ]
NodeJS Dockerfile
FROM node:10
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3001
CMD [ "node", "app.js" ]
ui-deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: sample-ui
  namespace: my_namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my_namespace
      component: sample-ui
  template:
    metadata:
      labels:
        app: my_namespace
        component: sample-ui
    spec:
      containers:
      - name: sample-ui
        image: xxx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 3000
          name: http
          protocol: TCP
        resources:
          limits:
            cpu: 100m
            memory: 128Mi
          requests:
            cpu: 100m
            memory: 128Mi
server-deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: sample-server
  namespace: my_namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my_namespace
      component: sample-server
  template:
    metadata:
      labels:
        app: my_namespace
        component: sample-server
    spec:
      containers:
      - name: sample-server
        image: xxx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 3001
          name: http
          protocol: TCP
        resources:
          limits:
            cpu: 100m
            memory: 128Mi
          requests:
            cpu: 100m
            memory: 128Mi
ui-service
apiVersion: v1
kind: Service
metadata:
  name: sample-ui
  namespace: my_namespace
  labels: {app: sample-ui}
spec:
  type: LoadBalancer
  selector:
    component: sample-ui
  ports:
  - name: listen
    protocol: TCP
    port: 3000
server-service
apiVersion: v1
kind: Service
metadata:
  name: sample-server
  namespace: my_namespace
  labels: {app: sample-server}
spec:
  selector:
    component: sample-server
  ports:
  - name: listen
    protocol: TCP
    port: 3001
Both services run fine on my system.
get svc
NAME            TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)          AGE
sample-server   ClusterIP      10.19.255.171   <none>          3001/TCP         26m
sample-ui       LoadBalancer   10.19.242.42    34.82.235.125   3000:31074/TCP   26m
However, my deployment for the CRA crashes multiple times despite indicating it is still running.
get pods
NAME                             READY   STATUS    RESTARTS   AGE
sample-server-598776c5fc-55jsz   1/1     Running   0          42m
sample-ui-c75ccb746-qppk2        1/1     Running   4          2m38s
I suspect that my React Dockerfile is improperly configured, but I'm not sure how to write it so that it works with a NodeJS backend in Kubernetes.
a) How can I setup my Dockerfile for my CRA such that it will run in a pod?
b) How can I setup my docker services and pods such that they communicate?
You will have to use some API gateway in front of your server, or you can use Ambassador with Kubernetes.
Then you can get your client connected to the server.
a) How can I setup my Dockerfile for my CRA such that it will run in a pod?
Your React Dockerfile looks good; you need to check why the pod's container is failing,
using kubectl describe pod <POD name>, or debug further with kubectl logs <pod name>.
b) How can I setup my docker services and pods such that they communicate?
For this you are on the right track: the server and the frontend communicate in Kubernetes using the Service name.
This might seem weird at first, but Kubernetes DNS takes care of it.
You have two Services, frontend (sample-ui) and backend (sample-server);
sample-ui sends its requests to sample-server, and that is how they get connected.
You can also try this by going inside the sample-ui pod (container):
kubectl exec -it sample-ui-c75ccb746-qppk2 -- /bin/bash
Now you are inside the sample-ui container; let's send a request to sample-server from here.
If curl does not exist, you can install it using apk add curl, apt-get install curl, or yum install curl:
curl http://sample-server:3001
Magic: you should see a response from the server.
So your whole flow goes like this:
the user comes to the frontend load balancer service > calls the sample-ui service > internally, inside the Kubernetes cluster, sample-ui calls sample-server
All the Services that you create inside Kubernetes are accessible by their name.
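Following the same logic, the CRA proxy from the question can point at the backend's Service name instead of localhost; a minimal sketch of the package.json entry (assuming the UI keeps running the dev server via yarn start):
"proxy": "http://sample-server:3001/"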

How to deploy a node.js with redis on kubernetes?

I have a very simple node.js application (HTTP service), which "talks" to redis. I want to create a deployment and run it with minikube.
From my understanding, I need a kubernetes Pod for my app, based on the docker image. Here's my Dockerfile:
FROM node:8.9.1
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD ["npm", "start"]
I build the docker image with docker build -t my-app .
Next, I created a Pod definition for my app's Pod:
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: my-app:latest
    imagePullPolicy: Never
    ports:
    - containerPort: 8080
So far, so good. But from now on, I have no clear idea how to proceed with redis:
should redis be another Pod, or a Service (in terms of Kubernetes kind)?
How do I reference redis from inside my app? Based on whether redis will be defined as a Pod/Service, how do I obtain a connection URL and port? I read about environment variables being created by Kubernetes, but I am not sure whether these work for Pods or Services.
How do I aggregate both (my app & redis) under a single configuration? How do I make sure that redis starts first, then my app (which requires a running redis instance), and how do I expose my HTTP endpoints to the "outside world"? I read about Deployments, but I am not sure how to connect these pieces together.
Ideally, I would like to have all configurations inside YAML files, so that at the end of the day the whole infrastructure could be started with a single command.
I think I figured out a solution (using a Deployment and a Service).
For my deployment, I used two containers (webapp + redis) within one Pod, since it doesn't make sense for a webapp to run without an active redis instance, and additionally it connects to redis upon application start. I could be wrong in this reasoning, so feel free to correct me if you think otherwise.
Here's my deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-deployment
spec:
  selector:
    matchLabels:
      app: my-app-deployment
  template:
    metadata:
      labels:
        app: my-app-deployment
    spec:
      containers:
      - name: redis
        image: redis:latest
        ports:
        - containerPort: 6379
        volumeMounts:
        - mountPath: /srv/www
          name: redis-storage
      - name: my-app
        image: my-app:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
      volumes:
      - name: redis-storage
        emptyDir: {}
And here's the Service definition:
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  ports:
  - port: 8080
    protocol: TCP
  type: NodePort
  selector:
    app: my-app-deployment
I create the deployment with:
kubectl create -f deployment.yaml
Then, I create the service with kubectl create -f service.yaml
I read the IP with minikube ip and extract the port from the output of kubectl describe service my-app-service.
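As a small shortcut, minikube can also print that URL in one step (the same information as minikube ip plus the NodePort):
minikube service my-app-service --url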
I agree with all of the previous answers. I'm just trying to make things simpler by executing a single command.
First, create the necessary manifests for redis in a file, say redis.yaml, along with a service to expose it outside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: redis
  labels:
    app: node-redis
spec:
  ports:
  - name: redis
    port: 6379
    targetPort: 6379
  type: NodePort
  selector:
    app: node-redis
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  selector:
    matchLabels:
      app: node-redis
  replicas: 1
  template:
    metadata:
      labels:
        app: node-redis
    spec:
      containers:
      - name: redis
        image: redis:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 6379
        # data volume where redis writes data
        volumeMounts:
        - name: data
          mountPath: /data
          readOnly: false
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: redis-data
---
# data volume
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-data
  labels:
    app: node-redis
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
Next, put the manifests for your app in another file, say my-app.yaml. Here I added the volume field so that you can use the data stored by redis.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  labels:
    app: node-redis
spec:
  containers:
  - name: my-app
    image: my-app:latest
    ports:
    - containerPort: 8080
    # data volume from which my-app reads the data written by redis
    volumeMounts:
    - name: data
      mountPath: /data
      readOnly: false
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: redis-data
Now we can use the following bash script, my-app.sh:
#!/bin/bash
kubectl create -f redis.yaml
# grab the name of the redis pod created above
pod_name=$(kubectl get po -l app=node-redis | grep redis | awk '{print $1}')
# check whether the redis server is ready or not
while true; do
  pong=$(kubectl exec "$pod_name" -c redis -- redis-cli ping)
  if [[ "$pong" == *"PONG"* ]]; then
    echo ok
    break
  fi
done
kubectl create -f my-app.yaml
Just run chmod +x my-app.sh; ./my-app.sh to deploy. To get the URL, run minikube service redis --url. You can similarly get the URL for your app. The only thing is that you need a NodePort-type Service for your app in order to access it from outside of the cluster.
So, everything is in your hands now.
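One detail the script leaves to the application itself is how my-app finds redis. A common pattern (the variable names below are just examples, not from this answer) is to pass the Service name and port to the app as environment variables in my-app.yaml:
    env:
    - name: REDIS_HOST
      value: redis      # the redis Service name, resolved by cluster DNS
    - name: REDIS_PORT
      value: "6379"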
I would run redis in a separate pod (i.e. so your web app doesn't take down the redis server if it crashes).
Here is your redis deployment & service:
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  selector:
    matchLabels:
      app: redis
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
    spec:
      volumes:
      - name: host-sys
        hostPath:
          path: /sys
      initContainers:
      - name: disable-thp
        image: redis:4.0-alpine
        volumeMounts:
        - name: host-sys
          mountPath: /host-sys
        command: ["sh", "-c", "echo never > /host-sys/kernel/mm/transparent_hugepage/enabled"]
      containers:
      - name: redis
        image: redis:4.0-alpine
        imagePullPolicy: IfNotPresent
        resources:
          requests:
            cpu: 350m
            memory: 1024Mi
        ports:
        - containerPort: 6379
service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: redis
  labels:
    app: redis
spec:
  ports:
  - port: 6379
    name: redis
  selector:
    app: redis
Since we've exposed a Kubernetes Service, you can then access your redis instance by hostname, or its "service name", which is redis.
You can check out my kubernetes redis repository at https://github.com/mateothegreat/k8-byexamples-redis. You can simply run make install if you want the easier route.
Good luck and if you're still stuck please reach out!
Yes, you need a separate Deployment and Service for redis.
Use Kubernetes service discovery; it is built in (KubeDNS, CoreDNS).
Use readiness and liveness probes (a sketch follows below).
Yes, you can write a single big YAML file to describe all the Deployments and Services, then:
kubectl apply -f yourfile.yml
or you can place the YAML in separate files and then do:
kubectl apply -f dir/
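A minimal sketch of what such probes could look like on the redis container (the timings are illustrative only):
readinessProbe:
  exec:
    command: ["redis-cli", "ping"]
  initialDelaySeconds: 5
  periodSeconds: 10
livenessProbe:
  tcpSocket:
    port: 6379
  initialDelaySeconds: 15
  periodSeconds: 20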
I recommend you read the k8s docs further, but in general, regarding your questions raised above:
Yes, another Pod (with the relevant configuration) and an additional Service, depending on your use case; check this great example: https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/
Using Services; read more here: https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/
There are several ways to manage dependencies (search for deployment dependencies), but in general you can append them in the same file with a readiness endpoint and expose them using a Service; read more in the link in bullet 2, and see the sketch below.
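One simple dependency pattern along those lines (a sketch, assuming the redis Service is named redis as in the answers above) is an initContainer that blocks the app pod until redis accepts connections:
initContainers:
- name: wait-for-redis
  image: busybox:1.34
  # busybox nc exits 0 once the redis Service accepts connections on 6379
  command: ["sh", "-c", "until nc -z redis 6379; do echo waiting for redis; sleep 2; done"]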
