Fastify not working on Docker / Kubernetes - node.js

I have a very simple app that returns a "Hello World" string; it works fine locally. As you can see from the app code below, it runs on port 4000. When I create a Docker image and run a container, I can't access it from localhost:4000 on my machine, but I can see that Docker reached the node index.js command correctly and the app is running without any errors.
I also tried to deploy it to a Kubernetes cluster; when I access the load balancer IP I get ERR_EMPTY_RESPONSE. After inspecting the app through kubectl I can see that everything is running fine: the image was downloaded and the pod is running.
I'm struggling to understand what I missed and why it only works locally.
NodeJS app
import fastify from 'fastify';

const server = fastify();

server.get('/', (_request, reply) => {
  reply.status(200).send("Hello World");
});

server.listen(4000, error => {
  if (error) {
    process.exit(1);
  }
});
Dockerfile
FROM node:14.2-alpine
WORKDIR /app
COPY package.json yarn.lock /app/
RUN yarn
COPY . .
EXPOSE 4000
CMD ["node", "index.js"]
Kubernetes manifest
---
# Load balancer
apiVersion: v1
kind: Service
metadata:
  name: development-actions-lb
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-name: "development-actions-lb"
    service.beta.kubernetes.io/do-loadbalancer-algorithm: "round_robin"
spec:
  type: LoadBalancer
  selector:
    app: development-actions
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 4000
---
# Actions deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: development-actions
spec:
  replicas: 1
  selector:
    matchLabels:
      app: development-actions
  template:
    metadata:
      labels:
        app: development-actions
    spec:
      containers:
        - image: registry.digitalocean.com/myapp/my-image:latest
          name: development-actions
          ports:
            - containerPort: 4000
              protocol: TCP
      imagePullSecrets:
        - name: registry-myapp

First, when I tried your code, I ran it using local Docker and the behavior was just the same, so I expected the cause to be that Fastify by default listens only on localhost:
docker build -t development-actions:latest .
docker run -it -p 4000:4000 development-actions:latest
Inside Docker you should explicitly specify '0.0.0.0', since by default Fastify listens only on the localhost 127.0.0.1 interface. To listen on all available IPv4 interfaces, the example should be modified to listen on 0.0.0.0, so I changed it to the following:
const server = require('fastify')({ logger: true });

server.get('/', (_request, reply) => {
  reply.status(200).send("Hello World");
});

server.listen(4000, '0.0.0.0', error => {
  if (error) {
    process.exit(1);
  }
});
The rest should be the same. To try it locally, use the same docker build and docker run commands shown above.
Reference:
https://www.fastify.io/docs/latest/Getting-Started/#your-first-server
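Note: on Fastify v4 and later the positional listen(port, host, callback) signature was removed in favor of an options object. A minimal sketch of the same fix, assuming Fastify v4:
const server = require('fastify')({ logger: true });

server.get('/', (_request, reply) => {
  reply.status(200).send("Hello World");
});

// host: '0.0.0.0' binds all IPv4 interfaces, which is what Docker/Kubernetes need
server.listen({ port: 4000, host: '0.0.0.0' }, error => {
  if (error) {
    server.log.error(error);
    process.exit(1);
  }
});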

Related

Getting Connection refused while trying to access service from kubernetes pod

I am new to Kubernetes and I am trying to learn it by deploying a simple node server using AWS EKS. (kubectl is already set up to talk to the created AWS EKS cluster.)
Here is code for my simple node file (server.js)
const express = require('express')
const app = express()
const port = 8080

app.get('/', (req, res) => {
  res.send('Hello World!')
})

app.listen(port, () => {
  console.log(`Example app listening at http://localhost:${port}`)
})
Here is what the Dockerfile looks like:
FROM node:12
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
COPY package*.json ./
RUN npm ci
# Bundle app source
COPY . .
EXPOSE 8080
CMD [ "node", "server.js" ]
I am able to run the above server in my local by creating a docker image.
Now, in order to deploy this server here are the steps that I followed:
First, I pushed the above image to Docker Hub (aroraankit7/simple-server).
Second, I created a deployment.yaml file which looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple-server-app
  labels:
    app: simple-server-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: simple-server-app
  template:
    metadata:
      labels:
        app: simple-server-app
    spec:
      containers:
        - name: simple-server
          image: aroraankit7/simple-server:v1
          ports:
            - containerPort: 8080
Third, I deployed this using kubectl apply command. Here is the output for kubectl get pods
Then, I created the service.yaml file. Here is how it looks:
apiVersion: v1
kind: Service
metadata:
  name: simple-server-svc
  labels:
    run: simple-server
spec:
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
  selector:
    run: simple-server
I then deployed this using the kubectl apply command. Output for kubectl describe services:
Next, I logged into one of my pods using the command: kubectl exec -it simple-server-app-758dfb88f4-4cfmp -- bash
While inside this pod, I ran the following the command: curl http://simple-server-svc:8080 and this was the output that I got: curl: (7) Failed to connect to simple-server-svc port 8080: Connection refused
Why is the connection getting refused?
When I run curl http://localhost:8080, I get the right output (Hello World! in this case).
Your service is not bound to the deployment. You need to modify the selector in your service.yaml to the following:
apiVersion: v1
kind: Service
metadata:
  name: simple-server-svc
  labels:
    run: simple-server
spec:
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
  selector:
    app: simple-server-app
You can use the kubectl expose command to avoid mistakes like this; see the example below.
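For example, a sketch using the names from the question; kubectl expose copies the deployment's selector into the generated service, so the labels cannot mismatch:
kubectl expose deployment simple-server-app --name=simple-server-svc --port=8080 --target-port=8080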

Ports not receiving fetch request in k8s

I have a k8s cluster running locally on an arm Mac. I have client and server pods. The client is a React frontend. The server is an express server connecting to a mongodb Atlas cluster.
So far the images build fine and all pods are running.
The problem is the internal port routing is not working. All I see is
GET http://localhost:5000/ net::ERR_CONNECTION_REFUSED
And the referrer policy in the networking tab suggests a CORS error:
Referrer Policy: strict-origin-when-cross-origin
I'm not sure what I need to do to get my client to fetch on a certain port. So far I have this, which works outside of k8s:
const getAllUsers = () => {
  fetch("http://localhost:5000/")
    .then((res) => res.text())
    .then((res) => {
      return setUsers(JSON.parse(res));
    });
};
The server has this code to handle the request:
app.get("/", (req, res) => {
  usersCollection
    .find()
    .toArray()
    .then((results) => {
      res.json(results);
    })
    .catch((error) => console.error(error));
});
My server cluster ip service is like this:
apiVersion: v1
kind: Service
metadata:
  name: server-cluster-ip-service
spec:
  type: ClusterIP
  selector:
    component: server
  ports:
    - port: 5000
      targetPort: 5000
And my server deployment is like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      component: server
  template:
    metadata:
      labels:
        component: server
    spec:
      containers:
        - name: server
          image: mydocker/k8s-server
          ports:
            - containerPort: 5000
          env:
            - name: REDIS_HOST
              value: redis-cluster-ip-service
            - name: REDIS_PORT
              value: "6379"
When I log out the server deployment logs I get:
> express_mongodb#1.0.0 dev /app
> nodemon server.js
[nodemon] 2.0.12
[nodemon] to restart at any time, enter `rs`
[nodemon] watching path(s): *.*
[nodemon] watching extensions: js,mjs,json
[nodemon] starting `node server.js`
Connected to Database
listening on 5000
Which is great - it's listening on the port I specified with
app.listen(PORT, function () {
  console.log("listening on 5000");
});
There's something I'm not getting here. The first thing I want to do is make sure my server is connected to my client on port 5000 - what am I doing wrong?
EDIT: After a long time looking at CORS error fixes, maybe it isn't a CORS error? I curl the service ip with:
curl my.ip.##.##
Then try:
curl localhost:5000
But the requests time out.
I use kubectl describe service server-cluster-ip-service
and get back
Name: server-cluster-ip-service
Namespace: default
Labels: <none>
Annotations: <none>
Selector: component=server
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.###.##.##
IPs: 10.###.##.##
Port: <unset> 5000/TCP
TargetPort: 5000/TCP
Endpoints: 172.##.#.##:5000,172.##.#.##,172.##.#.##
Session Affinity: None
Events: <none>
Ideally, you should use the service name to communicate between services inside the cluster.
If your React frontend wants to talk to the Express server, you have to use the Express service's name as the host in the React service; Kubernetes will manage the resolution automatically.
So for you, it will be server-cluster-ip-service:
const getAllUsers = () => {
  fetch("http://server-cluster-ip-service:5000/")
    .then((res) => res.text())
    .then((res) => {
      return setUsers(JSON.parse(res));
    });
};
Just as all services on a single machine or host can talk to each other over localhost, services in a Kubernetes cluster use the service name to resolve each other.
Reference document, DNS for Services and Pods: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
Your Express pod or application will listen on 0.0.0.0 port 5000, and there will be one Kubernetes service, as you have created now, with target port 5000.
Your client or any other application inside the cluster will call this service by its service name, and the request will be redirected to the container (pod) of the Express server. You can verify this as shown below.
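To check the in-cluster DNS path without touching the frontend code, you can curl the service by name from inside any running pod (the pod name here is a placeholder):
kubectl exec -it <some-pod> -- curl http://server-cluster-ip-service:5000/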

How to route all traffic from a container through another container in the same Kubernetes pod?

I'm creating a web application that comprises a React frontend and a node.js (express) server. The frontend makes an internal api call to the express server and the express server then makes an external api call to gather some data. The frontend and the server are in different containers within the same Kubernetes pod.
The frontend service is an nginx:1.14.0-alpine image. The static files are built (npm build) in a CI pipeline and the build directory is copied to the image during docker build. The package.json contains a proxy key, "proxy": "http://localhost:8080", that routes traffic from the app to localhost:8080 - which is the port that the express server is listening on for an internal api call. I think the proxy key will have no bearing once the files are packaged into static files and served up onto an nginx image?
When running locally, i.e. running npm start instead of npm build, this all works. The express server picks up the api requests sent out by the frontend on port 8080.
The express server is a simple service that adds authentication to the api call that the frontend makes, that is all. But the authentication relies on secrets as environment variables, making them incompatible with React. The server is started by running node server.js; locally the server service successfully listens (app.listen(8080)) to the api calls from the React frontend, adds some authentication to the request, then makes the request to the external api and passes the response back to the frontend once it is received.
In production, in a Kubernetes pod, things aren't so simple. The traffic from the React frontend proxying through the node server needs to be handled by kubernetes now, and I haven't been able to figure it out.
It may be important to note that there are no circumstances in which the frontend will make any external api calls directly, they will all go through the server.
React frontend Dockerfile
FROM nginx:1.14.0-alpine
# Copy static files
COPY client/build/ /usr/share/nginx/html/
# The rest has been redacted for brevity but is just copying of favicons etc.
Express Node Server
FROM node:10.16.2-alpine
# Create app directory
WORKDIR /app
# Install app dependencies
COPY server/package*.json .
RUN npm install
EXPOSE 8080
CMD [ "node", "server.js" ]
Kubernetes Manifest - Redacted for brevity
apiVersion: apps/v1beta1
kind: Deployment
containers:
  - name: frontend
    image: frontend-image:1.0.0
    imagePullPolicy: IfNotPresent
    ports:
      - name: http
        containerPort: 80
    volumeMounts:
      - mountPath: /etc/nginx/conf.d/default.conf
        name: config-dir
        subPath: my-config.conf
  - name: server
    image: server-image:1.0.0
    imagePullPolicy: IfNotPresent
volumes:
  - name: config-tmpl
    configMap:
      name: app-config
      defaultMode: 0744
  - name: my-config-directory
    emptyDir: {}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  namespace: my-namespace
data:
  my-conf.conf: |-
    server {
      listen 80;
      server_name _;
      location api/ {
        proxy_pass http://127.0.0.1:8080/;
      }
    .....
In Kubernetes, the pod shares the same network interface among all containers inside it, so for the frontend container localhost:8080 is the backend, and for the backend container localhost:80 is the frontend.
As for any containerized application, you should ensure that they are listening on interfaces other than 127.0.0.1 if you want traffic from outside.
Migrating an application from a single server, where everything talks over 127.0.0.1, to a pod was intended to be as simple as on a dedicated machine.
Your nginx.conf looks a little bit strange: it should be location /api/ {.
Here is a functional example:
nginx.conf
server {
    server_name localhost;
    listen 0.0.0.0:80;

    error_page 500 502 503 504 /50x.html;

    location / {
        root html;
    }

    location /api/ {
        proxy_pass http://127.0.0.1:8080/;
    }
}
Create a ConfigMap from it:
kubectl create configmap nginx --from-file=nginx.conf
app.js
const express = require('express')
const app = express()
const port = 8080
app.get('/', (req, res) => res.send('Hello from Express!'))
app.listen(port, () => console.log(`Example app listening on port ${port}!`))
Dockerfile
FROM alpine
RUN apk add nodejs npm && mkdir -p /app
COPY . /app
WORKDIR /app
RUN npm install express --save
EXPOSE 8080
CMD node app.js
You can build this image or use the one I've made hectorvido/express.
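If you build it yourself, the tag is arbitrary; for example:
docker build -t hectorvido/express .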
Then, create the pod YAML definition:
pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: front-back
  labels:
    app: front-back
spec:
  containers:
    - name: front
      image: nginx:alpine
      volumeMounts:
        - name: nginx-conf
          mountPath: /etc/nginx/conf.d/
      ports:
        - containerPort: 80
    - name: back
      image: hectorvido/express
      ports:
        - containerPort: 8080
  volumes:
    - name: nginx-conf
      configMap:
        name: nginx
Put on the cluster:
kubectl create -f pod.yml
Get the IP:
kubectl get pods -o wide
I tested with Minikube, so since the pod IP was 172.17.0.7 I had to do:
minikube ssh
curl -L 172.17.0.7/api
If you had an ingress in front, it should still work. I enabled the nginx ingress controller on minikube, so we need to create a service and an ingress:
service
kubectl expose pod front-back --port 80
ingress.yml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: front-back
spec:
  rules:
    - host: fb.192-168-39-163.nip.io # minikube ip
      http:
        paths:
          - path: /
            backend:
              serviceName: front-back
              servicePort: 80
The test still works:
curl -vL http://fb.192-168-39-163.nip.io/api/

Kubernetes NodePort is not accessible outside. Connection refused

Hi, I am trying to learn Kubernetes.
What I am trying to do is use minikube, and the following is what I did:
1.) Write a simple server using Node
2.) Write a Dockerfile for that particular Node server
3.) Create a kubernetes deployment
4.) Create a service (of type ClusterIP)
5.) Create a service (of type NodePort) to expose the container so I can access from outside (browser, curl)
But when I try to connect to the NodePort in the format <NodeIP>:<NodePort>, it gives an error: Failed to connect to 192.168.39.192 port 80: Connection refused
These are the files I created as steps mentioned above (1-5).
1.) server.js - here I have only included server.js; the relevant package.json exists and works as expected when I run the server locally (without deploying it in Docker). I mention this in case you ask whether my server works correctly - yes, it does :)
'use strict';

const express = require('express');

// Constants
const PORT = 8080;
const HOST = '0.0.0.0';

// App
const app = express();
app.get('/', (req, res) => {
  res.send('Hello world\n');
});

app.listen(PORT, HOST);
console.log(`Running on http://${HOST}:${PORT}`);
2.) Dockerfile
FROM node:10
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm#5+)
COPY package*.json ./
RUN npm install
# If you are building your code for production
# RUN npm ci --only=production
# Bundle app source
COPY . .
EXPOSE 8080
CMD [ "npm", "start" ]
3.) deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-web-app
spec:
  replicas: 2
  selector:
    matchLabels:
      name: node-web-app
  template:
    metadata:
      labels:
        # you can specify any labels you want here
        name: node-web-app
    spec:
      containers:
        - name: node-web-app
          # image must be the same as you built before (name:tag)
          image: banuka/node-web-app
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
          imagePullPolicy: Never
      terminationGracePeriodSeconds: 60
4.) clusterip.yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    # these labels can be anything
    name: node-web-app-clusterip
  name: node-web-app-clusterip
spec:
  selector:
    app: node-web-app
  ports:
    - protocol: TCP
      port: 80
      # target is the port exposed by your containers (in our example 8080)
      targetPort: 8080
5.) NodePort.yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    name: node-server-nodeport
  name: node-server-nodeport
spec:
  # this will make the service a NodePort service
  type: NodePort
  selector:
    app: node-app-web
  ports:
    - protocol: TCP
      # new -> this will be the port used to reach it from outside
      # if not specified, a random port will be used from a specific range (default: 30000-32767)
      nodePort: 32555
      port: 80
      targetPort: 8080
And when I try to curl from outside or use my web browser, I get the following error:
curl: (7) Failed to connect to 192.168.39.192 port 32555: Connection refused
ps: pods and containers are also working as expected.
There are several possible reasons for this.
First: Are you using your local IP or the IP where the minikube VM is running? To verify use minikube ip.
Second: The NodePort service wants to use pods with label app: node-app-web, but your pods only have the label name: node-web-app
Just to make sure the port that you assume is used, check with minikube service list that the requested port was allocated. Check your firewall settings as well.
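For example (the IP and allocated port will differ on your machine):
minikube ip
# e.g. 192.168.39.192
minikube service list
# confirms which port node-server-nodeport was given
curl http://192.168.39.192:32555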
I always had this same problem when I wrote wrong selectors into a NodePort service spec.
The Service's selector must match the Pod's labels.
In your NodePort.yaml the selector is app: node-app-web, while in deployment.yaml the pod label is name: node-web-app; see the corrected spec below.
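For reference, a sketch of the NodePort service with the selector matching the pod template's actual label (name: node-web-app):
kind: Service
apiVersion: v1
metadata:
  name: node-server-nodeport
spec:
  type: NodePort
  selector:
    # must match the label in the deployment's pod template
    name: node-web-app
  ports:
    - protocol: TCP
      nodePort: 32555
      port: 80
      targetPort: 8080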

Setting up a React application and NodeJS backend in Kubernetes?

I am trying to set up a sample React application wired to a NodeJS backend as two pods in Kubernetes. This is (mostly) the default CRA and NodeJS application with Express, i.e. npx create-react-app my_app.
Both applications run fine locally through yarn start and node app.js respectively. The React application uses a proxy defined in package.json to communicate with the NodeJS back-end.
React package.json
...
"proxy": "http://localhost:3001/"
...
React Dockerfile
FROM node:10
WORKDIR /usr/src/app
COPY package*.json ./
RUN yarn
COPY . .
CMD [ "yarn", "start" ]
NodeJS Dockerfile
FROM node:10
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3001
CMD [ "node", "app.js" ]
ui-deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: sample-ui
  namespace: my_namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my_namespace
      component: sample-ui
  template:
    metadata:
      labels:
        app: my_namespace
        component: sample-ui
    spec:
      containers:
        - name: sample-ui
          image: xxx
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 3000
              name: http
              protocol: TCP
          resources:
            limits:
              cpu: 100m
              memory: 128Mi
            requests:
              cpu: 100m
              memory: 128Mi
server-deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: sample-server
  namespace: my_namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my_namespace
      component: sample-server
  template:
    metadata:
      labels:
        app: my_namespace
        component: sample-server
    spec:
      containers:
        - name: sample-server
          image: xxx
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 3001
              name: http
              protocol: TCP
          resources:
            limits:
              cpu: 100m
              memory: 128Mi
            requests:
              cpu: 100m
              memory: 128Mi
ui-service
apiVersion: v1
kind: Service
metadata:
  name: sample-ui
  namespace: my_namespace
  labels: {app: sample-ui}
spec:
  type: LoadBalancer
  selector:
    component: sample-ui
  ports:
    - name: listen
      protocol: TCP
      port: 3000
server-service
apiVersion: v1
kind: Service
metadata:
  name: sample-server
  namespace: my_namespace
  labels: {app: sample-server}
spec:
  selector:
    component: sample-server
  ports:
    - name: listen
      protocol: TCP
      port: 3001
Both services run fine on my system.
get svc
sample-server ClusterIP 10.19.255.171 <none> 3001/TCP 26m
sample-ui LoadBalancer 10.19.242.42 34.82.235.125 3000:31074/TCP 26m
However, my deployment for the CRA crashes multiple times despite indicating it is still running.
get pods
sample-server-598776c5fc-55jsz 1/1 Running 0 42m
sample-ui-c75ccb746-qppk2 1/1 Running 4 2m38s
I suspect that my React Dockerfile is improperly configured but I'm not sure how to write it to work with a NodeJS backend in kubernetes.
a) How can I setup my Dockerfile for my CRA such that it will run in a pod?
b) How can I setup my docker services and pods such that they communicate?
You will have to use some API gateway in front of your server, or you can use Ambassador for Kubernetes.
Then you can get your client connected to the server.
a) How can I setup my Dockerfile for my CRA such that it will run in a
pod?
Your React Dockerfile looks good; you need to check why the pod's container is failing.
Use kubectl describe pod <POD name>, or debug further with kubectl logs <pod name>; see the example below.
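Since the pod shows several restarts, the previous (crashed) container's logs are usually the most telling; for example, with the pod name from the question:
kubectl describe pod sample-ui-c75ccb746-qppk2
kubectl logs sample-ui-c75ccb746-qppk2 --previous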
How can I setup my docker services and pods such that they
communicate?
For this, you are on the right track: the server and frontend communicate in Kubernetes using the service name.
This might seem weird at first, but Kubernetes DNS takes care of it.
Say you have two services, a frontend (sample-ui) and a backend (sample-server):
sample-ui will send its requests to sample-server, and they get connected that way.
You can also try this by going inside the sample-ui pod (container):
kubectl exec -it sample-ui-c75ccb746-qppk2 -- /bin/bash
Now you are inside the sample-ui container; let's send a request to sample-server from here.
If curl is not present, you can install it with apk add curl, apt-get install curl, or yum install curl.
curl http://sample-server:3001
Magic: you should see a response from the server.
So your whole flow goes like this:
the user comes in via the frontend load balancer service > hits the sample-ui service > internally, inside the Kubernetes cluster, sample-ui calls sample-server.
All the services that you create inside K8s are accessible by their names.
