Getting Connection refused while trying to access service from kubernetes pod - node.js

I am new to Kubernetes and I am trying to learn it by deploying a simple Node server using AWS EKS. (kubectl is already set up to talk to the created AWS EKS cluster.)
Here is the code for my simple Node file (server.js):
const express = require('express')
const app = express()
const port = 8080
app.get('/', (req, res) => {
  res.send('Hello World!')
})
app.listen(port, () => {
  console.log(`Example app listening at http://localhost:${port}`)
})
Here is what the Dockerfile looks like:
FROM node:12
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
COPY package*.json ./
RUN npm ci
# Bundle app source
COPY . .
EXPOSE 8080
CMD [ "node", "server.js" ]
I am able to run the above server locally by building and running a Docker image, roughly as sketched below.
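(A rough sketch of that local test; the tag assumes the same image name that is pushed to Docker Hub later, so exact names may differ:)
docker build -t aroraankit7/simple-server:v1 .
docker run -p 8080:8080 aroraankit7/simple-server:v1
# in another terminal:
curl http://localhost:8080    # -> Hello World!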
Now, in order to deploy this server here are the steps that I followed:
First, I pushed the above image to Docker Hub (aroraankit7/simple-server).
Second, I created a deployment.yaml file which looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple-server-app
  labels:
    app: simple-server-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: simple-server-app
  template:
    metadata:
      labels:
        app: simple-server-app
    spec:
      containers:
      - name: simple-server
        image: aroraankit7/simple-server:v1
        ports:
        - containerPort: 8080
Third, I deployed this using the kubectl apply command. Here is the output for kubectl get pods:
Then, I created the service.yaml file. Here is how it looks:
apiVersion: v1
kind: Service
metadata:
  name: simple-server-svc
  labels:
    run: simple-server
spec:
  ports:
  - port: 8080
    targetPort: 8080
    protocol: TCP
  selector:
    run: simple-server
I then deployed this using the kubectl apply command. Output for kubectl describe services:
Next, I logged into one of my pods using the command: kubectl exec -it simple-server-app-758dfb88f4-4cfmp -- bash
While inside this pod, I ran the following command: curl http://simple-server-svc:8080 and this was the output that I got: curl: (7) Failed to connect to simple-server-svc port 8080: Connection refused
Why is the connection getting refused?
When I run curl http://localhost:8080, I get the right output (Hello World! in this case).

Your Service is not bound to the Deployment: its selector (run: simple-server) does not match the Pod label (app: simple-server-app). You need to modify the selector in your service.yaml to the following:
apiVersion: v1
kind: Service
metadata:
  name: simple-server-svc
  labels:
    run: simple-server
spec:
  ports:
  - port: 8080
    targetPort: 8080
    protocol: TCP
  selector:
    app: simple-server-app
You can use the kubectl expose command to avoid mistakes like this, since it derives the Service selector from the Deployment for you.
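For example, something along these lines should produce an equivalent, correctly-selected Service (names and ports taken from the question; adjust as needed):
kubectl expose deployment simple-server-app --name=simple-server-svc --port=8080 --target-port=8080
# then confirm the Service actually has endpoints backing it:
kubectl get endpoints simple-server-svc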

Related

Fastify not working on Docker / Kubernetes

I have a very simple app that returns a "Hello World" string; it works fine locally. As you will see from the app code below, it runs on port 4000. When I create a Docker image and run a container, I can't access it from localhost:4000 on my machine, but I can see that Docker ran the node index.js command correctly and the app is running without any errors.
I also tried to deploy it to a Kubernetes cluster; when I access the load balancer IP I get ERR_EMPTY_RESPONSE. After inspecting the app through kubectl I can see that everything is running fine, the image was downloaded and the pod is running.
I'm struggling to understand what I missed and why it only works locally.
NodeJS app
import fastify from 'fastify';
const server = fastify();
server.get('/', (_request, reply) => {
  reply.status(200).send("Hello World");
});
server.listen(4000, error => {
  if (error) {
    process.exit(1);
  }
});
Dockerfile
FROM node:14.2-alpine
WORKDIR /app
COPY package.json yarn.lock /app/
RUN yarn
COPY . .
EXPOSE 4000
CMD ["node", "index.js"]
Kubernetes manifest
---
# Load balancer
apiVersion: v1
kind: Service
metadata:
  name: development-actions-lb
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-name: "development-actions-lb"
    service.beta.kubernetes.io/do-loadbalancer-algorithm: "round_robin"
spec:
  type: LoadBalancer
  selector:
    app: development-actions
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 4000
---
# Actions deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: development-actions
spec:
  replicas: 1
  selector:
    matchLabels:
      app: development-actions
  template:
    metadata:
      labels:
        app: development-actions
    spec:
      containers:
        - image: registry.digitalocean.com/myapp/my-image:latest
          name: development-actions
          ports:
            - containerPort: 4000
              protocol: TCP
      imagePullSecrets:
        - name: registry-myapp
First, when I tried your code, I ran it using local Docker and the behavior was just the same, so I expect it to be because fastify by default only listens on localhost.
docker build -t development-actions:latest .
docker run -it -p 4000:4000 development-actions:latest
Inside Docker you should mention '0.0.0.0' explicitly, since by default fastify listens only on the localhost 127.0.0.1 interface. To listen on all available IPv4 interfaces the example should be modified to listen on 0.0.0.0, so I changed it to the following:
const server = require('fastify')({ logger: true })
server.get('/', (_request, reply) => {
  reply.status(200).send("Hello World");
});
server.listen(4000, '0.0.0.0', error => {
  if (error) {
    process.exit(1);
  }
});
The rest should be the same. To try it locally you can use the same docker commands as before.
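For example (a sketch that just repeats the build/run from above and adds a curl check to confirm the fix):
docker build -t development-actions:latest .
docker run -it -p 4000:4000 development-actions:latest
# in a second terminal:
curl http://localhost:4000    # should now return Hello World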
Reference:
https://www.fastify.io/docs/latest/Getting-Started/#your-first-server

Azure kubernetes service loadbalancer external IP not accessible

I am new to the world of Kubernetes and was testing a sample Django "Hello world" app deployment. Using docker-compose I was able to access the hello world page in a browser, but I need to use Kubernetes. So I tested two options and neither of them worked.
1) I created an Azure CI/CD pipeline to build and push the image to ACR using the following Dockerfile:
FROM python:3.8
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN mkdir /hello_world
WORKDIR /hello_world
COPY . /hello_world/
RUN pip install -r requirements.txt
CMD [ "python", "manage.py", "runserver", "0.0.0.0:8000" ]
The pipeline completes successfully and uploads the image in the repository.
Now I use kubectl to deploy using the deployment file,
apiVersion: apps/v1
kind: Deployment
metadata:
  name: django-helloworld
spec:
  replicas: 3
  selector:
    matchLabels:
      app: django-helloworld
  template:
    metadata:
      labels:
        app: django-helloworld
    spec:
      containers:
        - name: django-helloworld
          image: acrshgpdev1.azurecr.io/django-helloworld:194
          #imagePullPolicy: Always
          ports:
            - containerPort: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: django-helloworld-service
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: django-helloworld
The deployment and service are created but when I try to access the external IP of the LB service through a browser the page is inaccessible. I used the external ip:port and it didn't work.
Any thoughts on why this would be happening?
2) I used the same Dockerfile but a different deployment file (changed the image to the locally created image and removed the LB service) to deploy the app to my local Kubernetes. The deployment file was as follows:
apiVersion: v1
kind: Service
metadata:
  name: django-helloworld-service
spec:
  selector:
    app: django-helloworld
  ports:
    - protocol: TCP
      port: 80
      targetPort: 30800
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: django-helloworld
spec:
  replicas: 3
  selector:
    matchLabels:
      app: django-helloworld
  template:
    metadata:
      labels:
        app: django-helloworld
    spec:
      containers:
        - name: django-helloworld
          image: django-helloworld:1.0
          #imagePullPolicy: Always
          ports:
            - containerPort: 8000
It creates the deployment and service, but it doesn't assign an external IP to the NodePort service, so I am not able to figure out which service I should use to test that the app works. I know I can't choose a LoadBalancer, as that doesn't work locally and would need a cloud deployment.
Just configure your service to be of type LoadBalancer and do a proper port mapping:
apiVersion: v1
kind: Service
metadata:
  name: django-helloworld-service
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8000
  selector:
    app: django-helloworld
https://kubernetes.io/docs/concepts/services-networking/service/
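Once applied, a rough way to verify it (service name from the manifest above; the external IP can take a couple of minutes to be assigned):
kubectl get svc django-helloworld-service -w    # wait for EXTERNAL-IP to be populated
curl http://<EXTERNAL-IP>/                      # port 80 forwards to containerPort 8000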
Make sure the deployment has associated healthy pods too (they show as Running and with 1/1 next to their name). If there aren't any, make sure your cluster can successfully pull from the acrshgpdev1.azurecr.io registry; you can integrate an AKS cluster directly with an ACR registry following this article:
az aks update -n myAKSCluster -g myResourceGroup --attach-acr acrshgpdev1.azurecr.io
or by manually adding the service principal (SP) of the AKS cluster to the Reader role on the ACR.
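To check the pull status first, the usual diagnostics look roughly like this (the label comes from the manifests above; <pod-name> is whatever kubectl get pods prints):
kubectl get pods -l app=django-helloworld
kubectl describe pod <pod-name>    # look for ImagePullBackOff / ErrImagePull in the Events section
kubectl get events --sort-by=.metadata.creationTimestamp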

Kubernetes NodePort is not accessible outside. Connection refused

Hi, I am trying to learn Kubernetes.
I am trying to do this using minikube, and the following is what I did:
1.) Write a simple server using Node
2.) Write a Dockerfile for that particular Node server
3.) Create a kubernetes deployment
4.) Create a service (of type ClusterIP)
5.) Create a service (of type NodePort) to expose the container so I can access from outside (browser, curl)
But when I try to connect to the NodePort in the format <NodeIP>:<NodePort>, it gives the error Failed to connect to 192.168.39.192 port 80: Connection refused.
These are the files I created in the steps mentioned above (1-5).
1.) server.js - I have only included server.js here; the relevant package.json exists, and everything works as expected when I run the server locally (without deploying it in Docker). I mention this in case you want to ask whether my server works correctly: yes, it does :)
'use strict';
const express = require('express');
// Constants
const PORT = 8080;
const HOST = '0.0.0.0';
// App
const app = express();
app.get('/', (req, res) => {
  res.send('Hello world\n');
});
app.listen(PORT, HOST);
console.log(`Running on http://${HOST}:${PORT}`);
2.) Dockerfile
FROM node:10
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm#5+)
COPY package*.json ./
RUN npm install
# If you are building your code for production
# RUN npm ci --only=production
# Bundle app source
COPY . .
EXPOSE 8080
CMD [ "npm", "start" ]
3.) deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-web-app
spec:
  replicas: 2
  selector:
    matchLabels:
      name: node-web-app
  template:
    metadata:
      labels:
        # you can specify any labels you want here
        name: node-web-app
    spec:
      containers:
        - name: node-web-app
          # image must be the same as you built before (name:tag)
          image: banuka/node-web-app
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
          imagePullPolicy: Never
      terminationGracePeriodSeconds: 60
4.) clusterip.yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    # these labels can be anything
    name: node-web-app-clusterip
  name: node-web-app-clusterip
spec:
  selector:
    app: node-web-app
  ports:
    - protocol: TCP
      port: 80
      # target is the port exposed by your containers (in our example 8080)
      targetPort: 8080
5.) NodePort.yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    name: node-server-nodeport
  name: node-server-nodeport
spec:
  # this will make the service a NodePort service
  type: NodePort
  selector:
    app: node-app-web
  ports:
    - protocol: TCP
      # new -> this will be the port used to reach it from outside
      # if not specified, a random port will be used from a specific range (default: 30000-32767)
      nodePort: 32555
      port: 80
      targetPort: 8080
And when I try to curl from outside or use my web browser, it gives the following error:
curl: (7) Failed to connect to 192.168.39.192 port 32555: Connection refused
ps: pods and containers are also working as expected.
There are several possible reasons for this.
First: Are you using your local IP or the IP where the minikube VM is running? To verify, use minikube ip.
Second: The NodePort service wants to select pods with the label app: node-app-web, but your pods only have the label name: node-web-app.
To make sure the port you expect is actually in use, check with minikube service list that the requested port was allocated. Check your firewall settings as well.
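A quick way to check all three points (names taken from the manifests in the question):
minikube ip                                   # this is the IP to curl, not your host's IP
minikube service list                         # confirms which NodePort was actually allocated
kubectl get pods --show-labels                # shows name=node-web-app on the pods
kubectl get endpoints node-server-nodeport    # empty if the Service selector matches nothing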
I always had the same problem when I wrote the wrong selectors into a NodePort service spec.
The Service's selector must match the Pod's label.
In your NodePort.yaml the selector is app: node-app-web, while in deployment.yaml the label is name: node-web-app.
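A sketch of how to see the mismatch and confirm the fix (service and label names are the ones from the question):
kubectl get endpoints node-server-nodeport    # shows <none> while the selector is wrong
# after changing the selector to name: node-web-app and re-applying:
kubectl apply -f NodePort.yaml
kubectl get endpoints node-server-nodeport    # should now list the pod IPs on port 8080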

Setting up a React application and NodeJS backend in Kubernetes?

I am trying to set up a sample React application wired to a NodeJS backend as two pods in Kubernetes. This is (mostly) the default CRA and NodeJS application with Express, i.e. npx create-react-app my_app.
Both applications run fine locally through yarn start and node app.js respectively. The React application uses a proxy defined in package.json to communicate with the NodeJS back-end.
React package.json
...
"proxy": "http://localhost:3001/"
...
React Dockerfile
FROM node:10
WORKDIR /usr/src/app
COPY package*.json ./
RUN yarn
COPY . .
CMD [ "yarn", "start" ]
NodeJS Dockerfile
FROM node:10
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3001
CMD [ "node", "app.js" ]
ui-deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: sample-ui
  namespace: my_namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my_namespace
      component: sample-ui
  template:
    metadata:
      labels:
        app: my_namespace
        component: sample-ui
    spec:
      containers:
        -
          name: sample-ui
          image: xxx
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 3000
              name: http
              protocol: TCP
          resources:
            limits:
              cpu: 100m
              memory: 128Mi
            requests:
              cpu: 100m
              memory: 128Mi
server-deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: sample-server
  namespace: my_namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my_namespace
      component: sample-server
  template:
    metadata:
      labels:
        app: my_namespace
        component: sample-server
    spec:
      containers:
        -
          name: sample-server
          image: xxx
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 3001
              name: http
              protocol: TCP
          resources:
            limits:
              cpu: 100m
              memory: 128Mi
            requests:
              cpu: 100m
              memory: 128Mi
ui-service
apiVersion: v1
kind: Service
metadata:
  name: sample-ui
  namespace: my_namespace
  labels: {app: sample-ui}
spec:
  type: LoadBalancer
  selector:
    component: sample-ui
  ports:
  - name: listen
    protocol: TCP
    port: 3000
server-service
apiVersion: v1
kind: Service
metadata:
  name: sample-server
  namespace: my_namespace
  labels: {app: sample-server}
spec:
  selector:
    component: sample-server
  ports:
  - name: listen
    protocol: TCP
    port: 3001
Both services run fine on my system.
get svc
sample-server ClusterIP 10.19.255.171 <none> 3001/TCP 26m
sample-ui LoadBalancer 10.19.242.42 34.82.235.125 3000:31074/TCP 26m
However, my deployment for the CRA crashes multiple times despite indicating it is still running.
get pods
sample-server-598776c5fc-55jsz 1/1 Running 0 42m
sample-ui-c75ccb746-qppk2 1/1 Running 4 2m38s
I suspect that my React Dockerfile is improperly configured but I'm not sure how to write it to work with a NodeJS backend in kubernetes.
a) How can I setup my Dockerfile for my CRA such that it will run in a pod?
b) How can I setup my docker services and pods such that they communicate?
You will have to use some API gateway in front of your server, or you can use Ambassador with Kubernetes.
Then you can get your client connected to the server.
a) How can I setup my Dockerfile for my CRA such that it will run in a pod?
Your React Dockerfile looks good; you need to check why the container of the pod is failing.
Use kubectl describe pod <pod name>, or get more logs using the command kubectl logs <pod name>.
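For the pod in your kubectl get pods output, that would look roughly like this (pod name and namespace taken from the question; --previous shows the log of the last crashed container):
kubectl describe pod sample-ui-c75ccb746-qppk2 -n my_namespace
kubectl logs sample-ui-c75ccb746-qppk2 -n my_namespace
kubectl logs sample-ui-c75ccb746-qppk2 -n my_namespace --previous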
b) How can I setup my docker services and pods such that they communicate?
For this, you are on the right track: the server and the frontend communicate in Kubernetes using the service name.
This might seem weird at first, but Kubernetes DNS takes care of it.
Say you have two services, frontend (sample-ui) and backend (sample-server):
sample-ui will send its requests to sample-server, and they get connected that way.
You can also try this by going inside the sample-ui pod (container):
kubectl exec -it sample-ui-c75ccb746-qppk2 -- /bin/bash
Now you are inside the sample-ui container; let's send a request to sample-server from here.
If curl does not exist, you can install it using apk add curl, apt-get install curl or yum install curl.
curl http://sample-server:3001
Magic: you should see a response from the server.
So your whole flow goes like this:
the user comes in through the frontend load balancer service > which calls the sample-ui service > and internally, inside the Kubernetes cluster, sample-ui calls sample-server.
Every service that you create inside K8s is accessible by its name.
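For completeness, both name forms below should work (my_namespace is the placeholder used in the manifests above; the short name only resolves from within the same namespace):
# from a pod in the same namespace:
curl http://sample-server:3001
# from any namespace, using the fully qualified service name:
curl http://sample-server.my_namespace.svc.cluster.local:3001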

Is it possible to debug a NodeJS app using Minikube via node-inspector?

I am running minikube on my mac for developing/testing my micro-services locally.
Is it possible to debug my NodeJS in minikube via node-inspector (other tools are also welcome)?
I saw that there is an option to use node-inspector with docker-compose, but since I am running all my services in k8s I chose Minikube.
Say you have this npm script:
"dev": "concurrently -p \"[{name}]\" -n \"NODE INSPECTOR,NODEMON\" -c \"bgBlue.bold,bgGreen.bold\" \"node-inspector --web-port=8081 --debug-port=5860 --preload\" \"cross-env NODE_ENV=development nodemon ./node_modules/babel-cli/bin/babel-node.js --max-old-space-size=512 --debug=5860 ./index.js\""
node-inspector is now running on port 8081.
Now in your kubernetes.yml you could have the following:
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: helloworld
  name: helloworld
  namespace: application
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helloworld
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
      - name: helloworld
        imagePullPolicy: Always
        image: fbgrecojr/hello-world:latest
        ports:
        - containerPort: 8080
          protocol: TCP
        - containerPort: 8081
          protocol: TCP
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: helloworld
  name: helloworld
  namespace: application
spec:
  type: NodePort
  ports:
  # with more than one port, each Service port needs a name
  - name: http
    port: 8080
    protocol: TCP
    nodePort: 30000
  - name: inspector
    port: 8081
    protocol: TCP
    nodePort: 30001
  selector:
    app: helloworld
Your app is now accessible from $(minikube ip):30000 and node-inspector is available from $(minikube ip):30001.
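A rough way to get those URLs from minikube (service name and namespace from the manifest above):
minikube service helloworld -n application --url    # prints one URL per exposed port
curl $(minikube ip):30000                           # the app itself
# open http://$(minikube ip):30001 in a browser for the node-inspector UI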
