Kubernetes NodePort is not accessible from outside. Connection refused - node.js

Hi, I am trying to learn Kubernetes.
I am doing this using minikube, and these are the steps I followed:
1.) Write a simple server using Node
2.) Write a Dockerfile for that particular Node server
3.) Create a kubernetes deployment
4.) Create a service (of type ClusterIP)
5.) Create a service (of type NodePort) to expose the container so I can access from outside (browser, curl)
But when I try to connect to the service in the format <NodeIP>:<NodePort>, it gives an error: Failed to connect to 192.168.39.192 port 80: Connection refused
These are the files I created as steps mentioned above (1-5).
1.) server.js - Here I have only shown server.js; the relevant package.json exists, and they work as expected when I run the server locally (without deploying it in Docker). I mention this in case you were going to ask whether my server works correctly: yes, it does :)
'use strict';
const express = require('express');
// Constants
const PORT = 8080;
const HOST = '0.0.0.0';
// App
const app = express();
app.get('/', (req, res) => {
  res.send('Hello world\n');
});
app.listen(PORT, HOST);
console.log(`Running on http://${HOST}:${PORT}`);
2.) Dockerfile
FROM node:10
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm#5+)
COPY package*.json ./
RUN npm install
# If you are building your code for production
# RUN npm ci --only=production
# Bundle app source
COPY . .
EXPOSE 8080
CMD [ "npm", "start" ]
3.) deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-web-app
spec:
  replicas: 2
  selector:
    matchLabels:
      name: node-web-app
  template:
    metadata:
      labels:
        # you can specify any labels you want here
        name: node-web-app
    spec:
      containers:
        - name: node-web-app
          # image must be the same as you built before (name:tag)
          image: banuka/node-web-app
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
          imagePullPolicy: Never
      terminationGracePeriodSeconds: 60
4.) clusterip.yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    # these labels can be anything
    name: node-web-app-clusterip
  name: node-web-app-clusterip
spec:
  selector:
    app: node-web-app
  ports:
    - protocol: TCP
      port: 80
      # target is the port exposed by your containers (in our example 8080)
      targetPort: 8080
5.) NodePort.yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    name: node-server-nodeport
  name: node-server-nodeport
spec:
  # this will make the service a NodePort service
  type: NodePort
  selector:
    app: node-app-web
  ports:
    - protocol: TCP
      # new -> this will be the port used to reach it from outside
      # if not specified, a random port will be used from a specific range (default: 30000-32767)
      nodePort: 32555
      port: 80
      targetPort: 8080
and when I try to curl from outside or use my web browser, it gives the following error:
curl: (7) Failed to connect to 192.168.39.192 port 32555: Connection refused
PS: the pods and containers are working as expected.

There are several possible reasons for this.
First: are you using your local IP or the IP of the VM that minikube is running on? To verify, use minikube ip.
Second: the NodePort service selects pods with the label app: node-app-web, but your pods only carry the label name: node-web-app.
To make sure the port you expect is actually in use, check with minikube service list that the requested port was allocated. Check your firewall settings as well.
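For example, assuming you keep the existing name: node-web-app label from deployment.yaml, the NodePort service's selector would need to change to match it, roughly like this (a sketch; the alternative is to relabel the pods with app: node-app-web and leave the selector as it is):
spec:
  type: NodePort
  selector:
    name: node-web-app   # must match the pod label in deployment.yaml
  ports:
    - protocol: TCP
      nodePort: 32555
      port: 80
      targetPort: 8080
Once the selector matches the pod labels, curl $(minikube ip):32555 should reach the pods.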

I always had the same problem when I wrote the wrong selectors into the NodePort service spec.

The Service's selector must match the Pod's labels.
In your NodePort.yaml the selector is app: node-app-web, while in deployment.yaml the label is name: node-web-app.

Related

Getting Connection refused while trying to access service from kubernetes pod

I am new to Kubernetes and I am trying to learn it by deploying a simple Node server using AWS EKS. (Kubernetes is already set up to talk to the created AWS EKS cluster.)
Here is code for my simple node file (server.js)
const express = require('express')
const app = express()
const port = 8080
app.get('/', (req, res) => {
  res.send('Hello World!')
})
app.listen(port, () => {
  console.log(`Example app listening at http://localhost:${port}`)
})
Here is what the Dockerfile looks like:
FROM node:12
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
COPY package*.json ./
RUN npm ci
# Bundle app source
COPY . .
EXPOSE 8080
CMD [ "node", "server.js" ]
I am able to run the above server locally by creating a Docker image.
Now, in order to deploy this server here are the steps that I followed:
First, I pushed the above image to Docker Hub (aroraankit7/simple-server)
Second, I created a deployment.yaml file which looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple-server-app
  labels:
    app: simple-server-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: simple-server-app
  template:
    metadata:
      labels:
        app: simple-server-app
    spec:
      containers:
        - name: simple-server
          image: aroraankit7/simple-server:v1
          ports:
            - containerPort: 8080
Third, I deployed this using the kubectl apply command; kubectl get pods showed the pods running.
Then, I created the service.yaml file. Here is how it looks:
apiVersion: v1
kind: Service
metadata:
  name: simple-server-svc
  labels:
    run: simple-server
spec:
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
  selector:
    run: simple-server
I then deployed this using the kubectl apply command and checked the result with kubectl describe services.
Next, I logged into one of my pods using the command: kubectl exec -it simple-server-app-758dfb88f4-4cfmp -- bash
While inside this pod, I ran the following the command: curl http://simple-server-svc:8080 and this was the output that I got: curl: (7) Failed to connect to simple-server-svc port 8080: Connection refused
Why is the connection getting refused?
When I run curl http://localhost:8080, I get the right output (Hello World! in this case).
Your service is not bound to the deployment. You need to modify the selector in your service.yaml to the following:
apiVersion: v1
kind: Service
metadata:
  name: simple-server-svc
  labels:
    run: simple-server
spec:
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
  selector:
    app: simple-server-app
You can use the kubectl expose command to avoid mistakes like this, since it derives the Service's selector from the Deployment for you.
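For example (a sketch using the Deployment name and port from above), the following creates a Service whose selector is taken from the Deployment automatically:
kubectl expose deployment simple-server-app --name=simple-server-svc --port=8080 --target-port=8080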

Fastify not working on Docker / Kubernetes

I have a very simple app that returns a "Hello World" string, and it works fine locally. As you will see from the app code below, it runs on port 4000. When I create a Docker image and run a container, I can't access it from localhost:4000 on my machine, but I can see that Docker ran the node index.js command correctly and the app is running without any errors.
I also tried to deploy it to a Kubernetes cluster; when I access the load balancer IP I get ERR_EMPTY_RESPONSE. After inspecting the app through kubectl I can see that everything looks fine: the image was downloaded and the pod is running.
I'm struggling to understand what I missed and why it only works locally.
NodeJS app
import fastify from 'fastify';
const server = fastify();
server.get('/', (_request, reply) => {
  reply.status(200).send("Hello World");
});
server.listen(4000, error => {
  if (error) {
    process.exit(1);
  }
});
Dockerfile
FROM node:14.2-alpine
WORKDIR /app
COPY package.json yarn.lock /app/
RUN yarn
COPY . .
EXPOSE 4000
CMD ["node", "index.js"]
Kubernetes manifest
---
# Load balancer
apiVersion: v1
kind: Service
metadata:
  name: development-actions-lb
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-name: "development-actions-lb"
    service.beta.kubernetes.io/do-loadbalancer-algorithm: "round_robin"
spec:
  type: LoadBalancer
  selector:
    app: development-actions
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 4000
---
# Actions deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: development-actions
spec:
  replicas: 1
  selector:
    matchLabels:
      app: development-actions
  template:
    metadata:
      labels:
        app: development-actions
    spec:
      containers:
        - image: registry.digitalocean.com/myapp/my-image:latest
          name: development-actions
          ports:
            - containerPort: 4000
              protocol: TCP
      imagePullSecrets:
        - name: registry-myapp
First, when I tried your code, I ran it using local Docker and the behavior was just the same, so I expect it to be because fastify by default only listens on localhost.
docker build -t development-actions:latest .
docker run -it -p 4000:4000 development-actions:latest
Inside Docker you should explicitly mention '0.0.0.0', since by default fastify listens only on the localhost 127.0.0.1 interface. To listen on all available IPv4 interfaces, the example should be modified to listen on 0.0.0.0, so I changed it to the following:
const server = require('fastify')({ logger: true })
server.get('/', (_request, reply) => {
  reply.status(200).send("Hello World");
});
server.listen(4000, '0.0.0.0', error => {
  if (error) {
    process.exit(1);
  }
});
The rest should be the same. To try it locally, you can use the same docker build and docker run commands shown above.
Reference:
https://www.fastify.io/docs/latest/Getting-Started/#your-first-server

How to route all traffic from a container through another container in the same Kubernetes pod?

I'm creating a web application that comprises a React frontend and a node.js (express) server. The frontend makes an internal api call to the express server and the express server then makes an external api call to gather some data. The frontend and the server are in different containers within the same Kubernetes pod.
The frontend service is an nginx:1.14.0-alpine image. The static files are built (npm build) in a CI pipeline and the build directory is copied to the image during docker build. The package.json contains a proxy key, "proxy": "http://localhost:8080", that routes traffic from the app to localhost:8080 - which is the port that the express server is listening on for an internal api call. I think the proxy key will have no bearing once the files are packaged into static files and served up onto an nginx image?
When running locally, i.e. running npm start instead of npm build, this all works. The express server picks up the api requests sent out by the frontend on port 8080.
The express server is a simple service that adds authentication to the api call that the frontend makes, that is all. But the authentication relies on secrets as environment variables, making them incompatible with React. The server is started by running node server.js; locally the server successfully listens (app.listen(8080)) for the api calls from the React frontend, adds some authentication to the request, then makes the request to the external api and passes the response back to the frontend once it is received.
In production, in a Kubernetes pod, things aren't so simple. The traffic from the React frontend proxying through the node server needs to be handled by kubernetes now, and I haven't been able to figure it out.
It may be important to note that there are no circumstances in which the frontend will make any external api calls directly, they will all go through the server.
React frontend Dockerfile
FROM nginx:1.14.0-alpine
# Copy static files
COPY client/build/ /usr/share/nginx/html/
# The rest has been redacted for brevity but is just copying of favicons etc.
Express Node Server
FROM node:10.16.2-alpine
# Create app directory
WORKDIR /app
# Install app dependencies
COPY server/package*.json .
RUN npm install
EXPOSE 8080
CMD [ "node", "server.js" ]
Kubernetes Manifest - Redacted for brevity
apiVersion: apps/v1beta1
kind: Deployment
containers:
  - name: frontend
    image: frontend-image:1.0.0
    imagePullPolicy: IfNotPresent
    ports:
      - name: http
        containerPort: 80
    volumeMounts:
      - mountPath: /etc/nginx/conf.d/default.conf
        name: config-dir
        subPath: my-config.conf
  - name: server
    image: server-image:1.0.0
    imagePullPolicy: IfNotPresent
volumes:
  - name: config-tmpl
    configMap:
      name: app-config
      defaultMode: 0744
  - name: my-config-directory
    emptyDir: {}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  namespace: my-namespace
data:
  my-conf.conf: |-
    server {
        listen 80;
        server_name _;
        location api/ {
            proxy_pass http://127.0.0.1:8080/;
        }
        .....
In Kubernetes, all containers inside a pod share the same network interface, so for the frontend container localhost:8080 is the backend, and for the backend container localhost:80 is the frontend.
As with any containerized application, you should ensure that it is listening on interfaces other than 127.0.0.1 if you want traffic from outside.
Migrating an application from a single server - where every process talks over 127.0.0.1 - to a pod is intended to be as simple as on a dedicated machine.
Your nginx.conf looks a little bit strange; it should be location /api/ {.
Here is a functional example:
nginx.conf
server {
    server_name localhost;
    listen 0.0.0.0:80;
    error_page 500 502 503 504 /50x.html;
    location / {
        root html;
    }
    location /api/ {
        proxy_pass http://127.0.0.1:8080/;
    }
}
Create a ConfigMap named nginx from this file:
kubectl create configmap nginx --from-file=nginx.conf
app.js
const express = require('express')
const app = express()
const port = 8080
app.get('/', (req, res) => res.send('Hello from Express!'))
app.listen(port, () => console.log(`Example app listening on port ${port}!`))
Dockerfile
FROM alpine
RUN apk add nodejs npm && mkdir -p /app
COPY . /app
WORKDIR /app
RUN npm install express --save
EXPOSE 8080
CMD node app.js
You can build this image or use the one I have made: hectorvido/express.
Then, create the pod YAML definition:
pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: front-back
  labels:
    app: front-back
spec:
  containers:
    - name: front
      image: nginx:alpine
      volumeMounts:
        - name: nginx-conf
          mountPath: /etc/nginx/conf.d/
      ports:
        - containerPort: 80
    - name: back
      image: hectorvido/express
      ports:
        - containerPort: 8080
  volumes:
    - name: nginx-conf
      configMap:
        name: nginx
Put on the cluster:
kubectl create -f pod.yml
Get the IP:
kubectl get pods -o wide
I tested with Minikube, so if the pod IP was 172.17.0.7 I have to do:
minikube ssh
curl -L 172.17.0.7/api
If you have an ingress in front, it should still work. I enabled the nginx ingress controller on minikube, so we need to create a service and an ingress:
service
kubectl expose pod front-back --port 80
ingress.yml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: front-back
spec:
  rules:
    - host: fb.192-168-39-163.nip.io # minikube ip
      http:
        paths:
          - path: /
            backend:
              serviceName: front-back
              servicePort: 80
The test still works:
curl -vL http://fb.192-168-39-163.nip.io/api/

k8s not able to reach the database

Here is my docker image
FROM mcr.microsoft.com/dotnet/core/aspnet:2.2-alpine3.8 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
COPY ./xyz/publish .
ENV ASPNETCORE_URLS=https://+:443;http://+:80
ENTRYPOINT ["dotnet","abc/xyz.dll"]
Here is my Deployment.yaml file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: xyzdemo
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      papi: web
  template:
    metadata:
      labels:
        papi: web
    spec:
      containers:
        - name: xyzdemo-site
          image: xyz.azurecr.io/abc:31018
          ports:
            - containerPort: 443
      imagePullSecrets:
        - name: secret
---
apiVersion: v1
kind: Service
metadata:
  name: xyzdemo-entrypoint
  namespace: default
spec:
  type: LoadBalancer
  selector:
    papi: web
  ports:
    - port: 44328
      targetPort: 443
Here is my appsettings file
"Server": "xyz.database.windows.net",
"Database": "pp",
"User": "ita",
"Password": "password",
Using all these, I deployed the application into the k8s cluster and am able to open the application from the browser; however, when I try to get info from the database, the application hits the network-related error below after a while.
System.Data.SqlClient.SqlException (0x80131904): A network-related or
instance-specific error occurred while establishing a connection to
SQL Server
I tried going inside the pod and ran the ls command; I can see my application settings file, and when I cat the application settings I can see the correct credentials, so I don't know what to do and am not sure why it is not able to connect to the database.
So finally I tried adding the SQL connection settings as environment variables on the pod, and then it started working. When I remove those, it does not connect.
Now I removed the environment variables that hold the SQL connection settings and then checked the logs on the pod.
It says it can't connect to the database: 'Empty' and server: 'Empty'.
I am not sure why it is picking up empty values when it has the details inside the appsettings.json file.
Well, I do not see the config your k8s application uses to connect to the database. Importantly, where is your database hosted? How does papi: web connect to the database?
I also suspect your service does not have the appropriate port redirection. From your service.yaml above, the external port 44328 is mapped to 443 inside the pod. What is 44328? What is supposed to be listening there? Your application makes no mention of 44328. (Refer to the Dockerfile.)
I would revise your service.yaml to look something like this:
apiVersion: v1
kind: Service
metadata:
  name: xyzdemo-entrypoint
  namespace: default  # this is inferred anyway
spec:
  selector:
    papi: web
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: xxxx  # where your web server is listening (from your Dockerfile this is 80, but it can be any valid TCP port)
    - name: https
      protocol: TCP
      port: 443
      targetPort: xxxx  # https for your web server (from your Dockerfile this is 443; again, it can be any TCP port)
Opening up a database server to the internet is not good practice; it is a big security threat. A good pattern is to have your web server communicate with the database server via the internal DNS that k8s maintains (this assumes your database server is also a container, something like KubeDB; if not, your database server will have to be reachable via some sort of proxy that whitelists known hosts and only allows those, e.g. the Cloud SQL proxy in GCP).
Depending on how your database server is hosted, you will have to configure it to allow or whitelist your containerized application (the IP you get after applying service.yaml). Only then will your k8s app be able to reach the respective db.
I suspect you need to allow connections on the Azure SQL firewall for this to work. Using the portal would be the easiest way. You can just allow all, or allow Azure services for starters (assuming your Kubernetes cluster is inside Azure), and narrow it down later (if this is the culprit).
https://learn.microsoft.com/en-us/azure/sql-database/sql-database-firewall-configure#use-the-azure-portal-to-manage-server-level-ip-firewall-rules
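Since the asker notes that the connection only works when the SQL settings are supplied as environment variables, a minimal sketch of injecting them into the Deployment could look like the following (the variable name is illustrative and must match whatever key your appsettings.json and configuration code actually read; in practice the password should come from a Secret rather than being inlined):
    spec:
      containers:
        - name: xyzdemo-site
          image: xyz.azurecr.io/abc:31018
          env:
            - name: ConnectionStrings__Default  # hypothetical key, adjust to your config
              value: "Server=xyz.database.windows.net;Database=pp;User Id=ita;Password=password;"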

Side-car Traefik container route to ports in Kubernetes

I am running a Node.js image in my Kubernetes pod, exposing a specific port (9080), and running Traefik as a side-car container acting as a reverse proxy. How do I specify the Traefik route from the Deployment template?
Deployment
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  labels:
    app: web
  name: web-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - image: "nodeJS-image"
          name: web
          ports:
            - containerPort: 9080
              name: http-server
        - image: "traefik-image"
          name: traefik-proxy
          ports:
            - containerPort: 80
              name: traefik-proxy
            - containerPort: 8080
              name: traefik-ui
          args:
            - --web
            - --kubernetes
If I understand correctly, you want to forward requests hitting the Traefik container to the Node.js application living in the same pod. Given that the application is configured statically from Traefik's perspective, you can simply mount a proper file provider configuration into the Traefik pod (presumably via a ConfigMap) pointing at the side car container.
The simplest way to achieve this (as documented) is to append the following file provider configuration at the bottom of Traefik's TOML configuration file:
[file]
[backends.backend.servers.server]
url = "http://127.0.0.1:9080"
[frontends.frontend]
backend = "backend"
[frontends.frontend.routes.route]
host = "machine-echo.example.com"
If you mount the TOML configuration file into the Traefik pod under a path other than the default one (/etc/traefik.toml), you will also need to pass the --configFile option in the manifest referencing the correct location of the file.
After that, any request hitting the Traefik container on port 80 with a host header of machine-echo.example.com should get forwarded to the Node.js side car container on port 9080.
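A rough sketch of wiring this up, assuming the resulting TOML is stored in a ConfigMap named traefik-conf and mounted at /etc/traefik (both names are illustrative):
apiVersion: v1
kind: ConfigMap
metadata:
  name: traefik-conf
data:
  traefik.toml: |
    [file]
    [backends.backend.servers.server]
    url = "http://127.0.0.1:9080"
    [frontends.frontend]
    backend = "backend"
    [frontends.frontend.routes.route]
    host = "machine-echo.example.com"
In the traefik-proxy container of the Deployment above, mount it and point Traefik at it:
        volumeMounts:
          - name: traefik-conf
            mountPath: /etc/traefik
        args:
          - --web
          - --kubernetes
          - --configFile=/etc/traefik/traefik.toml
and add the matching volume at the pod level:
      volumes:
        - name: traefik-conf
          configMap:
            name: traefik-conf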

Resources