Ports not receiving fetch request in k8s - node.js

I have a k8s cluster running locally on an arm Mac. I have client and server pods. The client is a React frontend. The server is an express server connecting to a mongodb Atlas cluster.
So far the images build fine and all pods are running.
The problem is the internal port routing is not working. All I see is
GET http://localhost:5000/ net::ERR_CONNECTION_REFUSED
And the referrer policy in the Network tab suggests a CORS error:
Referrer Policy: strict-origin-when-cross-origin
I'm not sure what I need to do to get my client to fetch from the right port. So far I have this, which works outside of k8s:
const getAllUsers = () => {
  fetch("http://localhost:5000/")
    .then((res) => res.text())
    .then((res) => {
      return setUsers(JSON.parse(res));
    });
};
The server has this code to handle the request:
app.get("/", (req, res) => {
  usersCollection
    .find()
    .toArray()
    .then((results) => {
      res.json(results);
    })
    .catch((error) => console.error(error));
});
My server ClusterIP service is like this:
apiVersion: v1
kind: Service
metadata:
  name: server-cluster-ip-service
spec:
  type: ClusterIP
  selector:
    component: server
  ports:
    - port: 5000
      targetPort: 5000
And my server deployment is like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      component: server
  template:
    metadata:
      labels:
        component: server
    spec:
      containers:
        - name: server
          image: mydocker/k8s-server
          ports:
            - containerPort: 5000
          env:
            - name: REDIS_HOST
              value: redis-cluster-ip-service
            - name: REDIS_PORT
              value: "6379"
When I check the server deployment logs I get:
> express_mongodb#1.0.0 dev /app
> nodemon server.js
[nodemon] 2.0.12
[nodemon] to restart at any time, enter `rs`
[nodemon] watching path(s): *.*
[nodemon] watching extensions: js,mjs,json
[nodemon] starting `node server.js`
Connected to Database
listening on 5000
Which is great - it's listening on the port I specified with
app.listen(PORT, function () {
  console.log("listening on 5000");
});
There's something I'm not getting here. The first thing I want to do is make sure my server is connected to my client on port 5000 - what am I doing wrong?
EDIT: After a long time looking at CORS error fixes, maybe it isn't a CORS error? I curl the service ip with:
curl my.ip.##.##
Then try:
curl localhost:5000
But the requests time out.
I use kubectl describe service server-cluster-ip-service
and get back
Name: server-cluster-ip-service
Namespace: default
Labels: <none>
Annotations: <none>
Selector: component=server
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.###.##.##
IPs: 10.###.##.##
Port: <unset> 5000/TCP
TargetPort: 5000/TCP
Endpoints: 172.##.#.##:5000,172.##.#.##,172.##.#.##
Session Affinity: None
Events: <none>

Ideally, you should be using the service name to communicate between services inside the cluster.
If your React app wants to talk to the Express server, you have to use the Express service name as the host in the React service; Kubernetes will manage the resolution automatically.
So for you it will be server-cluster-ip-service:
const getAllUsers = () => {
  fetch("http://server-cluster-ip-service:5000/")
    .then((res) => res.text())
    .then((res) => {
      return setUsers(JSON.parse(res));
    });
};
Just as all services on a single machine or host can talk to each other on localhost, services in a Kubernetes cluster use the service name to resolve each other.
Reference document, DNS for Services and Pods: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
Your Express pod or application will listen on 0.0.0.0 port 5000, and there will be one Kubernetes service, as you created now, with target port 5000.
Your client or any other application inside the cluster will call this service by its service name, and the request gets redirected to a container (pod) of the Express server.
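To verify the in-cluster DNS and routing independently of the client code, one quick check is to curl the service by name from a throwaway pod (a sketch; the pod name curl-test is arbitrary and it assumes the busybox image is pullable):

kubectl run curl-test --rm -it --image=busybox --restart=Never -- \
  wget -qO- http://server-cluster-ip-service:5000/

If that prints your users JSON, the service, selector, and DNS are all working, and the problem is on the client side.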

Related

Error Connecting to ejabberd running on kubernetes from node.js

I'm trying to create a chat application to enhance my portfolio.
For it I'm using XMPP as my messaging protocol, so I'm running ejabberd on Kubernetes with the following configuration:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ejabberd-depl
  labels:
    app: ejabberd
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ejabberd
  template:
    metadata:
      labels:
        app: ejabberd
    spec:
      containers:
        - name: ejabberd
          image: ejabberd/ecs
          ports:
            - containerPort: 5280
            - containerPort: 5222
            - containerPort: 5269
---
apiVersion: v1
kind: Service
metadata:
  name: ejabberd-srv
spec:
  selector:
    app: ejabberd
  ports:
    - name: ejabberd
      protocol: TCP
      port: 5222
      targetPort: 5222
    - name: admin
      protocol: TCP
      port: 5280
      targetPort: 5280
    - name: encrypted
      protocol: TCP
      port: 5269
      targetPort: 5269
I'm exposing ejabberd to my entire app using the service.
I connect to ejabberd using the package "@xmpp/client":
import { client, xml, jid } from '@xmpp/client'

const xmpp = client({
  service: "wss://ejabberd-srv:5222/xmpp-websocket",
  domain: "ejabberd-srv",
  username: "username",
  password: "password",
})

xmpp.on('online', () => {
  console.log("connected to xmpp server");
})

xmpp.on('error', (err) => {
  console.log("Connected but error");
})

xmpp.start().catch(() => console.log("error in xmpp start"));
When I run the app, the Node.js side keeps giving me errors saying "SSL wrong version number /deps/openssl/ssl/record/ssl3_record.c:354:"
But when I check the logs of ejabberd it plainly says "Accepted connection"
I know almost nothing about Kubernetes and Node.js, but I have experience with ejabberd and Docker, so maybe I can give a useful hint:
Looking at the configuration you showed, and assuming you use the default ejabberd config from https://github.com/processone/docker-ejabberd/blob/master/ecs/conf/ejabberd.yml#L38, that configuration file says that:
port 5222 is used for XMPP C2S connections,
port 5280 is used for HTTP connections,
and port 5443 is used for HTTPS, including WebSocket, BOSH, web admin, etc.
service: "wss://ejabberd-srv:5222/xmpp-websocket",
In your case, since your client uses WebSocket to connect to ejabberd, you should open port 5443 in Kubernetes and tell your client to use a URL like "wss://ejabberd-srv:5443/ws".
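A minimal sketch of that change, assuming the stock ejabberd/ecs image (the port name "websocket" is my own label): add 5443 to the deployment's container ports and a matching entry to the service, e.g.

# Deployment: add the HTTPS/WebSocket port to the container
ports:
  - containerPort: 5280
  - containerPort: 5222
  - containerPort: 5269
  - containerPort: 5443
---
# Service: additional entry under spec.ports
- name: websocket
  protocol: TCP
  port: 5443
  targetPort: 5443

The client then connects to wss://ejabberd-srv:5443/ws as described above.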

Unable to connect with gRPC when deployed with kubernetes

I'm trying to deploy a gRPC server with kubernetes, and connect to it outside the cluster.
The relevant part of the server:
function main() {
  var hello_proto = grpc.loadPackageDefinition(packageDefinition).helloworld;
  var server = new grpc.Server();
  server.addService(hello_proto.Greeter.service, { sayHello: sayHello });
  const url = '0.0.0.0:50051';
  server.bindAsync(url, grpc.ServerCredentials.createInsecure(), () => {
    server.start();
    console.log("Started server! on " + url);
  });
}

function sayHello(call, callback) {
  console.log('Hello request');
  callback(null, { message: 'Hello ' + call.request.name + ' from ' + require('os').hostname() });
}
And here is the relevant part of the client:
function main() {
  var target = '0.0.0.0:50051';
  let pkg = grpc.loadPackageDefinition(packageDefinition);
  let Greeter = pkg.helloworld["Greeter"];
  var client = new Greeter(target, grpc.credentials.createInsecure());
  var user = "client";
  client.sayHello({ name: user }, function (err, response) {
    console.log('Greeting:', response.message);
  });
}
When I run them manually with Node.js, as well as when I run the server in a Docker container (the client is still run with node, without a container), it works just fine.
The Dockerfile, run with the command docker run -it -p 50051:50051 helloapp:
FROM node:carbon

# Create app directory
WORKDIR /usr/src/app

COPY package.json .
COPY package-lock.json .

RUN npm install

COPY . .

CMD npm start
However, when I deploy the server with Kubernetes (again, the client isn't run within a container), I'm not able to connect.
The yaml file is as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helloapp
  strategy: {}
  template:
    metadata:
      labels:
        app: helloapp
    spec:
      containers:
        - image: isolatedsushi/helloapp
          name: helloapp
          ports:
            - containerPort: 50051
              name: helloapp
          resources: {}
status: {}
---
apiVersion: v1
kind: Service
metadata:
  name: helloservice
spec:
  selector:
    app: helloapp
  ports:
    - name: grpc
      port: 50051
      targetPort: 50051
The deployment and the service start up just fine
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
helloservice ClusterIP 10.105.11.22 <none> 50051/TCP 17s
kubectl get pods
NAME READY STATUS RESTARTS AGE
helloapp-dbdfffb-brvdn 1/1 Running 0 45s
But when I run the client it can't reach the server.
Any ideas what I'm doing wrong?
As mentioned in the comments:
ServiceTypes
If you have exposed your service as ClusterIP, it's visible only internally in the cluster; if you want to expose your service externally, you have to use either NodePort or LoadBalancer (see the NodePort sketch below).
Publishing Services (ServiceTypes)
For some parts of your application (for example, frontends) you may want to expose a Service onto an external IP address, that's outside of your cluster.
Kubernetes ServiceTypes allow you to specify what kind of Service you want. The default is ClusterIP.
Type values and their behaviors are:
ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default ServiceType.
NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
LoadBalancer: Exposes the Service externally using a cloud provider's load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
ExternalName: Maps the Service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value. No proxying of any kind is set up.
Related documentation about that.
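For example, a minimal NodePort variant of the existing service would look like this (the nodePort value 30051 is just an illustration; any free port in the default 30000-32767 range works):

apiVersion: v1
kind: Service
metadata:
  name: helloservice
spec:
  type: NodePort
  selector:
    app: helloapp
  ports:
    - name: grpc
      port: 50051
      targetPort: 50051
      nodePort: 30051

The client would then target <NodeIP>:30051 instead of 0.0.0.0:50051.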
Minikube
With minikube you can achieve that with the minikube service command.
There is documentation about minikube service and there is an example.
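For instance, assuming the helloservice above in the default namespace, this prints a URL that is reachable from the host:

minikube service helloservice --url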
grpc http/https
As mentioned here by @murgatroid99:
The gRPC library does not recognize the https:// scheme for addresses, so that target name will cause it to try to resolve the wrong name. You should instead use grpc-server-xxx.com:9090 or dns:grpc-server-xxx.com:9090 or dns:///grpc-server-xxx.com:9090. More detailed information about how gRPC interprets channel target names can be found in this documentation page.
As it does not recognize https I assume it's the same for http, so it's not possible.
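So inside the cluster the channel target is a plain host:port with no scheme; for example, a sketch reusing the service name from above:

// no http:// or https:// scheme; plain host:port (optionally dns:///host:port)
var client = new Greeter('helloservice:50051', grpc.credentials.createInsecure());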
kubectl port-forward
Additionally, as @IsolatedSushi mentioned:
It also works when I portforward with the command kubectl -n hellospace port-forward svc/helloservice 8080:50051
As mentioned here
Kubectl port-forward allows you to access and interact with internal Kubernetes cluster processes from your localhost. You can use this method to investigate issues and adjust your services locally without the need to expose them beforehand.
There is an example in documentation.
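As a sketch of the client-side change that goes with it (namespace and ports as in the quote above):

// with `kubectl -n hellospace port-forward svc/helloservice 8080:50051` running,
// the client reaches the service through the forwarded local port:
var target = 'localhost:8080';
var client = new Greeter(target, grpc.credentials.createInsecure());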

Websocket connection fails for internal communication within a Kubernetes container

I am using Kubernetes to deploy my React application. Due to the database I am using (RethinkDB), I have to initiate a WebSocket connection between my React application and a Node.js server that proxies to the database instance. The connection works as intended when I deploy the database instance, backend Node server, and React application on my local machine. However, when I deploy the application to Kubernetes I receive the error
WebSocket connection to 'ws://localhost:8015/' failed: Error in connection establishment: net::ERR_CONNECTION_REFUSED
within my react application.
For further debugging, I opened a terminal within the container and ran a curl command against the Service that connects to the database, and received no errors. I also ran the backend Node server to see whether it connects to the remote database, and saw no issues there. Finally, I tested whether the backend server initiates the WebSocket as intended using wscat, and the WebSocket connection is working. The fact that the application runs well on my local machine leads me to believe that the issue of the React application connecting to the WebSocket could be caused by how Kubernetes handles WebSocket connections. Any clarification on the issue is gladly appreciated.
P.S.
I have added the backend server code, the code within the React application that connects to the WebSocket, and the YAML files of my React + backend deployment. If any more files are required, please feel free to comment.
backend node server
const http = require('http');
var rethinkdbWebsocketServer = require('rethinkdb-websocket-server');

const httpServer = http.createServer();
rethinkdbWebsocketServer.listen({
  httpServer: httpServer,
  httpPath: '/',
  dbHost: remoteDB_IP,
  dbPort: 28015,
  unsafelyAllowAnyQuery: true
});
httpServer.listen(8015);
React code that connects to the Websocket
ReactRethinkdb.DefaultSession.connect({
  host: 'localhost', // the websocket server
  port: process.env.REACT_APP_WEBSOCKET_PORT,
  path: '/',
  secure: false,
  autoReconnectDelayMs: 2000, // when disconnected, millis to wait before reconnect
});
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dashboard
  labels:
    app: dashboard
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dashboard
  template:
    metadata:
      labels:
        app: dashboard
    spec:
      containers:
        - name: dashboard
          image: myrepor/dashbaord
          imagePullPolicy: Always
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: dashboard
spec:
  selector:
    app: dashboard
  ports:
    - port: 3000
      targetPort: 3000
  type: LoadBalancer
The Kubernetes idea is all about running each application in a separate pod and connecting them through services.
This way you can deploy each layer separately without downtime - a service routes to the old pod(s) until the new ones are up and running - and, even more importantly, you can scale them separately.
Kubernetes does some kind of name resolution inside each cluster (I'm not exactly sure whether that is a full DNS or not), thus I would recommend:
- separate your Node server into its own pod, and deploy a service for it with your WS port (8015);
- run your React app in a separate pod with its own service, and define the Node server's service name as the endpoint for WS.
The reason is simple: localhost is not even guaranteed to resolve correctly within the pod. A sketch of that split follows below.
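A minimal sketch of that split (the names websocket-proxy and myrepo/ws-proxy are placeholders, not from the original setup):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: websocket-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: websocket-proxy
  template:
    metadata:
      labels:
        app: websocket-proxy
    spec:
      containers:
        - name: websocket-proxy
          image: myrepo/ws-proxy   # placeholder image for the Node proxy
          ports:
            - containerPort: 8015
---
apiVersion: v1
kind: Service
metadata:
  name: websocket-proxy
spec:
  selector:
    app: websocket-proxy
  ports:
    - port: 8015
      targetPort: 8015

The React session would then connect with host: 'websocket-proxy' instead of 'localhost'.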

Fastify not working on Docker / Kubernetes

I have a very simple app that returns a "Hello World" string, and it works fine locally. As you will see from the app code below, it runs on port 4000. When I create a Docker image and run a container, I can't access it from localhost:4000 on my machine, but I can see that Docker got to the node index.js command correctly and the app is running without any errors.
I also tried to deploy it to a Kubernetes cluster; when I access the load balancer IP I get ERR_EMPTY_RESPONSE. After inspecting the app through kubectl I can see that everything is running fine: the image was downloaded and the pod is running.
I'm struggling to understand what I missed and why it only works locally.
NodeJS app
import fastify from 'fastify';

const server = fastify();

server.get('/', (_request, reply) => {
  reply.status(200).send("Hello World");
});

server.listen(4000, error => {
  if (error) {
    process.exit(1);
  }
});
Dockerfile
FROM node:14.2-alpine
WORKDIR /app
COPY package.json yarn.lock /app/
RUN yarn
COPY . .
EXPOSE 4000
CMD ["node", "index.js"]
Kubernetes manifest
---
# Load balancer
apiVersion: v1
kind: Service
metadata:
  name: development-actions-lb
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-name: "development-actions-lb"
    service.beta.kubernetes.io/do-loadbalancer-algorithm: "round_robin"
spec:
  type: LoadBalancer
  selector:
    app: development-actions
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 4000
---
# Actions deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: development-actions
spec:
  replicas: 1
  selector:
    matchLabels:
      app: development-actions
  template:
    metadata:
      labels:
        app: development-actions
    spec:
      containers:
        - image: registry.digitalocean.com/myapp/my-image:latest
          name: development-actions
          ports:
            - containerPort: 4000
              protocol: TCP
      imagePullSecrets:
        - name: registry-myapp
First, when I tried your code, I ran it using local Docker, and the behavior was just the same, so I expect the cause to be that fastify by default only listens on localhost. I tried it with:
docker build -t development-actions:latest .
docker run -it -p 4000:4000 development-actions:latest
Inside Docker you should mention '0.0.0.0' explicitly, since by default fastify listens only on the localhost 127.0.0.1 interface. To listen on all available IPv4 interfaces, the example should be modified to listen on 0.0.0.0, so I changed it to the following:
const server = require('fastify')({ logger: true })

server.get('/', (_request, reply) => {
  reply.status(200).send("Hello World");
});

server.listen(4000, '0.0.0.0', error => {
  if (error) {
    process.exit(1);
  }
});
The rest should be the same. To try it locally, you can repeat the docker commands from above and hit the endpoint; a quick sanity check (the curl line is my own addition) might be:
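docker build -t development-actions:latest .
docker run -it -p 4000:4000 development-actions:latest
curl localhost:4000   # should print "Hello World" now that fastify binds to 0.0.0.0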
Reference:
https://www.fastify.io/docs/latest/Getting-Started/#your-first-server

Kubernetes NodePort is not accessible outside. Connection refused

Hi, I am trying to learn Kubernetes.
What I am trying to do, using minikube, is the following:
1.) Write a simple server using Node
2.) Write a Dockerfile for that particular Node server
3.) Create a kubernetes deployment
4.) Create a service (of type ClusterIP)
5.) Create a service (of type NodePort) to expose the container so I can access from outside (browser, curl)
But when I try to connect to the NodePort with the format <NodeIP>:<NodePort>, it gives an error Failed to connect to 192.168.39.192 port 80: Connection refused
These are the files I created as steps mentioned above (1-5).
1.) server.js - Here I have only included server.js; the relevant package.json exists and everything works as expected when I run the server locally (without deploying it in Docker). I mention this in case you were going to ask whether my server works correctly - yes, it does :)
'use strict';

const express = require('express');

// Constants
const PORT = 8080;
const HOST = '0.0.0.0';

// App
const app = express();
app.get('/', (req, res) => {
  res.send('Hello world\n');
});

app.listen(PORT, HOST);
console.log(`Running on http://${HOST}:${PORT}`);
2.) Dockerfile
FROM node:10
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./
RUN npm install
# If you are building your code for production
# RUN npm ci --only=production
# Bundle app source
COPY . .
EXPOSE 8080
CMD [ "npm", "start" ]
3.) deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-web-app
spec:
  replicas: 2
  selector:
    matchLabels:
      name: node-web-app
  template:
    metadata:
      labels:
        # you can specify any labels you want here
        name: node-web-app
    spec:
      containers:
        - name: node-web-app
          # image must be the same as you built before (name:tag)
          image: banuka/node-web-app
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
          imagePullPolicy: Never
      terminationGracePeriodSeconds: 60
4.) clusterip.yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    # these labels can be anything
    name: node-web-app-clusterip
  name: node-web-app-clusterip
spec:
  selector:
    app: node-web-app
  ports:
    - protocol: TCP
      port: 80
      # target is the port exposed by your containers (in our example 8080)
      targetPort: 8080
5.) NodePort.yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    name: node-server-nodeport
  name: node-server-nodeport
spec:
  # this will make the service a NodePort service
  type: NodePort
  selector:
    app: node-app-web
  ports:
    - protocol: TCP
      # new -> this will be the port used to reach it from outside
      # if not specified, a random port will be used from a specific range (default: 30000-32767)
      nodePort: 32555
      port: 80
      targetPort: 8080
And when I try to curl from outside or use my web browser, it gives the following error:
curl: (7) Failed to connect to 192.168.39.192 port 32555: Connection refused
ps: pods and containers are also working as expected.
There are several possible reasons for this.
First: are you using your local IP, or the IP where the minikube VM is running? To verify, use minikube ip.
Second: the NodePort service wants to select pods with the label app: node-app-web, but your pods only have the label name: node-web-app.
Just to make sure the port you assume is actually used, check with minikube service list that the requested port was allocated. Check your firewall settings as well.
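For instance (service name as in NodePort.yaml above):

minikube ip                                  # the IP you should be curling
minikube service list                        # shows which ports were actually allocated
minikube service node-server-nodeport --url  # prints a URL that is reachable from the host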
I had the same problem whenever I wrote wrong selectors into a NodePort service spec.
The Service's selector must match the Pod's labels.
In your NodePort.yaml the selector is app: node-app-web, while in deployment.yaml the pod label is name: node-web-app. A corrected spec is sketched below.
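A minimal NodePort.yaml spec with the selector fixed (assuming you keep the name label the deployment already uses):

spec:
  # this will make the service a NodePort service
  type: NodePort
  selector:
    # must match the pod template's labels in deployment.yaml
    name: node-web-app
  ports:
    - protocol: TCP
      nodePort: 32555
      port: 80
      targetPort: 8080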
