WebSocket connection fails for internal communication within a Kubernetes container - Node.js

I am using Kubernetes to deploy my React application. Due to the database I am using (RethinkDB), I have to initiate a WebSocket connection between my React application and a Node.js server that proxies to the database instance. The connection works as intended when I deploy the database instance, the backend Node server and the React application on my local machine. However, when I deploy the application in Kubernetes I receive the error
WebSocket connection to 'ws://localhost:8015/' failed: Error in connection establishment: net::ERR_CONNECTION_REFUSED
within my React application.
For further debugging, I opened a terminal within the container and ran a curl command against the Service that connects to the database, and received no errors. I also ran the backend Node server to see whether it connects to the remote database and saw no issues there. Finally, I tested whether the backend server initiates the WebSocket as intended using wscat, and the WebSocket connection is working. The fact that the application runs well on my local machine leads me to believe that the issue with the React application connecting to the WebSocket could be caused by how Kubernetes handles WebSocket connections. Any clarification on the issue is gladly appreciated.
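For reference, the wscat check would have looked something like this (a sketch; the address assumes the proxy port from the backend code below):

wscat -c ws://localhost:8015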
P.S.
I have added the backend server code, the code within the React application that connects to the WebSocket, and the YAML files of my React + backend Deployment. If any more files are required, please feel free to comment.
Backend Node server:
const http = require('http');
const rethinkdbWebsocketServer = require('rethinkdb-websocket-server');

const httpServer = http.createServer();

// Accept WebSocket connections and proxy the queries to the remote RethinkDB instance.
rethinkdbWebsocketServer.listen({
  httpServer: httpServer,
  httpPath: '/',
  dbHost: remoteDB_IP,
  dbPort: 28015,
  unsafelyAllowAnyQuery: true,
});

httpServer.listen(8015);
React code that connects to the WebSocket:
ReactRethinkdb.DefaultSession.connect({
  host: 'localhost', // the websocket server
  port: process.env.REACT_APP_WEBSOCKET_PORT,
  path: '/',
  secure: false,
  autoReconnectDelayMs: 2000, // when disconnected, millis to wait before reconnect
});
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dashboard
  labels:
    app: dashboard
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dashboard
  template:
    metadata:
      labels:
        app: dashboard
    spec:
      containers:
        - name: dashboard
          image: myrepor/dashbaord
          imagePullPolicy: Always
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: dashboard
spec:
  selector:
    app: dashboard
  ports:
    - port: 3000
      targetPort: 3000
  type: LoadBalancer

The Kubernetes idea is all about running each application in a separate pod and connecting them through Services.
This way you can deploy each layer separately, without downtime - a Service routes to the old pod(s) until the new ones are up and running (and, even more importantly, you can scale each layer separately).
Kubernetes also does name resolution inside each cluster: the cluster DNS resolves Service names, so pods can reach a Service by its name.
Thus I would recommend:
separate your Node server into its own pod, and deploy a Service for it with your WS port (8015);
run your React app in a separate pod with its own Service, and use the Node server's Service name as the endpoint for the WebSocket.
The reason is simple - I am not even sure that localhost is going to be resolved correctly within the pod. A minimal sketch of this setup follows.
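Something like this, as a rough sketch - the Service name dashboard-ws and the label dashboard-backend are hypothetical, and this assumes the WebSocket endpoint is reachable from wherever the connect() call actually executes:

apiVersion: v1
kind: Service
metadata:
  name: dashboard-ws           # hypothetical name for the Node proxy Service
spec:
  selector:
    app: dashboard-backend     # must match the backend pod's labels
  ports:
    - port: 8015
      targetPort: 8015

ReactRethinkdb.DefaultSession.connect({
  host: 'dashboard-ws',        // Service name instead of localhost
  port: 8015,
  path: '/',
  secure: false,
  autoReconnectDelayMs: 2000,
});

One caveat: if connect() runs in the user's browser rather than inside the cluster, the Service name will not resolve there; in that case the WebSocket Service itself has to be exposed externally (for example as a LoadBalancer) and the browser pointed at that external address.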

Related

MQTT Connection Fails on Google Container OS

For my setup, I'm working with a third-party VerneMQ MQTT broker hosted in AWS. I have been given username/password credentials to connect over secure MQTT (port 8883) using a specific clientId. My goal (though irrelevant to the issue at hand) is merely to subscribe to a topic and redirect traffic from the topic to Google PubSub.
I wrote a simple Node.js program to make said connection, and it works beautifully when run locally through ts-node:
const client = connect(`mqtts://${process.env.MQTT_HOST}`, {
  port: parseInt(process.env.MQTT_PORT, 10),
  clientId: process.env.MQTT_CLIENT_ID,
  username: process.env.MQTT_USERNAME,
  password: process.env.MQTT_PASSWORD,
  rejectUnauthorized: false,
});

client.on('error', handleError);

client.on('connect', (p) => {
  console.log('connect', JSON.stringify(p));
  client.subscribe({ [mqttTopic]: { qos: 0 } });
});

client.on('message', (topic, msg) => onMessageReceived(msg));
I then proceeded to Dockerize it
FROM node:lts-alpine
RUN apk update
WORKDIR /app
COPY . .
RUN npm i
EXPOSE 8883
CMD ["npm", "start"]
and that runs perfectly fine locally through docker run.
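Roughly like this, with the MQTT_* variables supplied at run time (the .env file name is an assumption):

docker build -t test-mqtt .
docker run --env-file .env test-mqtt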
The trouble started when I loaded the image into Google's Compute Engine using their "Deploy a container image to this VM instance" option, which uses a container-optimized OS image. When I checked the logs, the code was reaching out with a connect packet, but the connection would always immediately close.
I thought this might be an issue with how I did the deployment, so to verify, I spun up a standard Debian VM, and upon installing Docker and running my image just like I did it locally, it worked just fine! So it's not that Docker is failing remotely.
I considered that perhaps the deployment through Compute Engine was just weird, but it was simpler than standing up a Kubernetes cluster when I just needed the single image. Given my issues, I went ahead and spent the time to stand everything up in GKE. The logs reported the exact same messages as when the image was deployed through Compute Engine. Here's the YAML:
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: test-mqtt
  labels:
    app: test-mqtt
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-mqtt
  serviceName: test-mqtt-service
  template:
    metadata:
      labels:
        app: test-mqtt
    spec:
      containers:
        - name: mqtt
          image: us-central1-docker.pkg.dev/{GCP_PROJECT}/docker/test-mqtt
          ports:
            - name: mqtt-ssl
              containerPort: 8883
---
apiVersion: v1
kind: Service
metadata:
  name: test-mqtt-service
  labels:
    app: test-mqtt
spec:
  ports:
    - name: mqtt-ssl
      port: 8883
  selector:
    app: test-mqtt
  type: LoadBalancer
After all this, I thought for sure this was a port issue, so I checked and double-checked the firewalls, both for the vNIC and internally (as Google suggested might be the case - that didn't change anything). I can reach out over the port and run applications over the port, and it still fails even when I open all the ports to the world. To triple-check the ports, I went ahead and changed the code to reach out to https://test.mosquitto.org and verified that I could still reach their server using port 8883. So it can't be the port.
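That check was along these lines (a sketch, not the exact code; test.mosquitto.org does serve MQTT over TLS on 8883):

const { connect } = require('mqtt');

// Connectivity probe against the public Mosquitto test broker.
const probe = connect('mqtts://test.mosquitto.org', {
  port: 8883,
  rejectUnauthorized: false,
});
probe.on('connect', () => { console.log('port 8883 reachable'); probe.end(); });
probe.on('error', (err) => console.error('unreachable:', err.message));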
I've come to the conclusion that some combination of OS (it worked in Debian with a manual deploy) and broker (it worked for the Mosquitto Test broker) is making this not work, but I feel like I've exhausted all possibilities.
What more can I check to make this work? I feel like it has to be something simple that I'm missing, but I've spent days on this to no avail.

Unable to access my application from outside kubernetes cluster

I have deployed a Kubernetes cluster on the Azure platform. My application is hosted in Azure Docker, and I deployed the Docker image into my Kubernetes cluster. After deploying, the pods are running fine, but I was not able to access the application from outside (Postman/talentapi). An NGINX Ingress controller is installed inside the cluster, but I am still not able to access the application and get a "no response" error. Our application uses two different ports (5000 and 7003): port 7003 is used to connect to the application inside Docker, and port 5000 is used to connect from outside Docker. Here I am sharing my service.yaml file:
kind: Service
apiVersion: v1
metadata:
  name: kyc-service
  namespace: kyc
spec:
  #type: ClusterIP
  selector:
    app: kyc-app
  ports:
    - name: http
      protocol: TCP
      port: 5000
      targetPort: 5000
    - name: kycapp
      protocol: TCP
      port: 7003
      targetPort: 7003
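Note that with type omitted (the commented-out line would only set the default anyway), this is a ClusterIP Service, which is reachable only from inside the cluster. A minimal sketch of one way to expose it, assuming a cloud load balancer is acceptable, is to set the type explicitly:

spec:
  type: LoadBalancer   # without this, the Service stays cluster-internal

Alternatively, keep it as ClusterIP and route external traffic to it through an Ingress rule that points at kyc-service on port 5000.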

Unable to connect with gRPC when deployed with kubernetes

I'm trying to deploy a gRPC server with Kubernetes and connect to it from outside the cluster.
The relevant part of the server:
function main() {
  var hello_proto = grpc.loadPackageDefinition(packageDefinition).helloworld;
  var server = new grpc.Server();
  server.addService(hello_proto.Greeter.service, { sayHello: sayHello });
  const url = '0.0.0.0:50051';
  server.bindAsync(url, grpc.ServerCredentials.createInsecure(), () => {
    server.start();
    console.log("Started server! on " + url);
  });
}

function sayHello(call, callback) {
  console.log('Hello request');
  callback(null, { message: 'Hello ' + call.request.name + ' from ' + require('os').hostname() });
}
And here is the relevant part of the client:
function main() {
  var target = '0.0.0.0:50051';
  let pkg = grpc.loadPackageDefinition(packageDefinition);
  let Greeter = pkg.helloworld["Greeter"];
  var client = new Greeter(target, grpc.credentials.createInsecure());
  var user = "client";
  client.sayHello({ name: user }, function (err, response) {
    console.log('Greeting:', response.message);
  });
}
When I run them manually with Node.js, as well as when I run the server in a Docker container (the client is still run with node, without a container), it works just fine.
Here is the Dockerfile, run with the command docker run -it -p 50051:50051 helloapp:
FROM node:carbon

# Create app directory
WORKDIR /usr/src/app

COPY package.json .
COPY package-lock.json .
RUN npm install
COPY . .

CMD npm start
However, when I deploy the server with Kubernetes (again, the client isn't run within a container), I'm not able to connect.
The yaml file is as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helloapp
  strategy: {}
  template:
    metadata:
      labels:
        app: helloapp
    spec:
      containers:
        - image: isolatedsushi/helloapp
          name: helloapp
          ports:
            - containerPort: 50051
              name: helloapp
          resources: {}
status: {}
---
apiVersion: v1
kind: Service
metadata:
  name: helloservice
spec:
  selector:
    app: helloapp
  ports:
    - name: grpc
      port: 50051
      targetPort: 50051
The deployment and the service start up just fine:
kubectl get svc
NAME           TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)     AGE
helloservice   ClusterIP   10.105.11.22   <none>        50051/TCP   17s
kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
helloapp-dbdfffb-brvdn   1/1     Running   0          45s
But when I run the client it can't reach the server.
Any ideas what I'm doing wrong?
As mentioned in the comments:
ServiceTypes
If you have exposed your service as ClusterIP, it's visible only internally in the cluster; if you want to expose your service externally you have to use either NodePort or LoadBalancer.
Publishing Services (ServiceTypes)
For some parts of your application (for example, frontends) you may want to expose a Service onto an external IP address, one that is outside of your cluster.
Kubernetes ServiceTypes allow you to specify what kind of Service you want. The default is ClusterIP.
Type values and their behaviors are:
ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default ServiceType.
NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service from outside the cluster by requesting <NodeIP>:<NodePort>.
LoadBalancer: Exposes the Service externally using a cloud provider's load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
ExternalName: Maps the Service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value. No proxying of any kind is set up.
There is related documentation about that; a NodePort sketch for this Service follows below.
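For example, a minimal NodePort version of the helloservice above might look like this (the nodePort value 30051 is an arbitrary pick from the allowed 30000-32767 range):

apiVersion: v1
kind: Service
metadata:
  name: helloservice
spec:
  type: NodePort
  selector:
    app: helloapp
  ports:
    - name: grpc
      port: 50051
      targetPort: 50051
      nodePort: 30051  # reachable from outside the cluster as <NodeIP>:30051

The client's target then becomes <NodeIP>:30051 instead of 0.0.0.0:50051.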
Minikube
With minikube you can achieve that with the minikube service command.
There is documentation about minikube service, and there is an example.
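For the Service above (once it is a NodePort or LoadBalancer Service) that would be something like the following; the printed URL is illustrative:

minikube service helloservice --url
# http://192.168.49.2:30051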
grpc http/https
As mentioned here by @murgatroid99:
The gRPC library does not recognize the https:// scheme for addresses, so that target name will cause it to try to resolve the wrong name. You should instead use grpc-server-xxx.com:9090 or dns:grpc-server-xxx.com:9090 or dns:///grpc-server-xxx.com:9090. More detailed information about how gRPC interprets channel target names can be found in this documentation page.
As it does not recognize https://, I assume the same applies to http://, so that is not possible.
kubectl port-forward
Additionally, as @IsolatedSushi mentioned:
It also works when I portforward with the command kubectl -n hellospace port-forward svc/helloservice 8080:50051
As mentioned here:
kubectl port-forward allows you to access and interact with internal Kubernetes cluster processes from your localhost. You can use this method to investigate issues and adjust your services locally without the need to expose them beforehand.
There is an example in the documentation.

How to connect Redis with service in NodeJS in K8s cluster?

I am developing an application (car-app) that uses socket.io, and I am going to deploy it to a Kubernetes cluster. I use the Redis pub/sub function to communicate.
My app structure:
Backend: NodeJS
Frontend: ReactJS
MongoDB
Then I am trying to connect to Redis in Node.js. It runs on my localhost, but it does not run on my GKE cluster.
(NodeJS)
const redis = require('redis');
const REDISPORT = 6379;
const subscriber = redis.createClient(REDISPORT, redis);
Error while running on GKE cluster:
Error: Redis connection to 127.0.0.1:6379 failed - connect ECONNREFUSED 127.0.0.1:6379
I think this may be caused by the service connection. My Redis deployment and service are configured as below.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: car-redis-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: car-redis
  template:
    metadata:
      labels:
        app: car-redis
    spec:
      containers:
        - name: car-redis
          image: redis:latest
          ports:
            - containerPort: 6379
              name: http-port
---
apiVersion: v1
kind: Service
metadata:
  name: car-redis-service
spec:
  ports:
    - port: 6379
      protocol: TCP
      targetPort: 6379
  selector:
    app: car-redis
  type: NodePort
It works on localhost because the host might already have Redis running, so your Node app looks for 127.0.0.1:6379 and is able to connect to it without any errors.
Coming to k8s, your deployed application is looking for Redis within its own container, so you are getting an error there.
On GKE or any cloud, you need to configure your Node application with the particular host IP or URL on which your Redis application is running. As I can see, you are already running Redis in your k8s cluster; if it is the same cluster as your Node application deployment, you can directly connect to it through the Service, using a name of the form <service-name>.<namespace-name>.svc.cluster.local (see: How to communicate between namespaces?).
From your example, make sure your Node app supports a Redis host/URL instead of just a port, and point it at car-redis-service.default.svc.cluster.local:6379; it should then work without any issues. A small sketch follows below.
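For instance, keeping the createClient(port, host) call style from the question (a sketch; the default namespace is an assumption):

const redis = require('redis');

const REDISPORT = 6379;
// Cluster DNS name of the Redis Service, instead of the implicit 127.0.0.1
const REDISHOST = 'car-redis-service.default.svc.cluster.local';

const subscriber = redis.createClient(REDISPORT, REDISHOST);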

k8s not able to reach the database

Here is my Dockerfile:
FROM mcr.microsoft.com/dotnet/core/aspnet:2.2-alpine3.8 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
COPY ./xyz/publish .
ENV ASPNETCORE_URLS="https://+:443;http://+:80"
ENTRYPOINT ["dotnet", "abc/xyz.dll"]
Here is my Deployment.yaml file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: xyzdemo
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      papi: web
  template:
    metadata:
      labels:
        papi: web
    spec:
      containers:
        - name: xyzdemo-site
          image: xyz.azurecr.io/abc:31018
          ports:
            - containerPort: 443
      imagePullSecrets:
        - name: secret
---
apiVersion: v1
kind: Service
metadata:
  name: xyzdemo-entrypoint
  namespace: default
spec:
  type: LoadBalancer
  selector:
    papi: web
  ports:
    - port: 44328
      targetPort: 443
Here is my appsettings file
"Server": "xyz.database.windows.net",
"Database": "pp",
"User": "ita",
"Password": "password",
Using all of these, I deployed the application into the k8s cluster, and I am able to open the application from the browser. However, when I try to get the info from the database, the application gets a network-related error after a while:
System.Data.SqlClient.SqlException (0x80131904): A network-related or instance-specific error occurred while establishing a connection to SQL Server
I went inside the pod and ran the ls command; I can see my application settings file, and when I cat the application settings I can see the correct credentials. I don't know what to do and am not sure why it is not able to connect to the database.
So finally I tried adding the SQL connection settings as env variables to the pod, and then it started working. When I remove them, it does not connect.
Now I removed the env variables with the SQL connection settings and checked the log on the pod.
It says it can't connect to the database: 'Empty' and server: 'Empty'.
I am not sure why it picks up empty values when the details are inside the appsettings.json file.
Well, I do not see the config your k8s application uses to connect to the database. Importantly, where is your database hosted? How does papi: web connect to the database?
I also suspect your service does not have appropriate port redirection. From your service.yaml above, the HTTPS port 443 is internally mapped to 44328. What is 44328? What is listening on that port? Your application seems to have no mention of 44328 (refer to the Dockerfile).
I would improvise your service.yaml to look something like this:
apiVersion: v1
kind: Service
metadata:
  name: xyzdemo-entrypoint
  namespace: default  # this is inferred anyway
spec:
  selector:
    papi: web
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: xxxx  # where your web server is listening (from your Dockerfile this is also 80, but it can be any valid TCP port)
    - name: https
      protocol: TCP
      port: 443
      targetPort: xxxx  # https for your web server (from your Dockerfile this is also 443; again, it can be any TCP port)
Opening up a database server to the internet is not good practice; it's a big security threat. The good pattern is to have your web server communicate with the database server via the internal DNS that k8s maintains (this assumes your database server is also a container, something like KubeDB; if not, your database server will have to be available via some sort of proxy that whitelists known hosts and only allows known hosts, e.g. the Cloud SQL Proxy in GCP).
Depending on how your database server is hosted, you will have to configure your db config to allow or whitelist your containerized application (the IP you get after applying service.yaml). Only then will your k8s app be able to achieve connectivity to the respective db.
I suspect you need to allow connections in the Azure SQL firewall for this to work. Using the portal would be the easiest way. You can just allow all, or allow Azure services for starters (assuming your Kubernetes is inside Azure), and narrow it down later (if this is the culprit).
https://learn.microsoft.com/en-us/azure/sql-database/sql-database-firewall-configure#use-the-azure-portal-to-manage-server-level-ip-firewall-rules
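For example, the "allow Azure services" option corresponds to the special 0.0.0.0 range and can also be set from the CLI (resource group and server names are placeholders):

az sql server firewall-rule create \
  --resource-group my-rg \
  --server xyz \
  --name AllowAzureServices \
  --start-ip-address 0.0.0.0 \
  --end-ip-address 0.0.0.0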
