How to connect Redis with service in NodeJS in K8s cluster? - node.js

I am developing an application (car-app) that uses socket.io, and I am now going to deploy it to a Kubernetes cluster. I use the Redis pub/sub function to communicate.
My app structure:
Backend: NodeJS
Frontend: ReactJS
Database: MongoDB
Then I am trying to connect to Redis from NodeJS. It runs on my localhost, but it does not work on my GKE cluster.
(NodeJS)
const redis = require('redis');
const REDISPORT = 6379;
const subscriber = redis.createClient(REDISPORT, redis);
Error while running on GKE cluster:
Error: Redis connection to 127.0.0.1:6379 failed - connect
ECONNREFUSED 127.0.0.1:6379
I think this may be caused by the service connection. My Redis deployment and service are configured as below.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: car-redis-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: car-redis
    spec:
      containers:
      - name: car-redis
        image: redis:latest
        ports:
        - containerPort: 6379
          name: http-port
  selector:
    matchLabels:
      app: car-redis
---
apiVersion: v1
kind: Service
metadata:
  name: car-redis-service
spec:
  ports:
  - port: 6379
    protocol: TCP
    targetPort: 6379
  selector:
    app: car-redis
  type: NodePort

It works on localhost because the host is probably already running Redis, so your Node app looks for 127.0.0.1:6379 and is able to connect to it without any errors.
On Kubernetes, your deployed application looks for Redis within its own container, and nothing is listening there, hence the error.
On GKE, or any cloud, you need to configure your Node application with the actual host (IP or URL) on which Redis is running. Since you are already running Redis in your Kubernetes cluster, and it is the same cluster as your Node application's deployment, you can connect to it directly through its service:
<service-name>.<namespace-name>.svc.cluster.local (see How to communicate between namespaces?)
From your example, make sure your Node app accepts a Redis host/URL instead of just a port, and point it at car-redis-service.default.svc.cluster.local:6379; it should then work without any issues.
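A minimal sketch of that change, assuming node-redis v3 (which accepts host and port as separate arguments) and that the service lives in the default namespace:

const redis = require('redis');

// The service DNS name replaces the implicit 127.0.0.1
const REDIS_HOST = 'car-redis-service.default.svc.cluster.local';
const REDIS_PORT = 6379;

const subscriber = redis.createClient(REDIS_PORT, REDIS_HOST);
subscriber.on('error', (err) => console.error('Redis error:', err));

// With node-redis v4+ you would pass a single URL and call connect():
// const subscriber = redis.createClient({ url: `redis://${REDIS_HOST}:${REDIS_PORT}` });
// await subscriber.connect();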

Related

Error Connecting to ejabberd running on kubernetes from node.js

I'm trying to create a chat application to enhance my portfolio.
For it I'm using XMPP as my messaging protocol, so I'm running ejabberd on Kubernetes with the following configuration:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ejabberd-depl
  labels:
    app: ejabberd
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ejabberd
  template:
    metadata:
      labels:
        app: ejabberd
    spec:
      containers:
      - name: ejabberd
        image: ejabberd/ecs
        ports:
        - containerPort: 5280
        - containerPort: 5222
        - containerPort: 5269
---
apiVersion: v1
kind: Service
metadata:
  name: ejabberd-srv
spec:
  selector:
    app: ejabberd
  ports:
  - name: ejabberd
    protocol: TCP
    port: 5222
    targetPort: 5222
  - name: admin
    protocol: TCP
    port: 5280
    targetPort: 5280
  - name: encrypted
    protocol: TCP
    port: 5269
    targetPort: 5269
I'm exposing ejabberd to the rest of my app using the service.
I connect to ejabberd using the package "@xmpp/client":

import { client, xml, jid } from '@xmpp/client'

const xmpp = client({
  service: "wss://ejabberd-srv:5222/xmpp-websocket",
  domain: "ejabberd-srv",
  username: "username",
  password: "password",
})

xmpp.on('online', () => {
  console.log("connected to xmpp server");
})

xmpp.on('error', (err) => {
  console.log("Connected but error");
})

xmpp.start().catch(() => console.log("error in xmpp start"));
When I run the app, the node.js side keeps giving me errors saying "SSL wrong version number /deps/openssl/ssl/record/ssl3_record.c:354:".
But when I check the logs of ejabberd, it plainly says "Accepted connection".
I know almost nothing about Kubernetes and Node.js, but I have experience with ejabberd and Docker, so maybe I can give a useful hint:
Looking at the configuration you showed, and assuming you use the default ejabberd config from https://github.com/processone/docker-ejabberd/blob/master/ecs/conf/ejabberd.yml#L38, that configuration file says that:
port 5222 is used for XMPP C2S connections,
port 5280 is used for HTTP connections,
and port 5443 is used for HTTPS, including WebSocket, BOSH, web admin...
service: "wss://ejabberd-srv:5222/xmpp-websocket",
In your case, if your client uses WebSocket to connect to ejabberd, you should open port 5443 in your Kubernetes service and tell your client to use a URL like "wss://ejabberd-srv:5443/ws".
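A sketch of that client change, assuming port 5443 has been added to the ejabberd-srv service and that the container's default config serves WebSocket at /ws:

import { client } from '@xmpp/client'

const xmpp = client({
  // 5443 is the HTTPS/WebSocket listener in the default ecs config
  service: "wss://ejabberd-srv:5443/ws",
  domain: "ejabberd-srv",
  username: "username",
  password: "password",
})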

Unable to access my application from outside kubernetes cluster

I have deployed a Kubernetes cluster on the Azure platform. My application image is hosted in an Azure Docker registry, and I deployed that image into my Kubernetes cluster. After deploying, the pods are running fine, but I was not able to access the application from outside (Postman/talentapi). An Nginx ingress controller is installed inside the cluster, but I am still not able to access the application and get a no-response error. Our application uses 2 different ports (5000 & 7003): port 7003 is used to connect to the application inside Docker, and port 5000 is used to connect from outside Docker. I'm sharing my service.yaml file here:
kind: Service
apiVersion: v1
metadata:
  name: kyc-service
  namespace: kyc
spec:
  #type: ClusterIP
  selector:
    app: kyc-app
  ports:
  - name: http
    protocol: TCP
    port: 5000
    targetPort: 5000
  - name: kycapp
    protocol: TCP
    port: 7003
    targetPort: 7003

Kafka connect - Failed to connect to localhost port 8083: Connection refused

I have an application that relies on a Kafka service.
With Kafka Connect, I'm getting an error when trying to curl localhost:8083 on the Linux VM that's running the Kubernetes pod for Kafka Connect.
curl -v localhost:8083 gives:
Rebuilt URL to: localhost:8083/
Trying 127.0.0.1...
connect to 127.0.0.1 port 8083 failed: Connection refused
Failed to connect to localhost port 8083: Connection refused
Closing connection 0
curl: (7) Failed to connect to localhost port 8083: Connection refused
kubectl get po -o wide for my Kubernetes namespace shows the Kafka Connect pod running, and there's nothing suspicious in the logs for the pod.
When I check open ports using sudo lsof -i -P -n | grep LISTEN, I don't see 8083 listed.
There's a Kubernetes manifest that I think was probably used to set up the Kafka Connect service; these are the relevant parts. I'd really appreciate any advice about how to figure out why I can't curl localhost:8083.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-connect
  namespace: my-namespace
spec:
  ...
  template:
    metadata:
      labels:
        app: connect
    spec:
      containers:
      - name: kafka-connect
        image: confluentinc/cp-kafka-connect:3.0.1
        ports:
        - containerPort: 8083
        env:
        - name: CONNECT_REST_PORT
          value: "8083"
        - name: CONNECT_REST_ADVERTISED_HOST_NAME
          value: "kafka-connect"
      volumes:
      - name: connect-plugins
        persistentVolumeClaim:
          claimName: pvc-connect-plugin
      - name: connect-helpers
        secret:
          secretName: my-kafka-connect-config
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-connect
  namespace: my-namespace
  labels:
    app: connect
spec:
  ports:
  - port: 8083
  selector:
    app: connect
You can't connect to a service running inside your cluster from outside the cluster without a little bit of tinkering.
You have three possible solutions:
1. Use a service of type NodePort or LoadBalancer to make the service reachable outside the cluster. See the services and kubectl expose documentation. Be aware that, depending on your environment, this may expose the service to the internet.
2. Access it using the proxy verb (see here). This only works for HTTP/HTTPS. Use this if your service is not secure enough to be exposed to the internet.
3. Access it from a pod running inside your cluster. As you have noticed in the comments, you can curl from inside the pod. You can also do this from any other pod running in the same cluster; pods can communicate with each other without any additional configuration.
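Option 3 can be checked, for example, with a few lines of Node run from any other pod in the cluster (a sketch; the service name and namespace are taken from the manifest above):

const http = require('http');

// Inside the cluster, the service DNS name resolves and port 8083 is reachable.
http.get('http://kafka-connect.my-namespace.svc.cluster.local:8083/', (res) => {
  console.log('Kafka Connect REST status:', res.statusCode);
}).on('error', (err) => console.error('request failed:', err.message));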
Why can I not curl 8083 when I SSH onto the VM?
Pods and services are not reachable from outside the cluster if they are not exposed using the aforementioned methods (point 1 or 2).
Why isn't the port exposed on the host VM that has the pods?
It's not exposed on your VM; it's exposed inside your cluster.
I would strongly recommend going through the Cluster Networking documentation to learn more.

Websocket connection fails for internal communication within a Kubernetes container

I am using Kubernetes to deploy my React application. Due to the database I am using (RethinkDB), I have to initiate a WebSocket connection between my React application and a Node.js server that proxies to the database instance. The connection works as intended when I deploy the database instance, backend Node server, and React application on my local machine. However, when I deploy the application on Kubernetes, I receive the error
WebSocket connection to 'ws://localhost:8015/' failed: Error in connection establishment: net::ERR_CONNECTION_REFUSED
within my react application.
For further debugging, I opened a terminal within the container and ran a curl command against the Service that connects to the database, and received no errors. I also ran the backend Node server to see whether it connects to the remote database and saw no issues there. Finally, I tested whether the backend server initiates the WebSocket as intended using wscat, and the WebSocket connection is working. The fact that the application runs well on my local machine leads me to believe that the React application's trouble connecting to the WebSocket could be caused by how Kubernetes handles WebSocket connections. Any clarification on the issue is gladly appreciated.
P.S.
I have added the backend server code, the code within the React application that connects to the WebSocket, and the YAML files of my React + backend deployment. If any more files are required, please feel free to comment.
backend node server
const http = require('http');
var rethinkdbWebsocketServer = require('rethinkdb-websocket-server');

const httpServer = http.createServer();

rethinkdbWebsocketServer.listen({
  httpServer: httpServer,
  httpPath: '/',
  dbHost: remoteDB_IP,
  dbPort: 28015,
  unsafelyAllowAnyQuery: true
});

httpServer.listen(8015);
React code that connects to the Websocket
ReactRethinkdb.DefaultSession.connect({
  host: 'localhost', // the websocket server
  port: process.env.REACT_APP_WEBSOCKET_PORT,
  path: '/',
  secure: false,
  autoReconnectDelayMs: 2000, // when disconnected, millis to wait before reconnect
});
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dashboard
  labels:
    app: dashboard
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dashboard
  template:
    metadata:
      labels:
        app: dashboard
    spec:
      containers:
      - name: dashboard
        image: myrepor/dashbaord
        imagePullPolicy: Always
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: dashboard
spec:
  selector:
    app: dashboard
  ports:
  - port: 3000
    targetPort: 3000
  type: LoadBalancer
The Kubernetes idea is to run each application in a separate pod and to connect them through services.
This way you can deploy each layer separately without downtime: a service routes to the old pod(s) until the new ones are up and running (and, even more importantly, you can scale each layer separately).
Kubernetes does some kind of name resolution inside each cluster; I'm not exactly sure whether it is a full DNS or not.
Thus I would recommend:
Separate your Node server into its own pod, and deploy a service for it with your WS port (8015).
Run your React app in a separate pod with its own service, and define the Node server's service name as the endpoint for the WebSocket, as sketched below.
The reason is simple: it is not even certain that localhost will resolve correctly within the pod.
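A sketch of that change in the React code, assuming the Node server's service is named ws-server (a hypothetical name) and exposes port 8015; the service name resolves only where cluster DNS is available:

ReactRethinkdb.DefaultSession.connect({
  host: 'ws-server', // the Node server's service name instead of localhost
  port: 8015,
  path: '/',
  secure: false,
  autoReconnectDelayMs: 2000, // when disconnected, millis to wait before reconnect
});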

k8s not able to reach the database

Here is my Dockerfile:
FROM mcr.microsoft.com/dotnet/core/aspnet:2.2-alpine3.8 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
COPY ./xyz/publish .
ENV ASPNETCORE_URLS=https://+:443;http://+80
ENTRYPOINT ["dotnet","abc/xyz.dll"]
Here is my Deployment.yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: xyzdemo
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      papi: web
  template:
    metadata:
      labels:
        papi: web
    spec:
      containers:
      - name: xyzdemo-site
        image: xyz.azurecr.io/abc:31018
        ports:
        - containerPort: 443
      imagePullSecrets:
      - name: secret
---
apiVersion: v1
kind: Service
metadata:
  name: xyzdemo-entrypoint
  namespace: default
spec:
  type: LoadBalancer
  selector:
    papi: web
  ports:
  - port: 44328
    targetPort: 443
Here is my appsettings file:
"Server": "xyz.database.windows.net",
"Database": "pp",
"User": "ita",
"Password": "password",
Using all of these I deployed the application into the k8s cluster, and I am able to open the application from the browser. However, when I try to get info from the database, the application gets a network-related error after a while:
System.Data.SqlClient.SqlException (0x80131904): A network-related or
instance-specific error occurred while establishing a connection to
SQL Server
I went inside the pod and ran the ls command; I can see my application settings file, and when I cat the application settings I can see the correct credentials, so I don't know what to do and am not sure why it is not able to connect to the database.
So finally I tried adding the SQL connection settings as environment variables to the pod, and then it started working. When I remove them, it does not connect.
When I remove the environment variables that hold the SQL connection and check the pod's log, it says it can't connect to the database: 'Empty' and server: 'Empty'.
I am not sure why it picks up empty values when the details are inside the appsettings.json file.
Well, I do not see the config your k8s application uses to connect to the database. Importantly, where is your database hosted? How does papi: web connect to the database?
I also suspect your service does not have appropriate port redirection. From your service.yaml above, the HTTPS port 443 is exposed externally as port 44328. What is 44328? What is listening on that port? Your application makes no mention of 44328 (refer to the Dockerfile).
I would revise your service.yaml to look something like this:
apiVersion: v1
kind: Service
metadata:
  name: xyzdemo-entrypoint
  namespace: default  # This is inferred anyway
spec:
  selector:
    papi: web
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: xxxx  # Where your web server is listening. (From your Dockerfile this is 80, but it can be any valid TCP port.)
  - name: https
    protocol: TCP
    port: 443
    targetPort: xxxx  # HTTPS for your web server. (From your Dockerfile this is 443. Again, it can be any TCP port.)
Opening up a database server to the internet is not good practice; it's a big security threat. The good pattern is to have your web server communicate with the database server via the internal DNS that k8s maintains (this assumes your database server is also a container, something like KubeDB; if not, your database server will have to be reachable via some sort of proxy that whitelists known hosts, e.g. the Cloud SQL proxy in GCP).
Depending on how your database server is hosted, you'll have to configure your DB to allow or whitelist your containerised application (the IP you get after applying service.yaml). Only then will your k8s app be able to reach the respective DB.
I suspect you need to allow connections in the Azure SQL firewall for this to work. Using the portal is the easiest way: you can allow all, or allow Azure services for starters (assuming your Kubernetes cluster is inside Azure), and narrow it down later (if this is the culprit).
https://learn.microsoft.com/en-us/azure/sql-database/sql-database-firewall-configure#use-the-azure-portal-to-manage-server-level-ip-firewall-rules
