Error connecting to ejabberd running on Kubernetes from Node.js

I'm trying to create a chat application to enhance my portfolio. I'm using XMPP as the messaging protocol, so I'm running ejabberd on Kubernetes with the following configuration:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ejabberd-depl
  labels:
    app: ejabberd
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ejabberd
  template:
    metadata:
      labels:
        app: ejabberd
    spec:
      containers:
        - name: ejabberd
          image: ejabberd/ecs
          ports:
            - containerPort: 5280
            - containerPort: 5222
            - containerPort: 5269
---
apiVersion: v1
kind: Service
metadata:
  name: ejabberd-srv
spec:
  selector:
    app: ejabberd
  ports:
    - name: ejabberd
      protocol: TCP
      port: 5222
      targetPort: 5222
    - name: admin
      protocol: TCP
      port: 5280
      targetPort: 5280
    - name: encrypted
      protocol: TCP
      port: 5269
      targetPort: 5269
I'm exposing ejabberd to the rest of my app using the Service, and connecting to it with the package "@xmpp/client":
import { client, xml, jid } from '@xmpp/client'

const xmpp = client({
  service: "wss://ejabberd-srv:5222/xmpp-websocket",
  domain: "ejabberd-srv",
  username: "username",
  password: "password",
})

xmpp.on('online', () => {
  console.log("connected to xmpp server");
})

xmpp.on('error', (err) => {
  console.log("Connected but error", err);
})

xmpp.start().catch(() => console.log("error in xmpp start"));
When I run the app, the Node.js side keeps giving me the error "SSL wrong version number /deps/openssl/ssl/record/ssl3_record.c:354:", but when I check the ejabberd logs they plainly say "Accepted connection".

I know almost nothing about Kubernetes and Node.js, but I have experience with ejabberd and Docker, so maybe I can give a useful hint:
Looking at the configuration you showed, and assuming you use the default ejabberd config from https://github.com/processone/docker-ejabberd/blob/master/ecs/conf/ejabberd.yml#L38, that configuration file says that:
port 5222 is used for XMPP C2S connections,
port 5280 for HTTP connections,
and port 5443 for HTTPS, including WebSocket, BOSH, and the web admin.
service: "wss://ejabberd-srv:5222/xmpp-websocket",
In your case, if your client uses WebSocket to connect to ejabberd, you should open port 5443 in Kubernetes and tell your client to use a URL like "wss://ejabberd-srv:5443/ws".
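As a minimal sketch of what the client side could look like after that change (it assumes you also add a ports entry for 5443 to ejabberd-depl and ejabberd-srv, and that TLS verification succeeds against the container's certificate, which a default self-signed one will not):

import { client } from '@xmpp/client'

// Assumes the Service now also maps port 5443 (ejabberd's HTTPS/WebSocket
// listener in the default ecs config) to the container.
const xmpp = client({
  service: "wss://ejabberd-srv:5443/ws",  // WebSocket endpoint, not the raw C2S port
  domain: "ejabberd-srv",
  username: "username",
  password: "password",
})

xmpp.on('error', (err) => console.error("xmpp error:", err))
xmpp.on('online', (address) => console.log("online as", address.toString()))

xmpp.start().catch((err) => console.error("error in xmpp start:", err))

The "SSL wrong version number" error fits this diagnosis: the client was speaking TLS to port 5222, which answers with plain XMPP, so OpenSSL rejects the first record while ejabberd still logs "Accepted connection".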

Related

How to connect Redis with service in NodeJS in K8s cluster?

I am developing an application (car-app) which uses socket.io, and now I am deploying it to a Kubernetes cluster, using the Redis pub/sub function to communicate.
My app structure:
Backend: NodeJS
Frontend: ReactJS
Database: MongoDB
I am trying to connect to Redis from NodeJS. It runs on my localhost, but it cannot run on my GKE cluster.
(NodeJS)
const redis = require('redis');
const REDISPORT = 6379;
const subscriber = redis.createClient(REDISPORT, redis);
Error while running on GKE cluster:
Error: Redis connection to 127.0.0.1:6379 failed - connect ECONNREFUSED 127.0.0.1:6379
I think this may be caused by the service connection; my redis deployment and service are configured below.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: car-redis-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: car-redis
  template:
    metadata:
      labels:
        app: car-redis
    spec:
      containers:
        - name: car-redis
          image: redis:latest
          ports:
            - containerPort: 6379
              name: http-port
---
apiVersion: v1
kind: Service
metadata:
  name: car-redis-service
spec:
  type: NodePort
  selector:
    app: car-redis
  ports:
    - port: 6379
      protocol: TCP
      targetPort: 6379
It works on localhost because the host probably already has redis running, so your node app looks for 127.0.0.1:6379 and is able to connect without any errors.
Coming to k8s, your deployed application is looking for redis inside its own container, so you are getting that error.
On GKE or any cloud, you need to configure your node application with the actual host or URL on which your redis instance is running. Since you are already running redis in your k8s cluster, and if it is the same cluster as your node application's deployment, you can connect to it directly through the service using a name like this:
<service-name>.<namespace-name>.svc.cluster.local (see also: How to communicate between namespaces?)
Make sure your node app accepts a full redis URL instead of just a port, then configure it with car-redis-service.default.svc.cluster.local:6379 and it should work without any issues.
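As a sketch with the node-redis client (assuming node-redis v4+, which takes a full URL, and that the Service runs in the default namespace; the channel name is hypothetical):

const redis = require('redis');

// Full in-cluster DNS name of the Service defined above.
const subscriber = redis.createClient({
  url: 'redis://car-redis-service.default.svc.cluster.local:6379'
});

subscriber.on('error', (err) => console.error('Redis error:', err));

(async () => {
  await subscriber.connect();  // v4 clients connect explicitly
  await subscriber.subscribe('car-events', (message) => {
    console.log('received:', message);  // 'car-events' is a hypothetical channel
  });
})();

Note that the original call redis.createClient(REDISPORT, redis) passes the redis module itself where a host string belongs, which is why the client silently falls back to 127.0.0.1.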

Azure AKS External Load Balancer Not Connecting to POD

I am trying to create a multi-container pod for a simple demo. I have an app that is built from Docker containers. There are 3 containers:
1 - redis server
2 - node/express microservice
3 - node/express/react front end
All 3 containers are deployed successfully and running.
I have created a public load balancer, which is running without any errors.
However, I cannot connect to the front end from the public IP.
I have also run tcpdump in the frontend container, and no traffic is getting in.
Here is the YAML file used to create the deployment and service:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydemoapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mydemoapp
  template:
    metadata:
      labels:
        app: mydemoapp
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
        - name: microservices-web
          image: mydemocr.azurecr.io/microservices_web:v1
          ports:
            - containerPort: 3001
        - name: redislabs-rejson
          image: mydemocr.azurecr.io/redislabs-rejson:v1
          ports:
            - containerPort: 6379
        - name: mydemoappwebtest
          image: mydemocr.azurecr.io/jsonformwebtest:v1
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: mydemoappservice
spec:
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
  selector:
    app: mydemoapp
This is what a describe of my service looks like:
Name: mydemoappservice
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"mydemoappservice","namespace":"default"},"spec":{"ports":[{"p...
Selector: app=mydemoapp
Type: LoadBalancer
IP: 10.0.104.159
LoadBalancer Ingress: 20.49.172.10
Port: <unset> 80/TCP
TargetPort: 3000/TCP
NodePort: <unset> 31990/TCP
Endpoints: 10.244.0.17:3000
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 24m service-controller Ensuring load balancer
Normal EnsuredLoadBalancer 24m service-controller Ensured load balancer
One more weirdness: when I run the front-end container with Docker locally, I can get a shell and run curl localhost:3000 and get some output, but when I do the same in the Azure container I get the following response after some delay:
curl: (52) Empty reply from server
Why this container works on my machine and not in Azure is another layer to the mystery.
Referring to the docs here, the container needs to listen on 0.0.0.0 instead of 127.0.0.1, because:
any port which is listening on the default 0.0.0.0 address inside a
container will be accessible from the network.
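As a sketch, assuming the front end is a plain Node.js HTTP server (the real app may use Express, but the bind address works the same way):

const http = require('http');

const server = http.createServer((req, res) => {
  res.writeHead(200);
  res.end('hello from the container\n');
});

// Bind to 0.0.0.0 so the Service's targetPort can reach the process;
// binding to 127.0.0.1 makes it reachable only from inside the pod itself,
// which matches the symptom: curl works in a local shell but not via the LB.
server.listen(3000, '0.0.0.0', () => console.log('listening on 3000'));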

ERR_NAME_NOT_RESOLVED: Angular pod not communicating with Python backend in Kubernetes

I have deployed an Angular frontend and a Python backend in Kubernetes via microk8s as separate pods, and they are running. I have given the backend URL as 'http://backend-service.default.svc.cluster.local:30007' in my Angular file in order to link the frontend with the backend, but this raises ERR_NAME_NOT_RESOLVED. Can someone help me understand the issue?
Also, I have a config file which specifies the IPs, ports and other configuration for my backend. Do I need to make any changes (value of database host? flask host? ports?) to that file before deploying it to Kubernetes?
Shown below are the deployment and service files for Angular and the backend.
apiVersion: v1
kind: Service
metadata:
  name: angular-service
spec:
  type: NodePort
  selector:
    app: angular
  ports:
    - protocol: TCP
      nodePort: 30042
      targetPort: 4200
      port: 4200
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: angular-deployment
  labels:
    name: angular
spec:
  replicas: 1
  selector:
    matchLabels:
      name: angular
  template:
    metadata:
      labels:
        name: angular
    spec:
      containers:
        - name: angular
          image: angular:local
          ports:
            - containerPort: 4200
---
apiVersion: v1
kind: Service
metadata:
  name: backend-service
spec:
  type: ClusterIP
  selector:
    name: backend
  ports:
    - protocol: TCP
      targetPort: 7000
      port: 7000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-deployment
  labels:
    name: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      name: backend
  template:
    metadata:
      labels:
        name: backend
    spec:
      containers:
        - name: backend
          image: flask:local
          ports:
            - containerPort: 7000
Is your cluster in a healthy state? DNS names are resolved by coredns in the kube-system namespace.
Classically, your Angular app calls your API URL from the user's browser, so the API must be exposed and public. That is not your case, and I have huge doubts about this. Can you show us your app architecture?
Moreover, if you expose your service through NodePort, you must not use the nodePort for internal access, because you never know which node you will hit.
When you expose a service, in-cluster apps need to use the port attribute (not the nodePort) to reach the pods behind it, as the sketch below illustrates.
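To make the distinction concrete, a hedged sketch (names and ports taken from the YAML above; which URL is valid depends on where the code runs):

// From another pod INSIDE the cluster (e.g. a Node.js gateway or
// server-side renderer), the ClusterIP name plus its `port` works:
const inClusterUrl = 'http://backend-service.default.svc.cluster.local:7000/';

// From the BROWSER, cluster DNS never resolves (hence ERR_NAME_NOT_RESOLVED);
// the backend Service above is ClusterIP-only, so the browser cannot reach it
// until it is exposed via NodePort, LoadBalancer, or an Ingress.

// Example in-cluster call (fetch is built into Node 18+):
fetch(inClusterUrl)
  .then((res) => res.text())
  .then((body) => console.log('backend said:', body))
  .catch((err) => console.error('backend unreachable:', err));

Also note that port 30007 in the question does not appear anywhere in the backend Service, which only defines port 7000.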

Not able to connect to an AKS Service Endpoint

Following on from my question here, I now have the issue that I am unable to connect to the external endpoint. My YAML file is here:
apiVersion: v1
kind: Pod
spec:
  containers:
    - name: dockertest20190205080020
      image: dockertest20190205080020.azurecr.io/dockertest
      ports:
        - containerPort: 443
metadata:
  name: my-test
  labels:
    app: app-label
---
kind: Service
apiVersion: v1
metadata:
  name: test-service
spec:
  selector:
    app: app-label
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 443
I can now see an external IP when I issue the command:
kubectl get service test-service --watch
However, if I try to connect to that IP I get a timeout exception. I've tried running the dashboard, and it says everything is running fine. What can I do next to diagnose this issue?
In this case, the problem was solved by exposing the container on port 80 and routing from external port 6666 to it.

How do I know which pod processes the request?

I used this Deployment.yml to create pods.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: price-calculation-deployment
  labels:
    app: price-calculation
spec:
  replicas: 2
  selector:
    matchLabels:
      app: price-calculation
  template:
    metadata:
      labels:
        app: price-calculation
    spec:
      containers:
        - name: price-calculation
          image: ${somewhere}/price-calculation:latest
          ports:
            - containerPort: 80
              protocol: TCP
        - name: pc-mongodb
          image: mongo:latest
          ports:
            - containerPort: 27017
              protocol: TCP
      imagePullSecrets:
        - name: acr-auth
And, later on, I used this Service.yml to expose the port externally:
apiVersion: v1
kind: Service
metadata:
  name: price-calculation-service
spec:
  type: LoadBalancer
  ports:
    - port: 5004
      targetPort: 80
      protocol: TCP
  selector:
    app: price-calculation
Finally, both are working now. Good.
As I configured a LoadBalancer in the Service.yml, there is a load balancer that dispatches requests to the 2 replicas/pods.
Now, I want to know which pod takes a given request, and how do I find that out?
Thanks!!!
Well, the easiest way is to make the pods write their identity into the response; that way you will know which pod responded. Another way is to implement distributed tracing with Zipkin/Jaeger, which will give you deep insight into networking flows.
I believe Kubernetes doesn't offer any sort of built-in network tracing.
Append the pod name to the response that gets rendered in the user's browser; that is how you know which pod processed the request.
You may view pod logs to see the requests made to the pod:
kubectl logs my-pod # dump pod logs (stdout)
kubectl logs my-pod -c my-container # dump pod container logs (stdout, multi-container case)
Or add it to the response in the application itself; for instance, in a Node.js app it might look like this:
const http = require('http');
const os = require('os');

// In a pod, os.hostname() returns the pod name, which identifies the replica.
var handler = function(request, response) {
  response.writeHead(200);
  response.end("You have hit " + os.hostname() + "\n");
};

var app = http.createServer(handler);
app.listen(8080);
Then you can use curl to test out your service and get a response:
Request:
curl http://serviceIp:servicePort
Response:
You have hit podName
Depending on the app's programming language, just find a module/library that provides a utility method to get the hostname, and you'll be good to go: return it in the response for debugging purposes.
