I'm trying to connect to a socket that is inside a container and deployed on Kubernetes.
Locally everything works fine, but when deployed it throws an error on connect. I have tried different options with no success.
Client code
const ENDPOINT = "https://traveling.dev/api/chat"; // this resolves to the Service endpoint where the socket server is running
const chatSocket = io(ENDPOINT, {
  rejectUnauthorized: false,
  forceNew: true,
  secure: false,
});
chatSocket.on("connect_error", (err) => {
  console.log(err);
  console.log(`connect_error due to ${err.message}`);
});
console.log("CS", chatSocket);
Server code
// imports assumed from usage below (not shown in the original snippet)
const express = require("express");
const cors = require("cors");
const http = require("http");
const { Server } = require("socket.io");

const app = express();
app.set("trust proxy", true);
app.use(cors());

const server = http.createServer(app);
const io = new Server(server, {
  cors: {
    origin: "*",
    methods: ["*"],
    allowedHeaders: ["*"],
  },
});

io.on("connection", (socket) => {
  console.log("Socket successfully connected with id: " + socket.id);
});

const start = async () => {
  server.listen(3000, () => {
    console.log("Started");
  });
};

start();
The code itself is probably not the problem, since it all works fine locally, but I have posted it anyway.
What can cause this when the app is containerized and deployed to Kubernetes?
And the console log just says server error
Error: server error
at Socket.onPacket (socket.js:397)
at XHR.push../node_modules/component-emitter/index.js.Emitter.emit (index.js:145)
at XHR.onPacket (transport.js:107)
at callback (polling.js:98)
at Array.forEach (<anonymous>)
at XHR.onData (polling.js:102)
at Request.push../node_modules/component-emitter/index.js.Emitter.emit (index.js:145)
at Request.onData (polling-xhr.js:232)
at Request.onLoad (polling-xhr.js:283)
at XMLHttpRequest.xhr.onreadystatechange (polling-xhr.js:187)
Does anyone have any suggestions on what may cause this and how to fix it?
Also, any idea on how to get more information about the error would be appreciated.
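One way I know of to get more detail (a general sketch, not specific to this setup): both the Socket.IO client and server log through the debug package, so verbose transport logging can be switched on without code changes.
// Browser console: enable verbose Socket.IO / Engine.IO client logs, then reload.
// This only affects logging, not behavior.
localStorage.debug = "socket.io-client:*,engine.io-client:*";

// Server side: start the process with the equivalent environment variable, e.g.
//   DEBUG=socket.io:*,engine:* node index.js
// to see the handshake and transport errors on that end as well.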
This is the YAML file that creates the Deployment and the Service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: chat-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: chat
  template:
    metadata:
      labels:
        app: chat
    spec:
      containers:
        - name: chat
          image: us.gcr.io/forward-emitter-321609/chat-service
---
apiVersion: v1
kind: Service
metadata:
  name: chat-srv
spec:
  selector:
    app: chat
  ports:
    - name: chat
      protocol: TCP
      port: 3000
      targetPort: 3000
I'm using an nginx load balancer on GKE whose IP address is mapped to traveling.dev.
This is what my Ingress routing config looks like:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, OPTIONS"
    nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "Access-Control-Allow-Origin: $http_origin";
spec:
  rules:
    - host: traveling.dev
      http:
        paths:
          - path: /api/chat/?(.*)
            backend:
              serviceName: chat-srv
              servicePort: 3000
          - path: /?(.*)
            backend:
              serviceName: client-srv
              servicePort: 3000
Thanks!
Nginx Ingress supports WebSocket proxying by default, but you need to configure it.
For this you need to add a custom configuration snippet annotation.
You can refer to this already answered Stack Overflow question:
Nginx ingress controller websocket support
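As a minimal sketch (assuming the ingress-nginx controller; the host, path, and service below are the ones from the question), the snippet sets the WebSocket upgrade headers, and the timeout annotations keep long-lived connections from being dropped:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection "upgrade";
spec:
  rules:
    - host: traveling.dev
      http:
        paths:
          - path: /api/chat/?(.*)
            backend:
              serviceName: chat-srv
              servicePort: 3000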
Related
I'm trying to run Socket.IO with multiple nodes using the Socket.IO MongoDB adapter, but I'm receiving 400 Bad Request when deploying on my Kubernetes cluster. Both my API and frontend are on the same domain. The service works fine in Firefox but fails in Google Chrome and Edge.
Browser Error Image
Server
const server = app.listen(port, () => {
  console.log(`Server running on port ${port}`);
});

const io = require("socket.io")(server, {
  path: "/appnotification/socket.io",
  cors: {
    origin: ["http://localhost:3000", process.env.SETTYL_LIVE],
    methods: ["GET", "POST"],
    transports: ["websocket", "polling"],
    credentials: true,
  },
});

mongoConnect(async () => {
  const client = await getDbClient();
  const mongoCollection = client
    .db("notificationengine_prod")
    .collection("socket_events");
  await mongoCollection.createIndex(
    { createdAt: 1 },
    { expireAfterSeconds: 3600, background: true }
  );
  io.adapter(
    createAdapter(mongoCollection, {
      addCreatedAtField: true,
    })
  );
  io.on("connect", (socket) => sockets(socket, io));
  processConsumer(io);
});
Client
const socket = io.connect(process.env.SOCKET_SERVER_URL, {
  path: "/appnotification/socket.io",
  withCredentials: true,
});
Kubernetes Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
    nginx.ingress.kubernetes.io/session-cookie-path: /?(.*)
    nginx.ingress.kubernetes.io/session-cookie-samesite: None
    nginx.ingress.kubernetes.io/session-cookie-secure: "true"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/websocket-services: notificationhandlerconsumer
spec:
  rules:
    - host: sample.golive.com
      http:
        paths:
          - backend:
              service:
                name: notificationhandlerconsumer
                port:
                  number: 8000
            path: /appnotification/?(.*)
            pathType: Prefix
Socket Deployment Yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "notificationhandlerconsumer"
spec:
  replicas: 3
  selector:
    matchLabels:
      app: "notificationhandlerconsumer"
  template:
    metadata:
      labels:
        app: "notificationhandlerconsumer"
    spec:
      containers:
        - name: "notificationhandlerconsumer"
          image: "sample.azurecr.io/notificationhandlerconsumer"
          ports:
            - containerPort: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: "notificationhandlerconsumer"
  labels:
    app: "notificationhandlerconsumer"
spec:
  type: ClusterIP
  ports:
    - port: 8000
      targetPort: 8000
      protocol: TCP
      name: http
  selector:
    app: "notificationhandlerconsumer"
Apologies in advance for such a long question, I just want to make sure I cover everything...
I have a react application that is supposed to connect to a socket being run in a service that I have deployed to kubernetes. The service runs and works fine. I am able to make requests without any issue but I cannot connect to the websocket running in the same service.
I am able to connect to the websocket when I run the service locally and use the localhost URI.
My express service's server.ts file looks like:
import "dotenv/config";
import * as packageJson from "./package.json"
import service from "./lib/service";
const io = require("socket.io");
const PORT = process.env.PORT;
const server = service.listen(PORT, () => {
console.info(`Server up and running on ${PORT}...`);
console.info(`Environment = ${process.env.NODE_ENV}...`);
console.info(`Service Version = ${packageJson.version}...`);
});
export const socket = io(server, {
cors: {
origin: process.env.ACCESS_CONTROL_ALLOW_ORIGIN,
methods: ["GET", "POST"]
}
});
socket.on('connection', function(skt) {
console.log('User Socket Connected');
socket.on("disconnect", () => console.log(`${skt.id} User disconnected.`));
});
export default service;
When I run this, PORT is set to 8088 and ACCESS_CONTROL_ALLOW_ORIGIN is set to *. Note that I'm using a RabbitMQ cluster that is deployed to Kubernetes; the Rabbit connection URI is the same one I use when running locally, and RabbitMQ is NOT running on my local machine, so I know it's not an issue with my Rabbit deployment. It has to be something I'm doing wrong in connecting to the socket.
When I run the service locally, I'm able to connect in the react application with the following:
const io = require("socket.io-client");
const socket = io("ws://localhost:8088", { path: "/socket.io" });
And I see the "User Socket Connected" message and it all works as I expect.
When I deploy the service to Kubernetes though, I'm having some issues figuring out how to connect to the socket.
My Kubernetes Service:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 8088
  selector:
    app: my-service
My deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: project
          image: my-private-registry.com
          ports:
            - containerPort: 8088
      imagePullSecrets:
        - name: mySecret
And finally, my ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-service-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/enable-cors: "true"    # Just added this to see if it helped
    nginx.ingress.kubernetes.io/cors-allow-origin: "*" # Just added this to see if it helped
    nginx.ingress.kubernetes.io/cors-allow-methods: PUT, GET, POST, OPTIONS, DELETE, PATCH # Just added this to see if it helped
spec:
  tls:
    - hosts:
        - my.host.com
      secretName: my-service-tls
  rules:
    - host: "my.host.com"
      http:
        paths:
          - pathType: Prefix
            path: "/project"
            backend:
              service:
                name: my-service
                port:
                  number: 80
I can connect to the service fine and get data, post data, etc., but I cannot connect to the websocket; I get either 404 or CORS errors.
Since the service is running on my.host.com/project, I assume that the socket is at the same uri. So I try to connect with:
const socket = io("ws://my.host.com", { path: "/project/socket.io" });
and also using wss://
const socket = io("wss://my.host.com", { path: "/project/socket.io" });
and I have an error being logged in the console:
socket.on("connect_error", (err) => {
console.log(`connect_error due to ${err.message}`);
});
both result in
polling-xhr.js?d33e:198 GET https://my.host.com/project/?EIO=4&transport=polling&t=NjWQ8Tc 404
websocket.ts?25e3:14 connect_error due to xhr poll error
I have tried all of the following and none of them work:
const socket = io("ws://my.host.com", { path: "/socket.io" });
const socket = io("wss://my.host.com", { path: "/socket.io" });
const socket = io("ws://my.host.com", { path: "/project" });
const socket = io("wss://my.host.com", { path: "/project" });
const socket = io("ws://my.host.com", { path: "/" });
const socket = io("wss://my.host.com", { path: "/" });
const socket = io("ws://my.host.com");
const socket = io("wss://my.host.com");
Again, this works when the service is run locally, so I must have missed something and any help would be extremely appreciated.
Is there a way to go on the Kubernetes pod and find where rabbit is being broadcast to?
In case somebody stumbles on this in the future and wants to know how to fix it, it turns out it was a really dumb mistake on my part.
In:
export const socket = io(server, {
  cors: {
    origin: process.env.ACCESS_CONTROL_ALLOW_ORIGIN,
    methods: ["GET", "POST"]
  },
});
I just needed to add path: "/project/socket.io" to the socket options, which makes sense.
And then, in case anybody runs into the issue that followed: I was getting a 400 error on the POST for the websocket polling transport, so I set transports: [ "websocket" ] in my socket.io-client options and that seemed to fix it. The socket is now working and I can finally move on!
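Put together, a minimal sketch of the setup described above (the host, ingress prefix, and environment variable names are the ones from this question):
// server.ts: the Socket.IO path has to include the ingress prefix.
export const socket = io(server, {
  path: "/project/socket.io",
  cors: {
    origin: process.env.ACCESS_CONTROL_ALLOW_ORIGIN,
    methods: ["GET", "POST"]
  },
});

// React client: connect through the ingress host with the same path, and skip
// the long-polling transport that was returning 400 behind the load balancer.
const clientSocket = io("wss://my.host.com", {
  path: "/project/socket.io",
  transports: ["websocket"],
});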
I am unable to connect to MongoDB Atlas from a Kubernetes pod. I have tried almost everything available on the internet, but no luck.
YAML file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: weare-auth-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: weare-auth
  template:
    metadata:
      labels:
        app: weare-auth
    spec:
      containers:
        - name: weare-auth
          image: <docker-username>/weare-auth
          env:
            - name: PORT
              value: "3001"
      restartPolicy: Always
      dnsPolicy: Default
---
apiVersion: v1
kind: Service
metadata:
  name: weare-auth-srv
spec:
  selector:
    app: weare-auth
  ports:
    - name: weare-auth
      protocol: TCP
      port: 3001
      targetPort: 3001
and here's my express code
import mongoose from "mongoose";
import { app } from "./app";

const start = async () => {
  try {
    await mongoose.connect(
      "mongodb://<username>:<password>#test-clus.wpg0x.mongodb.net/auth",
      {
        useNewUrlParser: true,
        useUnifiedTopology: true,
        useCreateIndex: true,
      }
    );
    console.log("Connected to mongodb");

    const PORT = process.env.PORT || 3001;
    app.listen(PORT, () => {
      console.log(`Listening on port ${PORT}`);
    });
  } catch (err) {
    console.error(err);
  }
};

start();
Here are the logs
Find the screenshot of error logs here
I've masked the credentials. I am able to connect to the MongoDB Atlas cluster via the shell and Robo 3T. I also tried setting the dnsPolicy as mentioned in the post, but no luck.
Any idea what I am missing here?
I am creating a proxy server with Node.js and want to proxy requests coming to one pod on to other pods in a single-node Kubernetes cluster. This is my Node.js code:
const express = require("express");
const httpProxy = require("express-http-proxy");

const app = express();
const serviceone = httpProxy("serviceone:3000"); // I am using service names here but it's not working

// Authentication
app.use((req, res, next) => {
  // TODO: my authentication logic
  console.log("Always.....");
  next();
});

app.get("/", (req, res, next) => {
  res.json({ message: "Api Gateway Working" });
});

// Proxy request
app.get("/:data/api", (req, res, next) => {
  console.log("Request Received");
  serviceone(req, res, next);
});

app.listen(5000, () => {
  console.log("Api Gateway Running");
});
And these are my services.yml files
apiVersion: apps/v1
kind: Deployment
metadata:
  name: serviceone
  labels:
    app: serviceone
spec:
  replicas: 1
  selector:
    matchLabels:
      app: serviceone
  template:
    metadata:
      labels:
        app: serviceone
    spec:
      containers:
        - name: serviceone
          image: swa/serviceone
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: serviceone
spec:
  selector:
    app: serviceone
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000
      nodePort: 31000
  type: LoadBalancer
What name should I use in the http proxy so that it can proxy requests?
I have tried serviceone:3000, http://serviceone:3000, and http://localhost:3000 (which won't work for different pods). Any help would be really appreciated.
Edit - Node js apigateway pod
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apigateway
  labels:
    app: apigateway
spec:
  replicas: 1
  selector:
    matchLabels:
      app: apigateway
  template:
    metadata:
      labels:
        app: apigateway
    spec:
      containers:
        - name: apigateway
          image: swa/apigateway3
          ports:
            - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: apigateway
spec:
  selector:
    app: apigateway
  ports:
    - protocol: TCP
      port: 5000
      targetPort: 5000
      nodePort: 31005
  type: LoadBalancer
In my Node.js application I changed the URL to
const serviceone = httpProxy('serviceone.default.svc.cluster.local:3000');
by following this link: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#services, but still no luck.
I am getting the error
Error: getaddrinfo EAI_AGAIN servicethree.sock-shop.svc.cluster.local
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:66:26)
I have a kubernetes cluster and an express server serving a SPA. Currently if I hit the http version of my website I do not redirect to https, but I would like to.
This is what I've tried -
import express from "express";

const PORT = 3000;
const path = require("path");

const app = express();
const router = express.Router();

const forceHttps = function(req, res, next) {
  const xfp =
    req.headers["X-Forwarded-Proto"] || req.headers["x-forwarded-proto"];
  if (xfp === "http") {
    console.log("host name");
    console.log(req.hostname);
    console.log(req.url);
    const redirectTo = `https://${req.hostname}${req.url}`;
    res.redirect(301, redirectTo);
  } else {
    next();
  }
};

app.get("/*", forceHttps);

// root (/) should always serve our server rendered page
// other static resources should be served as they are
const root = path.resolve(__dirname, "..", "build");
app.use(express.static(root, { maxAge: "30d" }));

app.get("/*", function(req, res, next) {
  if (
    req.method === "GET" &&
    req.accepts("html") &&
    !req.is("json") &&
    !req.path.includes(".")
  ) {
    res.sendFile("index.html", { root });
  } else {
    next();
  }
});

// tell the app to use the above rules
app.use(router);

app.listen(PORT, error => {
  console.log(`listening on ${PORT} from the server`);
  if (error) {
    console.log(error);
  }
});
This is what my kubernetes config looks like
apiVersion: v1
kind: Service
metadata:
  name: <NAME>
  labels:
    app: <APP>
  annotations:
    # Note that the backend talks over HTTP.
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: <CERT>
    # Only run SSL on the port named "https" below.
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
spec:
  type: LoadBalancer
  selector:
    app: <APP>
  ports:
    - port: 443
      targetPort: 3000
      protocol: TCP
      name: https
    - port: 80
      targetPort: 3000
      protocol: TCP
      name: http
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: <DEPLOYMENT_NAME>
  labels:
    app: <APP>
spec:
  replicas: 1
  selector:
    matchLabels:
      app: <APP>
  template:
    metadata:
      labels:
        app: <APP>
    spec:
      containers:
        - name: <CONTAINER_NAME>
          image: DOCKER_IMAGE_NAME
          imagePullPolicy: Always
          env:
            - name: VERSION_INFO
              value: "1.0"
            - name: BUILD_DATE
              value: "1.0"
          ports:
            - containerPort: 3000
I successfully hit the redirect...but the browser does not actually redirect. How do I get it to redirect from http to https?
Relatedly, from googling around I keep seeing that people are using an Ingress AND a Load Balancer - why would I need both?
When you create a LoadBalancer type Service in EKS, it will create either a Classic or a Network Load Balancer. Neither of these supports HTTP to HTTPS redirection. You need an Application Load Balancer (ALB), which does support HTTP to HTTPS redirection. In EKS, to use an ALB you need the AWS ALB Ingress Controller. Once you have the ingress controller set up, you can use annotations to redirect HTTP to HTTPS:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: default
  name: ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-west-2:xxxx:certificate/xxxxxx
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
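A sketch of the matching spec (the backend service name and port are placeholders, not taken from the question); with the ALB ingress controller the ssl-redirect action defined above is referenced as the first path of the rule:
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: ssl-redirect
              servicePort: use-annotation
          - path: /*
            backend:
              serviceName: my-backend-service  # placeholder backend
              servicePort: 80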