Unable to connect to MongoDB Atlas from a Kubernetes pod. I've tried almost everything available on the internet, but no luck.
YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: weare-auth-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: weare-auth
  template:
    metadata:
      labels:
        app: weare-auth
    spec:
      containers:
        - name: weare-auth
          image: <docker-username>/weare-auth
          env:
            - name: PORT
              value: "3001"
      restartPolicy: Always
      dnsPolicy: Default
---
apiVersion: v1
kind: Service
metadata:
  name: weare-auth-srv
spec:
  selector:
    app: weare-auth
  ports:
    - name: weare-auth
      protocol: TCP
      port: 3001
      targetPort: 3001
And here's my Express code:
import mongoose from "mongoose";
import { app } from "./app";

const start = async () => {
  try {
    await mongoose.connect(
      "mongodb://<username>:<password>@test-clus.wpg0x.mongodb.net/auth",
      {
        useNewUrlParser: true,
        useUnifiedTopology: true,
        useCreateIndex: true,
      }
    );
    console.log("Connected to mongodb");

    const PORT = process.env.PORT || 3001;
    app.listen(PORT, () => {
      console.log(`Listening on port ${PORT}`);
    });
  } catch (err) {
    console.error(err);
  }
};

start();
Here are the logs
Find the screenshot of error logs here
I've masked the credentials. Also, I am able to connect to the MongoDB Atlas cluster via the shell and Robo3T. I also tried setting the dnsPolicy as mentioned in the post, but no luck.
Any idea what am I missing here?
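For comparison, a minimal sketch of a typical Atlas connection, assuming the cluster is addressed via its SRV hostname and that credentials come from environment variables (the variable names here are illustrative, not from the original code):

import mongoose from "mongoose";

// Illustrative env var names; supply them via the Deployment's env block or a Secret.
const user = encodeURIComponent(process.env.MONGO_USER);
const pass = encodeURIComponent(process.env.MONGO_PASS);

// Atlas hostnames like test-clus.wpg0x.mongodb.net are usually SRV records,
// so the mongodb+srv:// scheme is the common choice; credentials and host
// are separated by "@".
const uri = `mongodb+srv://${user}:${pass}@test-clus.wpg0x.mongodb.net/auth`;

mongoose
  .connect(uri, { useNewUrlParser: true, useUnifiedTopology: true, useCreateIndex: true })
  .then(() => console.log("Connected to mongodb"))
  .catch((err) => console.error(err));

Note that mongodb+srv depends on SRV DNS lookups succeeding from inside the pod, which is why the pod's dnsPolicy can matter here.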
Related
I am trying to create a MERN deployment on k8s, but I have no idea how to do this. I created a MongoDB pod and connected it to a Node.js pod, but when I use the Node.js IP in Postman I am not able to make API calls. This is my backend YAML.
I tried a lot of options on the internet but did not find the correct solution, so how can I connect them?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: todo-server-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: todo-server-app
  template:
    metadata:
      labels:
        app: todo-server-app
    spec:
      containers:
        - image: summer07/backend:2.0
          name: container1
          ports:
            - containerPort: 5000
          imagePullPolicy: IfNotPresent
---
apiVersion: v1
kind: Service
metadata:
  name: todo-server-app
  labels:
    app: todo-server-app
spec:
  selector:
    app: todo-server-app
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 5000
      targetPort: 5000
      nodePort: 30020
And this is my MongoDB pod:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo
spec:
  selector:
    matchLabels:
      app: mongo
  replicas: 1
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
        - image: mongo:4.0.9-xenial
          name: container1
          ports:
            - containerPort: 27017
          command:
            - mongod
            - "--bind_ip"
            - "0.0.0.0"
          imagePullPolicy: Always
          volumeMounts:
            - mountPath: /data/db
              name: todo-mongo-vol
      volumes:
        - name: todo-mongo-vol
          persistentVolumeClaim:
            claimName: todo-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    app: mongo
spec:
  type: LoadBalancer
  selector:
    app: mongo
  ports:
    - protocol: TCP
      port: 27017
      nodePort: 30017
      targetPort: 27017
I see Ingress mentioned a lot; can anyone tell me how to write an Ingress for this?
// Requires, app, and PORT are implied by the snippet:
const express = require("express");
const cors = require("cors");
const mongoose = require("mongoose");

const app = express();
const PORT = 5000;

app.use(express.json());
app.use(cors());

/* WELCOME */
app.get("/", (req, res) => {
  res.send("WELCOME TO MY TODO APP BACKEND");
});

app.listen(PORT, () => {
  console.log(`Server listening on 5000`);
});

mongoose
  .connect("mongodb://192.168.59.114:30017", {
    useNewUrlParser: true,
    useUnifiedTopology: true,
  })
  .then(() => {
    console.log("MongoDB connected successfully");
  })
  .catch((err) => {
    console.log(err);
    console.log("Unable to connect!");
  });
I tried this, and also tried with the mongodb Service name and port 27017; 192.168.59.114 is my minikube IP. It connects, but when I open the Node.js Service in the browser it shows "site can't be reached", and Postman shows an error.
This is my MongoDB pod log:
2023-02-06T10:43:52.675+0000 I NETWORK [listener] connection accepted from 10.244.0.1:19311 #2 (2 connections now open)
2023-02-06T10:43:52.682+0000 I NETWORK [conn2] received client metadata from 10.244.0.1:19311 conn2: { driver: { name: "nodejs|Mongoose", version: "4.13.0" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.10.57" }, platform: "Node.js v19.6.0, LE (unified)", version: "4.13.0|6.9.0" }
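Note that inside the cluster the backend can normally reach MongoDB through the Service's DNS name instead of the minikube IP and NodePort; a minimal sketch, assuming the mongo Service above (the database name todos is illustrative):

const mongoose = require("mongoose");

// "mongo" is the Service name; cluster DNS resolves it from any pod in the
// same namespace. Use the Service port (27017), not the NodePort (30017),
// which only applies when connecting from outside the cluster.
mongoose
  .connect("mongodb://mongo:27017/todos", {
    useNewUrlParser: true,
    useUnifiedTopology: true,
  })
  .then(() => console.log("MongoDB connected via in-cluster DNS"))
  .catch((err) => console.log(err));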
All my pods are running. When I test the server, client, and database using MongoDB Atlas the app works, but when I change to the MongoDB pod I get a 400 (bad server request).
MongoDB itself works fine; I tested it with the shell from inside the pod:
kubectl exec -it <mongodb-pod> -- /bin/sh
mongo -u user -p password
show dbs works:
show dbs
admin 0.000GB
config 0.000GB
local 0.000GB
test 0.000GB
db.test.find()
{ "_id" : ObjectId("62fb942345a2c5286fc7455f"), "todo" : "first element in this database" }
// controller (server.js) before I pushed it to Docker Hub
const Todo = require('./models')

const add = async (req, res) => {
  try {
    const { todo } = req.body
    // Mongoose models have no insertOne(); create() is the equivalent call.
    const new_todo = await Todo.create({ todo: todo })
    // status() alone never sends a response, so the request would hang;
    // finish it explicitly.
    return res.status(200).json(new_todo)
  } catch (err) {
    return res.status(400).send()
  }
}

const all = async (req, res) => {
  try {
    const todo = await Todo.find()
    return res.status(200).json({ todo })
  } catch (err) {
    return res.status(404).send()
  }
}

module.exports = {
  add,
  all
}
//****************************************************
mongodb service.yaml
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  selector:
    app: mongodb
  ports:
    - protocol: TCP
      port: 27017
      targetPort: 27017
/*************************************
One thing I'm not sure about is the URL that Express should use to connect to MongoDB through the ClusterIP Service:
"mongodb://mongodb-service:27017"
/*************************************
require('dotenv').config()
const express = require('express')
const cors = require('cors')
const mongoose = require('mongoose')

const app = express()
const port = process.env.PORT
// The Atlas URL below works just fine for testing, but when I replace it
// with "mongodb://mongodb-service:27017" it doesn't work.
const url = process.env.URL || 'mongodb+srv://najib:123@cluster0.z8y8j9v.mongodb.net/microTest?retryWrites=true&w=majority'
const myRoutes = require('./routes')

app.use(cors())
app.use(express.json())

mongoose
  .connect(url, {
    useNewUrlParser: true,
    useUnifiedTopology: true
  })
  .then(() => {
    console.log("MongoDB successfully connected")
    app.use('/', myRoutes)
  })
  .catch(err => console.log(err))

app.listen(port, () => {
  console.log(`Server is running at port ${port}`)
})
MongoDB deployment:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongodb-pv
  labels:
    app: mongo-db
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /data/pv/wp-html
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodb-pvc
  labels:
    app: mongo-db
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb
  labels:
    app: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongodb
          image: mongo
          ports:
            - containerPort: 27017
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              valueFrom:
                secretKeyRef:
                  name: creds
                  key: username
            - name: MONGO_INITDB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: creds
                  key: password
          volumeMounts:
            - name: mongodb-storage
              mountPath: /data/db
      volumes:
        - name: mongodb-storage
          persistentVolumeClaim:
            claimName: mongodb-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  type: NodePort
  selector:
    app: mongodb
  ports:
    - protocol: TCP
      port: 27017
      targetPort: 27017
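One detail worth noting: because the Deployment above sets MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD, the bare URL mongodb://mongodb-service:27017 will reach the server but cannot authenticate. A hedged sketch of a URL carrying the root credentials (the env var names are illustrative; the microTest database name is carried over from the Atlas URL above):

// Hypothetical env var names; ideally fed from the same `creds` Secret.
const user = encodeURIComponent(process.env.MONGO_USER)
const pass = encodeURIComponent(process.env.MONGO_PASS)

// Root users created via MONGO_INITDB_ROOT_* live in the admin database,
// hence authSource=admin.
const url = `mongodb://${user}:${pass}@mongodb-service:27017/microTest?authSource=admin`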
I'm trying to run Socket.IO with multiple nodes using the Socket.IO MongoDB adapter, but I'm receiving 400 Bad Request when deploying on my Kubernetes cluster. Both my API and frontend are on the same domain. The service works fine in Firefox but fails in Google Chrome and Edge.
Browser Error Image
Server
// app and port, plus the helpers used below (mongoConnect, getDbClient,
// sockets, processConsumer), are defined elsewhere in the project;
// createAdapter comes from @socket.io/mongo-adapter.
const server = app.listen(port, () => {
  console.log(`Server running on port ${port}`);
});

const io = require("socket.io")(server, {
  path: "/appnotification/socket.io",
  cors: {
    origin: ["http://localhost:3000", process.env.SETTYL_LIVE],
    methods: ["GET", "POST"],
    transports: ["websocket", "polling"],
    credentials: true,
  },
});

mongoConnect(async () => {
  const client = await getDbClient();
  const mongoCollection = client
    .db("notificationengine_prod")
    .collection("socket_events");
  // TTL index: adapter events expire an hour after creation.
  await mongoCollection.createIndex(
    { createdAt: 1 },
    { expireAfterSeconds: 3600, background: true }
  );
  io.adapter(
    createAdapter(mongoCollection, {
      addCreatedAtField: true,
    })
  );
  io.on("connect", (socket) => sockets(socket, io));
  processConsumer(io);
});
Client
const socket = io.connect(process.env.SOCKET_SERVER_URL, {
  path: "/appnotification/socket.io",
  withCredentials: true,
});
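A mitigation often tried when long-polling returns 400 across multiple replicas (successive polling requests landing on different pods without working sticky sessions) is to force the WebSocket transport on the client; a sketch, not a confirmed fix for this setup:

const socket = io.connect(process.env.SOCKET_SERVER_URL, {
  path: "/appnotification/socket.io",
  withCredentials: true,
  // Skip the HTTP long-polling handshake entirely; a single WebSocket
  // connection stays pinned to one pod for its lifetime.
  transports: ["websocket"],
});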
Kubernetes Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
    nginx.ingress.kubernetes.io/session-cookie-path: /?(.*)
    nginx.ingress.kubernetes.io/session-cookie-samesite: None
    nginx.ingress.kubernetes.io/session-cookie-secure: "true"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/websocket-services: notificationhandlerconsumer
spec:
  rules:
    - host: sample.golive.com
      http:
        paths:
          - backend:
              service:
                name: notificationhandlerconsumer
                port:
                  number: 8000
            path: /appnotification/?(.*)
            pathType: Prefix
Socket Deployment YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "notificationhandlerconsumer"
spec:
  replicas: 3
  selector:
    matchLabels:
      app: "notificationhandlerconsumer"
  template:
    metadata:
      labels:
        app: "notificationhandlerconsumer"
    spec:
      containers:
        - name: "notificationhandlerconsumer"
          image: "sample.azurecr.io/notificationhandlerconsumer"
          ports:
            - containerPort: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: "notificationhandlerconsumer"
  labels:
    app: "notificationhandlerconsumer"
spec:
  type: ClusterIP
  ports:
    - port: 8000
      targetPort: 8000
      protocol: TCP
      name: http
  selector:
    app: "notificationhandlerconsumer"
I've created 2 services using Node/Express:
webbff-service: all incoming HTTP requests to the Express server pass through this service
identity-service: handles user auth and other user details
The webbff service will pass the request on to the identity service if the URL matches a respective pattern.
webbff-service: server.js file:
const dotEnv = require('dotenv');
dotEnv.config();
dotEnv.config({ path: `.env.${process.env.NODE_ENV}` });

const express = require('express');
const app = express();

app.use('/webbff/test', (req, res) => {
  res.json({ message: 'web bff route working OK!' });
});

const axios = require('axios');
app.use('/webbff/api/identity', (req, res) => {
  console.log(req.url);
  const url = `http://ingress-nginx-controller.ingress-nginx.svc.cluster.local:8000/api/identity${req.url}`;
  axios
    .get(url)
    .then((response) => {
      res.json(response.data);
    })
    .catch((error) => {
      console.log(error);
      res.status(500).json({ error });
    });
});

// Error handler middleware
const errorMiddleware = require('./error/error-middleware');
app.use(errorMiddleware);

const PORT = process.env.SERVER_PORT || 8000;
const startServer = async () => {
  app
    .listen(PORT, () => {
      console.log(`Server started on port ${PORT}`);
    })
    .on('error', (err) => {
      console.error(err);
      process.exit();
    });
};

startServer();
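As a side note, app.use matches every HTTP method, but the proxy above always issues a GET. A hedged sketch of forwarding the original method and body too, keeping the same upstream URL:

app.use('/webbff/api/identity', (req, res) => {
  axios({
    method: req.method, // preserve GET/POST/PUT/DELETE rather than forcing GET
    url: `http://ingress-nginx-controller.ingress-nginx.svc.cluster.local:8000/api/identity${req.url}`,
    data: req.body, // assumes express.json() is applied before this middleware
  })
    .then((response) => res.status(response.status).json(response.data))
    .catch((error) => {
      console.log(error);
      res.status(500).json({ error: error.message });
    });
});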
Deployment file of webbff-service:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webbff-service-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webbff-service
  template:
    metadata:
      labels:
        app: webbff-service
    spec:
      containers:
        - name: webbff-service
          image: ajhavery/webbff-service
---
apiVersion: v1
kind: Service
metadata:
  name: webbff-service-svc
spec:
  selector:
    app: webbff-service
  ports:
    - name: webbff-service
      protocol: TCP
      port: 8000
      targetPort: 8000
The identity service is a simple Node/Express app which accepts all URLs of the form /api/identity/...
It has a test route, /api/identity/test, to test the routing.
Deployment file of Identity service:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: identity-service-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: identity-service
  template:
    metadata:
      labels:
        app: identity-service
    spec:
      containers:
        - name: identity-service
          image: ajhavery/identity-service
---
apiVersion: v1
kind: Service
metadata:
  name: identity-service-svc
spec:
  selector:
    app: identity-service
  ports:
    - name: identity-service
      protocol: TCP
      port: 8000
      targetPort: 8000
Both these services are deployed on meraretail.dev, a local domain I set up by modifying /etc/hosts:
127.0.0.1 meraretail.dev
skaffold.yaml file used for deployment on a local single-node Kubernetes cluster:
apiVersion: skaffold/v2beta28
kind: Config
deploy:
  kubectl:
    manifests:
      - ./kubernetes/*
build:
  local:
    push: false # don't push images to Docker Hub
  artifacts:
    - image: ajhavery/webbff-service
      context: webbff-service
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - dest: .
            src: 'src/**/*.js'
    - image: ajhavery/identity-service
      context: identity-service
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - dest: .
            src: 'src/**/*.js'
Routing between the services is handled using ingress-nginx:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-public-svc
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
  rules:
    - host: meraretail.dev
      http:
        paths:
          - path: '/webbff/?(.*)'
            pathType: Prefix
            backend:
              service:
                name: webbff-service-svc
                port:
                  number: 8000
          - path: '/api/identity/?(.*)'
            pathType: Prefix
            backend:
              service:
                name: identity-service-svc
                port:
                  number: 8000
However, when I try to access the route https://meraretail.dev/webbff/api/identity/test in Postman, I receive a 502 Bad Gateway error.
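One hedged guess worth checking: the in-cluster ingress-nginx-controller Service usually listens on ports 80/443 rather than 8000, and the identity service is also reachable directly through its own Service DNS name, bypassing the ingress hop entirely. A sketch of both upstream variants for the proxy above (the helper name is hypothetical):

// Hypothetical helper: build the upstream URL for a proxied request.
const upstreamUrl = (req) => {
  // Variant 1: through the ingress controller on its usual in-cluster port (80):
  // return `http://ingress-nginx-controller.ingress-nginx.svc.cluster.local/api/identity${req.url}`;

  // Variant 2: straight to the ClusterIP Service declared above:
  return `http://identity-service-svc:8000/api/identity${req.url}`;
};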
I'm trying to connect to a socket that is inside a container and deployed on Kubernetes.
Locally everything works fine, but when deployed it throws an error on connect. I tried different options, but with no success.
Client code
const ENDPOINT = "https://traveling.dev/api/chat"; // this will go to the endpoint of the service where the socket is running
const chatSocket = io(ENDPOINT, {
  rejectUnauthorized: false,
  forceNew: true,
  secure: false,
});

chatSocket.on("connect_error", (err) => {
  console.log(err);
  console.log(`connect_error due to ${err.message}`);
});

console.log("CS", chatSocket);
Server code
// Imports implied by the snippet:
const express = require("express");
const http = require("http");
const cors = require("cors");
const { Server } = require("socket.io");

const app = express();
app.set("trust proxy", true);
app.use(cors());

const server = http.createServer(app);
const io = new Server(server, {
  cors: {
    origin: "*",
    methods: ["*"],
    allowedHeaders: ["*"],
  },
});

io.on("connection", (socket) => {
  console.log("Socket successfully connected with id: " + socket.id);
});

const start = async () => {
  server.listen(3000, () => {
    console.log("Started");
  });
};

start();
The code itself is probably not the issue, since it all works fine locally, but I posted it anyway.
What can cause this when containerizing the app and putting it on Kubernetes?
The console log just says "server error":
Error: server error
at Socket.onPacket (socket.js:397)
at XHR.push../node_modules/component-emitter/index.js.Emitter.emit (index.js:145)
at XHR.onPacket (transport.js:107)
at callback (polling.js:98)
at Array.forEach (<anonymous>)
at XHR.onData (polling.js:102)
at Request.push../node_modules/component-emitter/index.js.Emitter.emit (index.js:145)
at Request.onData (polling-xhr.js:232)
at Request.onLoad (polling-xhr.js:283)
at XMLHttpRequest.xhr.onreadystatechange (polling-xhr.js:187)
Does anyone have any suggestions on what may cause this and how to fix it?
Also, any idea on how to get more information about the error would be appreciated.
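To get more information out of the client, Socket.IO logs through the debug package, so verbose output can usually be switched on from the browser console before reloading the page; a sketch:

// Run in the browser console; the setting persists in localStorage.
localStorage.debug = "socket.io-client:*,engine.io-client:*";

// Reload the page, then disable again with:
localStorage.debug = "";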
This is the YAML file that creates the Deployment (and thus the Pod) and the Service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: chat-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: chat
  template:
    metadata:
      labels:
        app: chat
    spec:
      containers:
        - name: chat
          image: us.gcr.io/forward-emitter-321609/chat-service
---
apiVersion: v1
kind: Service
metadata:
  name: chat-srv
spec:
  selector:
    app: chat
  ports:
    - name: chat
      protocol: TCP
      port: 3000
      targetPort: 3000
I'm using a load balancer on GKE with nginx, whose IP address is mapped to traveling.dev.
This is what my ingress routing config looks like:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, OPTIONS"
    nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "Access-Control-Allow-Origin: $http_origin";
spec:
  rules:
    - host: traveling.dev
      http:
        paths:
          - path: /api/chat/?(.*)
            backend:
              serviceName: chat-srv
              servicePort: 3000
          - path: /?(.*)
            backend:
              serviceName: client-srv
              servicePort: 3000
Thanks!
Nginx Ingress supports WebSocket proxying by default, but you need to configure it.
For this you need to add a custom configuration snippet annotation.
You can refer to this already-answered Stack Overflow question:
Nginx ingress controller websocket support
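For illustration, a hedged sketch of what such annotations often look like on the Ingress (the values are examples, not a confirmed fix for this cluster):

metadata:
  annotations:
    # Keep long-lived WebSocket connections open past nginx's default timeout.
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
    # Custom nginx directives injected into the generated location block.
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection "upgrade";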