I am creating a proxy server with Node.js and want to proxy requests coming into one pod to other pods in a single-node Kubernetes cluster. This is my Node.js code below:
const express = require("express");
const httpProxy = require("express-http-proxy");
const app = express();
const serviceone = httpProxy("serviceone:3000"); // I am using service names here but its not working
// Authentication
app.use((req, res, next) => {
  // TODO: my authentication logic
  console.log("Always.....");
  next();
});
app.get("/", (req, res, next) => {
  res.json({ message: "Api Gateway Working" });
});
// Proxy request
app.get("/:data/api", (req, res, next) => {
  console.log("Request Received");
  serviceone(req, res, next);
});
app.listen(5000, () => {
  console.log("Api Gateway Running");
});
And these are my services.yml files
apiVersion: apps/v1
kind: Deployment
metadata:
  name: serviceone
  labels:
    app: serviceone
spec:
  replicas: 1
  selector:
    matchLabels:
      app: serviceone
  template:
    metadata:
      labels:
        app: serviceone
    spec:
      containers:
        - name: serviceone
          image: swa/serviceone
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: serviceone
spec:
  selector:
    app: serviceone
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000
      nodePort: 31000
  type: LoadBalancer
What name should I use in http proxy so that it can proxy requests?
I have tried serviceone:3000, http://serviceone:3000, and http://localhost:3000 (which won't work for different pods). Any help would be really appreciated.
Edit - Node.js apigateway pod:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apigateway
  labels:
    app: apigateway
spec:
  replicas: 1
  selector:
    matchLabels:
      app: apigateway
  template:
    metadata:
      labels:
        app: apigateway
    spec:
      containers:
        - name: apigateway
          image: swa/apigateway3
          ports:
            - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: apigateway
spec:
  selector:
    app: apigateway
  ports:
    - protocol: TCP
      port: 5000
      targetPort: 5000
      nodePort: 31005
  type: LoadBalancer
In my Node.js application I changed the URL to
const serviceone = httpProxy('serviceone.default.svc.cluster.local:3000');
by following this link https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#services
but still no luck
I am getting the error
Error: getaddrinfo EAI_AGAIN servicethree.sock-shop.svc.cluster.local
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:66:26)
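For reference, a minimal sketch of what the proxy target could look like, assuming the apigateway pod and the serviceone Service are in the same namespace and cluster DNS is healthy; express-http-proxy accepts a host string, and including the scheme explicitly avoids ambiguity:
const express = require("express");
const httpProxy = require("express-http-proxy");
// Hedged sketch: "serviceone" is the Service name from the manifest above;
// the fully qualified form serviceone.default.svc.cluster.local also works
// when the Service lives in the default namespace.
const serviceone = httpProxy("http://serviceone:3000");
const app = express();
app.get("/:data/api", (req, res, next) => serviceone(req, res, next));
app.listen(5000);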
Related
I am trying to create a MERN deployment on k8s, but I have no idea how to do this. I created a MongoDB pod and connected it to a Node.js pod, but when I use the Node.js pod's IP in Postman I am not able to make API calls. This is my backend YAML.
So how can I connect them to each other? I tried a lot of options on the internet but did not find the correct solution for this.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: todo-server-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: todo-server-app
  template:
    metadata:
      labels:
        app: todo-server-app
    spec:
      containers:
        - image: summer07/backend:2.0
          name: container1
          ports:
            - containerPort: 5000
          imagePullPolicy: IfNotPresent
---
apiVersion: v1
kind: Service
metadata:
  name: todo-server-app
  labels:
    app: todo-server-app
spec:
  selector:
    app: todo-server-app
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 5000
      targetPort: 5000
      nodePort: 30020
And this is my MongoDB deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo
spec:
  selector:
    matchLabels:
      app: mongo
  replicas: 1
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
        - image: mongo:4.0.9-xenial
          name: container1
          ports:
            - containerPort: 27017
          command:
            - mongod
            - "--bind_ip"
            - "0.0.0.0"
          imagePullPolicy: Always
          volumeMounts:
            - mountPath: /data/db
              name: todo-mongo-vol
      volumes:
        - name: todo-mongo-vol
          persistentVolumeClaim:
            claimName: todo-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    app: mongo
spec:
  type: LoadBalancer
  selector:
    app: mongo
  ports:
    - protocol: TCP
      port: 27017
      nodePort: 30017
      targetPort: 27017
I see Ingress mentioned; can anyone tell me how to write an Ingress for the following app (a sketch is included after the pod log below):
app.use(express.json());
app.use(cors());
/* WELCOME */
app.get("/", (req, res) => {
  res.send("WELCOME TO MY TODO APP BACKEND");
});
app.listen(PORT, () => {
  console.log(`Server listening on 5000`);
});
mongoose
  .connect("mongodb://192.168.59.114:30017", {
    useNewUrlParser: true,
    useUnifiedTopology: true,
  })
  .then(() => {
    console.log("MongoDB connected successfully ggggg");
  })
  .catch((err) => {
    console.log(err);
    console.log("Unable to connect !!!!!!!!!!! nhi hu rha");
  });
I tried this, and I also tried with the mongodb Service name and port 27017 (192.168.59.114 is my minikube IP).
They connect, but when I open the Node.js service in the browser it shows "site can't be reached", and Postman shows an error.
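For reference, a minimal sketch of the connection string using the in-cluster Service name rather than the minikube IP; it assumes the backend pod and the mongo Service are in the same namespace, and the database name "todos" is a placeholder:
// Hedged sketch: inside the cluster, the Service DNS name "mongo" on the
// Service port 27017 is the usual target; the minikube IP and NodePort are
// only needed from outside the cluster.
mongoose
  .connect("mongodb://mongo:27017/todos", {
    useNewUrlParser: true,
    useUnifiedTopology: true,
  })
  .then(() => console.log("MongoDB connected via Service DNS"))
  .catch((err) => console.log(err));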
This is my MongoDB pod log:
2023-02-06T10:43:52.675+0000 I NETWORK [listener] connection accepted from 10.244.0.1:19311 #2 (2 connections now open)
2023-02-06T10:43:52.682+0000 I NETWORK [conn2] received client metadata from 10.244.0.1:19311 conn2: { driver: { name: "nodejs|Mongoose", version: "4.13.0" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.10.57" }, platform: "Node.js v19.6.0, LE (unified)", version: "4.13.0|6.9.0" }
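A minimal sketch of an NGINX Ingress for the todo-server-app Service defined above; it assumes an NGINX ingress controller is installed in the cluster, and the host todo.local is a placeholder:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: todo-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: todo.local # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: todo-server-app
                port:
                  number: 5000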
I'm trying to run Socket.IO with multiple nodes using the Socket.IO MongoDB adapter, but I'm receiving 400 Bad Request when I deploy on my Kubernetes cluster. Both my API and frontend are on the same domain. The service is working fine in Firefox but it is failing in Google Chrome and Edge.
Browser Error Image
Server
const server = app.listen(port, () => {
  console.log(`Server running on port ${port}`);
});
const io = require("socket.io")(server, {
  path: "/appnotification/socket.io",
  cors: {
    origin: ["http://localhost:3000", process.env.SETTYL_LIVE],
    methods: ["GET", "POST"],
    transports: ["websocket", "polling"],
    credentials: true,
  },
});
mongoConnect(async () => {
  const client = await getDbClient();
  const mongoCollection = client
    .db("notificationengine_prod")
    .collection("socket_events");
  await mongoCollection.createIndex(
    { createdAt: 1 },
    { expireAfterSeconds: 3600, background: true }
  );
  io.adapter(
    createAdapter(mongoCollection, {
      addCreatedAtField: true,
    })
  );
  io.on("connect", (socket) => sockets(socket, io));
  processConsumer(io);
});
Client
const socket = io.connect(process.env.SOCKET_SERVER_URL, {
  path: "/appnotification/socket.io",
  withCredentials: true,
});
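For reference, a minimal client-side sketch of one common mitigation when the long-polling handshake is load-balanced across replicas: forcing the websocket transport. The URL and path are taken from the code above; whether this resolves the 400 depends on the ingress setup:
// Hedged sketch: forcing the websocket transport avoids the HTTP
// long-polling handshake being split across different replicas.
const socket = io(process.env.SOCKET_SERVER_URL, {
  path: "/appnotification/socket.io",
  withCredentials: true,
  transports: ["websocket"],
});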
Kubernetes Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
    nginx.ingress.kubernetes.io/session-cookie-path: /?(.*)
    nginx.ingress.kubernetes.io/session-cookie-samesite: None
    nginx.ingress.kubernetes.io/session-cookie-secure: "true"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/websocket-services: notificationhandlerconsumer
spec:
  rules:
    - host: sample.golive.com
      http:
        paths:
          - backend:
              service:
                name: notificationhandlerconsumer
                port:
                  number: 8000
            path: /appnotification/?(.*)
            pathType: Prefix
Socket Deployment YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "notificationhandlerconsumer"
spec:
  replicas: 3
  selector:
    matchLabels:
      app: "notificationhandlerconsumer"
  template:
    metadata:
      labels:
        app: "notificationhandlerconsumer"
    spec:
      containers:
        - name: "notificationhandlerconsumer"
          image: "sample.azurecr.io/notificationhandlerconsumer"
          ports:
            - containerPort: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: "notificationhandlerconsumer"
  labels:
    app: "notificationhandlerconsumer"
spec:
  type: ClusterIP
  ports:
    - port: 8000
      targetPort: 8000
      protocol: TCP
      name: http
  selector:
    app: "notificationhandlerconsumer"
I've created 2 services using node-express:
webbff-service: all incoming HTTP requests to the Express server pass through this service
identity-service: handles user auth and other user details
The webbff service passes the request on to the identity service if the URL matches a respective pattern.
webbff-service: server.js file:
const dotEnv = require('dotenv');
dotEnv.config();
dotEnv.config({ path: `.env.${process.env.NODE_ENV}` });
const express = require('express');
const app = express();
app.use('/webbff/test', (req, res) => {
  res.json({ message: 'web bff route working OK!' });
});
const axios = require('axios');
app.use('/webbff/api/identity', (req, res) => {
  console.log(req.url);
  const url = `http://ingress-nginx-controller.ingress-nginx.svc.cluster.local:8000/api/identity${req.url}`;
  axios
    .get(url)
    .then((response) => {
      res.json(response.data);
    })
    .catch((error) => {
      console.log(error);
      res.status(500).json({ error });
    });
});
// Error handler middleware
const errorMiddleware = require('./error/error-middleware');
app.use(errorMiddleware);
const PORT = process.env.SERVER_PORT || 8000;
const startServer = async () => {
  app
    .listen(PORT, () => {
      console.log(`Server started on port ${PORT}`);
    })
    .on('error', (err) => {
      console.error(err);
      process.exit();
    });
};
startServer();
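For comparison, a minimal sketch of calling the identity service directly by its in-cluster Service name instead of looping back through the ingress controller; it reuses app and axios from the code above, and identity-service-svc (port 8000) is the Service defined in the manifests further below:
app.use('/webbff/api/identity', (req, res) => {
  // Hedged sketch: the ClusterIP Service "identity-service-svc" resolves
  // inside the cluster, so the gateway can reach the identity service
  // without going through the NGINX ingress controller.
  const url = `http://identity-service-svc:8000/api/identity${req.url}`;
  axios
    .get(url)
    .then((response) => res.json(response.data))
    .catch((error) => res.status(500).json({ error: error.message }));
});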
Deployment file of webbff-service:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webbff-service-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webbff-service
  template:
    metadata:
      labels:
        app: webbff-service
    spec:
      containers:
        - name: webbff-service
          image: ajhavery/webbff-service
---
apiVersion: v1
kind: Service
metadata:
  name: webbff-service-svc
spec:
  selector:
    app: webbff-service
  ports:
    - name: webbff-service
      protocol: TCP
      port: 8000
      targetPort: 8000
Identity service is a simple Node/Express app which accepts all URLs of the format /api/identity.
It has a test route, /api/identity/test, to test the routes.
Deployment file of Identity service:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: identity-service-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: identity-service
  template:
    metadata:
      labels:
        app: identity-service
    spec:
      containers:
        - name: identity-service
          image: ajhavery/identity-service
---
apiVersion: v1
kind: Service
metadata:
  name: identity-service-svc
spec:
  selector:
    app: identity-service
  ports:
    - name: identity-service
      protocol: TCP
      port: 8000
      targetPort: 8000
Both these services are deployed on meraretail.dev - a local domain which I set up by modifying /etc/hosts:
127.0.0.1 meraretail.dev
Skaffold.yaml file used for deployment on local kubernetes cluster with 1 node:
apiVersion: skaffold/v2beta28
kind: Config
deploy:
  kubectl:
    manifests:
      - ./kubernetes/*
build:
  local:
    push: false # don't push images to dockerhub
  artifacts:
    - image: ajhavery/webbff-service
      context: webbff-service
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - dest: .
            src: 'src/**/*.js'
    - image: ajhavery/identity-service
      context: identity-service
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - dest: .
            src: 'src/**/*.js'
Routing between the services is handled using the ingress-nginx controller:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-public-svc
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
  rules:
    - host: meraretail.dev
      http:
        paths:
          - path: '/webbff/?(.*)'
            pathType: Prefix
            backend:
              service:
                name: webbff-service-svc
                port:
                  number: 8000
          - path: '/api/identity/?(.*)'
            pathType: Prefix
            backend:
              service:
                name: identity-service-svc
                port:
                  number: 8000
Though, when I try to access the route https://meraretail.dev/webbff/api/identity/test
in Postman, I receive the error: Bad Gateway.
All I ever get are CORS errors, both on localhost and in the cloud. It works if I manually type in localhost, or if I manually get the service's external IP and put that into the k8s deployment file before I deploy it, but automating this is impossible if I have to launch the services, get the external IP, and then put it into the configs before each launch.
API service
apiVersion: v1
kind: Service
metadata:
  labels:
    app: api
  name: api-service
spec:
  ports:
    - port: 8080 # expose the service on port 8080
      protocol: TCP
      targetPort: 8080 # our nodejs app listens on port 8080
  selector:
    app: api # select this application to service
  type: LoadBalancer
status:
  loadBalancer: {}
Client Service
apiVersion: v1
kind: Service
metadata:
  name: client-service
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  selector:
    app: client
  type: LoadBalancer
status:
  loadBalancer: {}
API deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: api
  name: api-deployment
spec:
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - image: mjwrazor/docker-js-stack-api:latest
          name: api-container
          imagePullPolicy: IfNotPresent
          resources: {}
          stdin: true
          tty: true
          workingDir: /app
          ports:
            - containerPort: 8080
          args:
            - npm
            - run
            - start
          envFrom:
            - configMapRef:
                name: server-side-configs
      restartPolicy: Always
      volumes: null
      serviceAccountName: ""
status: {}
Client Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: client
  name: client-deployment
spec:
  selector:
    matchLabels:
      app: client
  template:
    metadata:
      labels:
        app: client
    spec:
      restartPolicy: Always
      serviceAccountName: ""
      containers:
        - image: mjwrazor/docker-js-stack-client:latest
          name: client-container
          imagePullPolicy: IfNotPresent
          resources: {}
          ports:
            - containerPort: 80
status: {}
I tried adding an Ingress:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, DELETE, OPTIONS"
    nginx.ingress.kubernetes.io/cors-allow-origin: http://client-service.default
    nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
spec:
  rules:
    - host: api-development.default
      http:
        paths:
          - backend:
              serviceName: api-service
              servicePort: 8080
But that didn't help either. Here is the server.js:
const express = require("express");
const bodyParser = require("body-parser");
const cors = require("cors");
const app = express();
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: true }));
app.use(cors());
app.get("/", (req, res) => {
res.json({ message: "Welcome" });
});
require("./app/routes/customer.routes.js")(app);
// set port, listen for requests
const PORT = process.env.PORT || 8080;
app.listen(PORT, () => {
console.log(`Server is running on port ${PORT}.`);
});
But like I said, I am trying to get this to resolve via the hostnames of the services and not have to use the external IP. Is this even possible, or did I misunderstand something along the way?
The client sends an axios request. I cannot use environment variables, since you can't inject environment variables from k8s after the project has been built through webpack and Docker into an image. I did find a really hacky way of creating a file with window global variables and then having k8s overwrite that file with new window variables. But again, I first have to get the external IP of the API to do that.
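For reference, a minimal client-side sketch of one way to avoid baking an IP into the build: expose the client and the API under the same Ingress host and let the browser call a relative path. The route /api/customers is a hypothetical example:
const axios = require("axios");
// Hedged sketch: the browser cannot resolve in-cluster names such as
// "api-service.default", so the client calls a relative path and the
// Ingress routes it to api-service. "/api/customers" is a placeholder.
axios
  .get("/api/customers")
  .then((res) => console.log(res.data))
  .catch((err) => console.error(err));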
As we discussed in the comments, you need to get a real domain name in order to make it work, as automatic DNS resolution in your case basically requires it.
I have been working on a simple Node.js application that SETs and GETs a key from etcd using Istio to connect the two services together. I have tried a few variations but keep seeing the same error returned.
nodeAppTesting failed(etcd-operator) ->{"errors":[{"server":"http://etcd-operator:2379","httperror":null,"httpstatus":503,"httpbody":"upstream connect error or disconnect/reset before headers","response":{"statusCode":503,"body":"upstream connect error or disconnect/reset before headers","headers":{"content-length":"57","content-type":"text/plain","date":"Thu, 08 Jun 2017 17:17:04 GMT","server":"envoy","x-envoy-upstream-service-time":"5"},"request":{"uri":{"protocol":"http:","slashes":true,"auth":null,"host":"etcd-operator:2379","port":"2379","hostname":"etcd-operator","hash":null,"search":null,"query":null,"pathname":"/v2/keys/testKey","path":"/v2/keys/testKey","href":"http://etcd-operator:2379/v2/keys/testKey"},"method":"GET","headers":{"accept":"application/json"}}},"timestamp":"2017-06-08T17:17:04.544Z"}],"retries":0}
Looking at the proxy logs, I can see that client and server proxies are involved in the communication (and this is verified I think in seeing envoy in the server header).
Attaching the Node.js app and the deployment.yaml.
server.js
var http = require('http');
var Etcd = require('node-etcd');
var fs = require('fs');
var httpClient = require('request');

var handleRequest = function(request, response) {
  var scheme = "http";
  var ipAddress = "etcd-operator";
  var port = "2379";
  var connectionAddress = scheme + "://" + ipAddress + ":" + port;
  console.log('Received request for URL: ' + request.url + " connecting to " + connectionAddress);
  var etcd = new Etcd([connectionAddress] /*, options */);
  etcd.set("testKey", "foo");
  etcd.get("testKey", function(err, res) {
    if (!err) {
      response.writeHead(200);
      response.write("nodeAppTesting(" + ipAddress + ") ->" + JSON.stringify(res));
      response.end();
    } else {
      response.writeHead(500);
      response.write("nodeAppTesting failed(" + ipAddress + ") ->" + JSON.stringify(err));
      console.log("Encountered error during runtime", JSON.stringify(err));
      response.end();
    }
  });
};

var www = http.createServer(handleRequest);
www.listen(8080);
console.log("App up and running on port 8080");
deployment.yaml
apiVersion: v1
kind: Service
metadata:
  name: etcd-node
  labels:
    app: etcd-node
spec:
  ports:
    - port: 8080
      name: http
  selector:
    app: etcd-node
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: etcd-node-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: etcd-node
    spec:
      containers:
        - name: etcd-node
          image: todkap/etcd-node:v1
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
---
##################################################################################################
# etcd service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: etcd-operator
  labels:
    app: etcd-operator
spec:
  ports:
    - port: 2379
      targetPort: 2379
      name: http
  selector:
    app: etcd-operator
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: etcd-operator
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: etcd-operator
        app: etcd-operator
        version: v1
    spec:
      containers:
        - name: etcd-operator
          image: quay.io/coreos/etcd-operator:v0.2.6
          imagePullPolicy: IfNotPresent
          env:
            - name: MY_POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          ports:
            - containerPort: 2379
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gateway2
  annotations:
    kubernetes.io/ingress.class: "istio"
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: etcd-node
              servicePort: 8080
    - http:
        paths:
          - path: /
            backend:
              serviceName: etcd-operator
              servicePort: 2379
          - path: /v2/keys/*
            backend:
              serviceName: etcd-operator
              servicePort: 2379
---
I was able to resolve the issues reported here. I will be publishing a recipe demonstrating the flow sometime this week. For now, we can consider this closed. In the future, I will move to the forums or post on issues. Be on the lookout for the article (I will update this post with the link when available).
Thanks for the help, guidance and suggestions.
The main issues were consistency in referencing the etcd service, consistency in referencing my node app as a Deployment, Service and Ingress, and finally exposing the NodePort.
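For reference, a minimal sketch of what exposing the node app via NodePort might look like, based on the etcd-node Service above; the nodePort value 30080 is a placeholder:
apiVersion: v1
kind: Service
metadata:
  name: etcd-node
  labels:
    app: etcd-node
spec:
  type: NodePort
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 30080 # placeholder value in the default NodePort range
      name: http
  selector:
    app: etcd-node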
Update: published an article demonstrating the working flow.
This is probably a better question for https://groups.google.com/forum/#!forum/istio-users or https://github.com/istio/issues/issues,
but it looks like your application isn't up. Can you check
kubectl get pods and see that nothing is still pending?