ingress-nginx: Unable to call one service from another using axios - node.js

I've created two services using Node and Express:
webbff-service: all incoming HTTP requests to the Express server pass through this service
identity-service: handles user auth and other user details
The webbff service forwards a request to the identity service if the URL matches the corresponding pattern.
webbff-service: server.js file:
const dotEnv = require('dotenv');
dotEnv.config();
dotEnv.config({ path: `.env.${process.env.NODE_ENV}` });

const express = require('express');
const app = express();

app.use('/webbff/test', (req, res) => {
  res.json({ message: 'web bff route working OK!' });
});

const axios = require('axios');
app.use('/webbff/api/identity', (req, res) => {
  console.log(req.url);
  const url = `http://ingress-nginx-controller.ingress-nginx.svc.cluster.local:8000/api/identity${req.url}`;
  axios
    .get(url)
    .then((response) => {
      res.json(response.data);
    })
    .catch((error) => {
      console.log(error);
      res.status(500).json({ error });
    });
});

// Error handler middleware
const errorMiddleware = require('./error/error-middleware');
app.use(errorMiddleware);

const PORT = process.env.SERVER_PORT || 8000;
const startServer = async () => {
  app
    .listen(PORT, () => {
      console.log(`Server started on port ${PORT}`);
    })
    .on('error', (err) => {
      console.error(err);
      process.exit();
    });
};
startServer();
Deployment file of webbff-service:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webbff-service-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webbff-service
  template:
    metadata:
      labels:
        app: webbff-service
    spec:
      containers:
        - name: webbff-service
          image: ajhavery/webbff-service
---
apiVersion: v1
kind: Service
metadata:
  name: webbff-service-svc
spec:
  selector:
    app: webbff-service
  ports:
    - name: webbff-service
      protocol: TCP
      port: 8000
      targetPort: 8000
The identity service is a simple Node/Express app which accepts all URLs of the format /api/identity.
It has a test route, /api/identity/test, to verify routing.
Deployment file of Identity service:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: identity-service-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: identity-service
  template:
    metadata:
      labels:
        app: identity-service
    spec:
      containers:
        - name: identity-service
          image: ajhavery/identity-service
---
apiVersion: v1
kind: Service
metadata:
  name: identity-service-svc
spec:
  selector:
    app: identity-service
  ports:
    - name: identity-service
      protocol: TCP
      port: 8000
      targetPort: 8000
Both services are deployed behind meraretail.dev, a local domain I set up by modifying /etc/hosts:
127.0.0.1 meraretail.dev
skaffold.yaml file used for deployment on a local Kubernetes cluster with one node:
apiVersion: skaffold/v2beta28
kind: Config
deploy:
  kubectl:
    manifests:
      - ./kubernetes/*
build:
  local:
    push: false # don't push images to Docker Hub
  artifacts:
    - image: ajhavery/webbff-service
      context: webbff-service
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - dest: .
            src: 'src/**/*.js'
    - image: ajhavery/identity-service
      context: identity-service
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - dest: .
            src: 'src/**/*.js'
Routing between the services is handled by an ingress-nginx Ingress resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-public-svc
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
  rules:
    - host: meraretail.dev
      http:
        paths:
          - path: '/webbff/?(.*)'
            pathType: Prefix
            backend:
              service:
                name: webbff-service-svc
                port:
                  number: 8000
          - path: '/api/identity/?(.*)'
            pathType: Prefix
            backend:
              service:
                name: identity-service-svc
                port:
                  number: 8000
However, when I try to access the route https://meraretail.dev/webbff/api/identity/test in Postman, I receive a 502 Bad Gateway error.
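One thing worth noting (my assumption, not something stated in the question): inside the cluster the webbff pod does not have to hop back out through the ingress controller at all, because identity-service-svc is resolvable directly through cluster DNS, and the ingress-nginx controller Service normally listens on ports 80/443 rather than 8000. A minimal sketch of the forwarding handler under those assumptions:

// Hypothetical variant: call the identity service's own ClusterIP Service
// instead of the ingress controller. Assumes both services run in the same
// namespace and identity-service-svc listens on port 8000.
app.use('/webbff/api/identity', (req, res) => {
  const url = `http://identity-service-svc:8000/api/identity${req.url}`;
  axios
    .get(url)
    .then((response) => res.json(response.data))
    .catch((error) => {
      console.log(error.message);
      res.status(500).json({ error: error.message });
    });
});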

Related

How to send data from an Express server to a MongoDB database in Kubernetes using the service name

All my pods are running. I am testing the server, the client, and the database; with MongoDB Atlas the app works, but when I switch to the MongoDB pod I get a 400 (bad request) response from the server.
MongoDB itself works fine. I tested it with the shell inside the pod:
kubectl exec -it <mongodb-pod> -- /bin/sh
mongo -u user -p password
show dbs   (works)
admin   0.000GB
config  0.000GB
local   0.000GB
test    0.000GB
db.test.find()
{ "_id" : ObjectId("62fb942345a2c5286fc7455f"), "todo" : "first element in this database" }
// controller server.js, before I pushed it to Docker Hub
const Todo = require('./models')

const add = async (req, res) => {
  try {
    const { todo } = req.body
    const new_todo = await Todo.insertOne({ todo: todo })
    return res.status(200)
  } catch (err) {
    return res.status(400)
  }
}

const all = async (req, res) => {
  try {
    const todo = await Todo.find()
    return res.status(200).json({ todo })
  } catch (err) {
    return res.status(404)
  }
}

module.exports = {
  add,
  all
}
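A side note (my observation, not something stated in the original post): in Express, res.status(200) on its own never sends a response, so the add handler above leaves the request hanging. A minimal sketch that actually ends the response, assuming Todo is a Mongoose model:

// Hypothetical corrected handler: send a JSON body so the client gets a reply.
const add = async (req, res) => {
  try {
    const { todo } = req.body
    const newTodo = await Todo.create({ todo }) // Model.create() inserts a single document
    return res.status(200).json({ todo: newTodo })
  } catch (err) {
    return res.status(400).json({ error: err.message })
  }
}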
//****************************************************
mongodb service.yaml
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  selector:
    app: mongodb
  ports:
    - protocol: TCP
      port: 27017
      targetPort: 27017
/*************************************
I'm not sure about one thing: the URL that Express should use to connect to MongoDB through the ClusterIP service:
"mongodb://mongodb-service:27017"
/*************************************
require('dotenv').config()
const express = require('express')
const cors = require('cors')
const mongoose = require('mongoose')

const app = express()
const port = process.env.PORT
// The Atlas URL below works fine for testing, but replacing it with
// "mongodb://mongodb-service:27017" does not work.
const url = process.env.URL || 'mongodb+srv://najib:123#cluster0.z8y8j9v.mongodb.net/microTest?retryWrites=true&w=majority'
const myRoutes = require('./routes')

app.use(cors())
app.use(express.json())

mongoose
  .connect(url, {
    useNewUrlParser: true,
    useUnifiedTopology: true
  })
  .then(() => {
    console.log("MongoDB successfully connected")
    app.use('/', myRoutes)
  })
  .catch(err => console.log(err))

app.listen(port, () => {
  console.log(`Server is running at port ${port}`)
})
mongodb deployment
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongodb-pv
  labels:
    app: mongo-db
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /data/pv/wp-html
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodb-pvc
  labels:
    app: mongo-db
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb
  labels:
    app: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongodb
          image: mongo
          ports:
            - containerPort: 27017
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              valueFrom:
                secretKeyRef:
                  name: creds
                  key: username
            - name: MONGO_INITDB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: creds
                  key: password
          volumeMounts:
            - name: mongodb-storage
              mountPath: /data/db
      volumes:
        - name: mongodb-storage
          persistentVolumeClaim:
            claimName: mongodb-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  type: NodePort
  selector:
    app: mongodb
  ports:
    - protocol: TCP
      port: 27017
      targetPort: 27017
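One more assumption-flagged sketch: the deployment above enables root credentials through MONGO_INITDB_ROOT_USERNAME/PASSWORD, so a bare mongodb://mongodb-service:27017 URI will typically fail once authentication is required. A connection string that passes those credentials (the env variable names and the microTest database name are only examples, not from the original post) could look like this:

// Minimal sketch, not the original code: authenticate against the root user
// created by the MONGO_INITDB_ROOT_* variables. authSource=admin is needed
// because that root user lives in the admin database.
require('dotenv').config()
const mongoose = require('mongoose')

const user = encodeURIComponent(process.env.MONGO_USER)     // hypothetical env var holding the secret's username
const pass = encodeURIComponent(process.env.MONGO_PASSWORD) // hypothetical env var holding the secret's password
const url = `mongodb://${user}:${pass}@mongodb-service:27017/microTest?authSource=admin`

mongoose
  .connect(url, { useNewUrlParser: true, useUnifiedTopology: true })
  .then(() => console.log('MongoDB successfully connected'))
  .catch((err) => console.log(err))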

Facing 400 Bad Request (Session ID Unknown) using Socket IO with Kubernetes Ingress

I'm trying to run Socket.IO with multiple nodes using the Socket.IO MongoDB adapter, but I'm receiving 400 Bad Request when deploying on my Kubernetes cluster. Both my API and frontend are on the same domain. The service works fine in Firefox but fails in Google Chrome and Edge.
Browser Error Image
Server
const server = app.listen(port, () => {
  console.log(`Server running on port ${port}`);
});

const io = require("socket.io")(server, {
  path: "/appnotification/socket.io",
  cors: {
    origin: ["http://localhost:3000", process.env.SETTYL_LIVE],
    methods: ["GET", "POST"],
    transports: ["websocket", "polling"],
    credentials: true,
  }
});

mongoConnect(async () => {
  const client = await getDbClient();
  const mongoCollection = client
    .db("notificationengine_prod")
    .collection("socket_events");
  await mongoCollection.createIndex(
    { createdAt: 1 },
    { expireAfterSeconds: 3600, background: true }
  );
  io.adapter(
    createAdapter(mongoCollection, {
      addCreatedAtField: true,
    })
  );
  io.on("connect", (socket) => sockets(socket, io));
  processConsumer(io);
});
Client
const socket = io.connect(process.env.SOCKET_SERVER_URL, {
  path: "/appnotification/socket.io",
  withCredentials: true,
});
Kubernetes Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
    nginx.ingress.kubernetes.io/session-cookie-path: /?(.*)
    nginx.ingress.kubernetes.io/session-cookie-samesite: None
    nginx.ingress.kubernetes.io/session-cookie-secure: "true"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/websocket-services: notificationhandlerconsumer
spec:
  rules:
    - host: sample.golive.com
      http:
        paths:
          - backend:
              service:
                name: notificationhandlerconsumer
                port:
                  number: 8000
            path: /appnotification/?(.*)
            pathType: Prefix
Socket deployment YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "notificationhandlerconsumer"
spec:
  replicas: 3
  selector:
    matchLabels:
      app: "notificationhandlerconsumer"
  template:
    metadata:
      labels:
        app: "notificationhandlerconsumer"
    spec:
      containers:
        - name: "notificationhandlerconsumer"
          image: "sample.azurecr.io/notificationhandlerconsumer"
          ports:
            - containerPort: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: "notificationhandlerconsumer"
  labels:
    app: "notificationhandlerconsumer"
spec:
  type: ClusterIP
  ports:
    - port: 8000
      targetPort: 8000
      protocol: TCP
      name: http
  selector:
    app: "notificationhandlerconsumer"

Kubernetes accessing one pod url from another

I am creating a proxy server with Node.js and want to proxy requests coming into one pod on to other pods in a single-node Kubernetes cluster. This is my Node.js code:
const express = require("express");
const httpProxy = require("express-http-proxy");
const app = express();
const serviceone = httpProxy("serviceone:3000"); // I am using service names here but its not working

// Authentication
app.use((req, res, next) => {
  // TODO: my authentication logic
  console.log("Always.....");
  next();
});

app.get("/", (req, res, next) => {
  res.json({ message: "Api Gateway Working" });
});

// Proxy request
app.get("/:data/api", (req, res, next) => {
  console.log("Request Recieved");
  serviceone(req, res, next);
});

app.listen(5000, () => {
  console.log("Api Gateway Running");
});
And these are my services.yml files
apiVersion: apps/v1
kind: Deployment
metadata:
  name: serviceone
  labels:
    app: serviceone
spec:
  replicas: 1
  selector:
    matchLabels:
      app: serviceone
  template:
    metadata:
      labels:
        app: serviceone
    spec:
      containers:
        - name: serviceone
          image: swa/serviceone
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: serviceone
spec:
  selector:
    app: serviceone
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000
      nodePort: 31000
  type: LoadBalancer
What name should I use in httpProxy so that it can proxy requests?
I have tried serviceone:3000, http://serviceone:3000, and http://localhost:3000 (which won't work for different pods). Any help would be really appreciated.
Edit - Node.js apigateway deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apigateway
  labels:
    app: apigateway
spec:
  replicas: 1
  selector:
    matchLabels:
      app: apigateway
  template:
    metadata:
      labels:
        app: apigateway
    spec:
      containers:
        - name: apigateway
          image: swa/apigateway3
          ports:
            - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: apigateway
spec:
  selector:
    app: apigateway
  ports:
    - protocol: TCP
      port: 5000
      targetPort: 5000
      nodePort: 31005
  type: LoadBalancer
In my Node.js application I changed the URL to
const serviceone = httpProxy('serviceone.default.svc.cluster.local:3000');
following https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#services, but still no luck.
I am getting the error
Error: getaddrinfo EAI_AGAIN servicethree.sock-shop.svc.cluster.local
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:66:26)
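A hedged sketch of what the proxy targets could look like (my assumptions: the API gateway and serviceone run in the same namespace, the Service is named serviceone on port 3000, and the sock-shop namespace below is only an example that must match where the target Service actually lives):

const httpProxy = require("express-http-proxy");

// Same namespace: the short Service name resolves through cluster DNS.
const serviceone = httpProxy("http://serviceone:3000");

// Different namespace: qualify the name with the Service's own namespace;
// EAI_AGAIN usually means this DNS name does not resolve from the gateway pod.
const servicethree = httpProxy("http://servicethree.sock-shop.svc.cluster.local:3000");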

How do you have a client deployment communicate with the api deployment in kubernetes

All I ever get are CORS errors, both on localhost and in the cloud. It works if I manually type in localhost, or if I manually get the service's external IP and put it into the k8s deployment file before I deploy, but automating this is impossible if I have to launch the services, get the external IP, and then put it into the configs before every launch.
API service
apiVersion: v1
kind: Service
metadata:
  labels:
    app: api
  name: api-service
spec:
  ports:
    - port: 8080 # expose the service on port 8080
      protocol: TCP
      targetPort: 8080 # our nodejs app listens on port 8080
  selector:
    app: api # select this application to service
  type: LoadBalancer
status:
  loadBalancer: {}
Client Service
apiVersion: v1
kind: Service
metadata:
  name: client-service
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  selector:
    app: client
  type: LoadBalancer
status:
  loadBalancer: {}
API deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: api
  name: api-deployment
spec:
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - image: mjwrazor/docker-js-stack-api:latest
          name: api-container
          imagePullPolicy: IfNotPresent
          resources: {}
          stdin: true
          tty: true
          workingDir: /app
          ports:
            - containerPort: 8080
          args:
            - npm
            - run
            - start
          envFrom:
            - configMapRef:
                name: server-side-configs
      restartPolicy: Always
      volumes: null
      serviceAccountName: ""
status: {}
Client Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: client
  name: client-deployment
spec:
  selector:
    matchLabels:
      app: client
  template:
    metadata:
      labels:
        app: client
    spec:
      restartPolicy: Always
      serviceAccountName: ""
      containers:
        - image: mjwrazor/docker-js-stack-client:latest
          name: client-container
          imagePullPolicy: IfNotPresent
          resources: {}
          ports:
            - containerPort: 80
status: {}
I tried adding an ingress
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, DELETE, OPTIONS"
    nginx.ingress.kubernetes.io/cors-allow-origin: http://client-service.default
    nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
spec:
  rules:
    - host: api-development.default
      http:
        paths:
          - backend:
              serviceName: api-service
              servicePort: 8080
But that didn't help either. Here is the server.js:
const express = require("express");
const bodyParser = require("body-parser");
const cors = require("cors");
const app = express();

app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: true }));
app.use(cors());

app.get("/", (req, res) => {
  res.json({ message: "Welcome" });
});

require("./app/routes/customer.routes.js")(app);

// set port, listen for requests
const PORT = process.env.PORT || 8080;
app.listen(PORT, () => {
  console.log(`Server is running on port ${PORT}.`);
});
But like I said, I am trying to get this to resolve via the hostnames of the services rather than the external IP - is this even possible, or did I misunderstand something along the way?
The client sends an axios request. I cannot use environment variables, since you can't inject environment variables from k8s after the project has been built through webpack and Docker into an image. I did find a really hacky way of creating a file with window global variables and then having k8s overwrite that file with new window variables, but again I would have to get the external IP of the API first.
As we discussed in the comments, you need to get a real domain name in order to make this work, since automatic DNS resolution in your case basically requires it.
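A hedged sketch of one common pattern (my assumption, not part of the answer above): put the client and the API behind the same ingress host and let the browser call the API with a relative path, so no external IP or build-time variable is needed and CORS never comes into play. The /api/customers path below is hypothetical:

// Hypothetical client-side call, assuming the ingress routes "/" to
// client-service and "/api" (rewritten as needed) to api-service on one host.
import axios from "axios";

export function fetchCustomers() {
  // A relative URL resolves against whatever host served the client bundle.
  return axios.get("/api/customers").then((res) => res.data);
}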

Minikube with Istio Service Unavailable (http status 503) Node.js connecting to Etcd

I have been working on a simple Node.js application that SETs and GETs a key from etcd using Istio to connect the two services together. I have tried a few variations but keep seeing the same error returned.
nodeAppTesting failed(etcd-operator) ->{"errors":[{"server":"http://etcd-operator:2379","httperror":null,"httpstatus":503,"httpbody":"upstream connect error or disconnect/reset before headers","response":{"statusCode":503,"body":"upstream connect error or disconnect/reset before headers","headers":{"content-length":"57","content-type":"text/plain","date":"Thu, 08 Jun 2017 17:17:04 GMT","server":"envoy","x-envoy-upstream-service-time":"5"},"request":{"uri":{"protocol":"http:","slashes":true,"auth":null,"host":"etcd-operator:2379","port":"2379","hostname":"etcd-operator","hash":null,"search":null,"query":null,"pathname":"/v2/keys/testKey","path":"/v2/keys/testKey","href":"http://etcd-operator:2379/v2/keys/testKey"},"method":"GET","headers":{"accept":"application/json"}}},"timestamp":"2017-06-08T17:17:04.544Z"}],"retries":0}
Looking at the proxy logs, I can see that client and server proxies are involved in the communication (and this is verified I think in seeing envoy in the server header).
Attaching the Node.js app and the deployment.yaml.
server.js
var http = require('http');
var Etcd = require('node-etcd');
var fs = require('fs');
var httpClient = require('request');

var handleRequest = function(request, response) {
  var scheme = "http";
  var ipAddress = "etcd-operator";
  var port = "2379";
  var connectionAddress = scheme + "://" + ipAddress + ":" + port;
  console.log('Received request for URL: ' + request.url + " connecting to " + connectionAddress);
  var etcd = new Etcd([connectionAddress] /*, options */);
  etcd.set("testKey", "foo");
  etcd.get("testKey", function(err, res) {
    if (!err) {
      response.writeHead(200);
      response.write("nodeAppTesting(" + ipAddress + ") ->" + JSON.stringify(res));
      response.end();
    } else {
      response.writeHead(500);
      response.write("nodeAppTesting failed(" + ipAddress + ") ->" + JSON.stringify(err));
      console.log("Encountered error during runtime", JSON.stringify(err));
      response.end();
    }
  });
};

var www = http.createServer(handleRequest);
www.listen(8080);
console.log("App up and running on port 8080");
deployment.yaml
apiVersion: v1
kind: Service
metadata:
  name: etcd-node
  labels:
    app: etcd-node
spec:
  ports:
    - port: 8080
      name: http
  selector:
    app: etcd-node
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: etcd-node-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: etcd-node
    spec:
      containers:
        - name: etcd-node
          image: todkap/etcd-node:v1
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
---
##################################################################################################
# etcd service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: etcd-operator
  labels:
    app: etcd-operator
spec:
  ports:
    - port: 2379
      targetPort: 2379
      name: http
  selector:
    app: etcd-operator
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: etcd-operator
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: etcd-operator
        app: etcd-operator
        version: v1
    spec:
      containers:
        - name: etcd-operator
          image: quay.io/coreos/etcd-operator:v0.2.6
          imagePullPolicy: IfNotPresent
          env:
            - name: MY_POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          ports:
            - containerPort: 2379
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gateway2
  annotations:
    kubernetes.io/ingress.class: "istio"
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: etcd-node
              servicePort: 8080
    - http:
        paths:
          - path: /
            backend:
              serviceName: etcd-operator
              servicePort: 2379
          - path: /v2/keys/*
            backend:
              serviceName: etcd-operator
              servicePort: 2379
---
I was able to resolve the issues reported here. I will be publishing a recipe demonstrating the flow sometime this week; for now, we can consider this closed. In the future I will move to the forums or post on the issue tracker. Be on the lookout for the article (I will update this post with the link when available).
Thanks for the help, guidance and suggestions.
The main issues were consistency in referencing the etcd service, consistency in referencing my node app as a Deployment, Service and Ingress, and finally exposing the NodePort.
Update
Published an article demonstrating the working flow.
This is probably a better question for https://groups.google.com/forum/#!forum/istio-users or https://github.com/istio/issues/issues, but it looks like your application isn't up - can you run kubectl get pods and check that nothing is still pending?
