I'm trying to learn about multi-path routing in Kubernetes Ingress. I'm using minikube for this tutorial, and I created a simple Web API using Node.js.
Node.js Code
In this Node.js app, I created a simple Web API with routing and a controller.
server.js
const express = require('express');
const routes = require('./routes/tea'); // import the routes

const app = express();
app.use(express.json());
app.use('/', routes); // use the routes

const listener = app.listen(process.env.PORT || 3000, () => {
  console.log('Your app is listening on port ' + listener.address().port);
});
routes/tea.js
const express = require('express');
const router = express.Router();
const teaController = require('../controllers/tea');
router.get('/tea', teaController.getAllTea);
router.post('/tea', teaController.newTea);
router.delete('/tea', teaController.deleteAllTea);
router.get('/tea/:name', teaController.getOneTea);
router.post('/tea/:name', teaController.newComment);
router.delete('/tea/:name', teaController.deleteOneTea);
module.exports = router;
controllers/tea.js
const os = require('os');
//GET '/tea'
const getAllTea = (req, res, next) => {
  res.json({ message: "GET all tea, " + os.hostname() });
};

//POST '/tea'
const newTea = (req, res, next) => {
  res.json({ message: "POST new tea, " + os.hostname() });
};

//DELETE '/tea'
const deleteAllTea = (req, res, next) => {
  res.json({ message: "DELETE all tea, " + os.hostname() });
};

//GET '/tea/:name'
const getOneTea = (req, res, next) => {
  res.json({ message: "GET 1 tea, os: " + os.hostname() + ", name: " + req.params.name });
};

//POST '/tea/:name'
const newComment = (req, res, next) => {
  res.json({ message: "POST 1 tea comment, os: " + os.hostname() + ", name: " + req.params.name });
};

//DELETE '/tea/:name'
const deleteOneTea = (req, res, next) => {
  res.json({ message: "DELETE 1 tea, os: " + os.hostname() + ", name: " + req.params.name });
};

//export controller functions
module.exports = {
  getAllTea,
  newTea,
  deleteAllTea,
  getOneTea,
  newComment,
  deleteOneTea
};
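With the app running locally (node server.js), the routes can be exercised directly with curl; for example (earl-grey is just an example name):
curl localhost:3000/tea                  # GET all tea
curl -X POST localhost:3000/tea          # POST new tea
curl localhost:3000/tea/earl-grey        # GET one tea by name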
Dockerfile
After that, I created a Docker image using this Dockerfile:
FROM node:18.9.1-slim
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD [ "node", "server.js" ]
Kubernetes Manifest
Then I created a ReplicaSet and a Service for this Docker image.
foo-replicaset.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: foo
spec:
  selector:
    matchLabels:
      app: foo
  replicas: 3
  template:
    metadata:
      labels:
        app: foo
    spec:
      containers:
        - name: foo
          image: emriti/tea-app:1.0.0
          ports:
            - name: http
              containerPort: 3000
              protocol: TCP
foo-svc-nodeport.yaml
apiVersion: v1
kind: Service
metadata:
  name: foo-nodeport
spec:
  type: NodePort
  ports:
    - port: 3000
      targetPort: 3000
      nodePort: 31234
  selector:
    app: foo
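Before involving the Ingress, the Service can be sanity-checked on its own; on minikube, something like:
# prints a reachable URL for the NodePort service, then hits the API through it
curl "$(minikube service foo-nodeport --url)/tea"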
all-ingress.yaml
Ingress for both the foo and bar backends:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: foobar
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
    - host: foobar.com
      http:
        paths:
          - path: /foo
            pathType: Prefix
            backend:
              service:
                name: foo-nodeport
                port:
                  number: 3000
          - path: /bar
            pathType: Prefix
            backend:
              service:
                name: bar-nodeport
                port:
                  number: 3000
Additional setup
I also did these:
- added 127.0.0.1 foobar.com to /etc/hosts
- ran minikube tunnel
After that I ran curl foobar.com/foo/tea and got this error:
curl : Cannot GET /
At line:1 char:1
+ curl foobar.com/foo
+ ~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidOperation: (System.Net.HttpWebRequest:HttpWebRequest) [Invoke-WebRequest], WebException
+ FullyQualifiedErrorId : WebCmdletWebResponseException,Microsoft.PowerShell.Commands.InvokeWebRequestCommand
I'm wondering if someone has run into a similar problem and already has an answer for it. Secondly, how do I debug the Ingress when I hit issues like this?
The code and manifests can be found in this repo.
Thank you!
Your problem is here:
annotations:
  nginx.ingress.kubernetes.io/rewrite-target: /$1
You haven't defined any capture groups in your paths, so this directive rewrites every request to /. If a client asks for /foo/tea/darjeeling, the backend gets a request for /. If a client requests /foo/this/does/not/exist, the backend gets a request for /.
To make this work, you need to add appropriate capture groups to your rules:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: foobar
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
    - host: foobar.com
      http:
        paths:
          - path: /foo/(.*)
            pathType: Prefix
            backend:
              service:
                name: foo-nodeport
                port:
                  number: 3000
          - path: /bar/(.*)
            pathType: Prefix
            backend:
              service:
                name: bar-nodeport
                port:
                  number: 3000
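With the capture groups in place, only the captured suffix reaches the backend, so:
curl foobar.com/foo/tea    # rewritten to /tea on the foo service
curl foobar.com/bar/tea    # rewritten to /tea on the bar service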
I suspect you would have found this much easier to debug if your application logged incoming requests; you might want to look into that. I figured this out by adding a sidecar proxy to your pods that logged requests, which made the problem relatively obvious (I could see that every request was for / regardless of what URL the client was using).
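For example, a minimal request-logging middleware for the Express app above (a sketch, not from the original repo; drop it into server.js before the routes are mounted):
// Log every incoming request before any routing happens,
// so you can see exactly what path the ingress forwarded.
app.use((req, res, next) => {
  console.log(`${new Date().toISOString()} ${req.method} ${req.url}`);
  next();
});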
There's more documentation about path rewriting with the Nginx ingress controller here.
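As for the second part of the question - debugging the Ingress itself - the controller's events and logs are usually the quickest route. These are standard kubectl commands (the namespace may differ depending on how the controller was installed; minikube's ingress addon uses ingress-nginx):
kubectl describe ingress foobar                   # events, rules, resolved backends
kubectl get pods -n ingress-nginx                 # find the controller pod
kubectl logs -n ingress-nginx <controller-pod>    # access log shows every request the controller handles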
Unrelated to your question:
You probably want to update your manifests to create Deployment resources instead of ReplicaSets.
For the way you've configured things here, you don't need NodePort services; plain ClusterIP services are enough, since only the ingress controller talks to them.
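Sketching both suggestions against the foo manifests above (same labels, image, and service name):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: foo
  template:
    metadata:
      labels:
        app: foo
    spec:
      containers:
        - name: foo
          image: emriti/tea-app:1.0.0
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: foo-nodeport
spec:
  # type defaults to ClusterIP, which is all the ingress needs
  ports:
    - port: 3000
      targetPort: 3000
  selector:
    app: foo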
Related
I've created 2 services using Node/Express:
webbff-service: all incoming HTTP requests to the Express server pass through this service
identity-service: handles user auth and other user details
The webbff service passes the request on to the identity service if the URL matches the respective pattern.
webbff-service: server.js file:
const dotEnv = require('dotenv');
dotEnv.config();
dotEnv.config({ path: `.env.${process.env.NODE_ENV}` });

const express = require('express');
const app = express();

app.use('/webbff/test', (req, res) => {
  res.json({ message: 'web bff route working OK!' });
});

const axios = require('axios');
app.use('/webbff/api/identity', (req, res) => {
  console.log(req.url);
  const url = `http://ingress-nginx-controller.ingress-nginx.svc.cluster.local:8000/api/identity${req.url}`;
  axios
    .get(url)
    .then((response) => {
      res.json(response.data);
    })
    .catch((error) => {
      console.log(error);
      res.status(500).json({ error });
    });
});

// Error handler middleware
const errorMiddleware = require('./error/error-middleware');
app.use(errorMiddleware);

const PORT = process.env.SERVER_PORT || 8000;

const startServer = async () => {
  app
    .listen(PORT, () => {
      console.log(`Server started on port ${PORT}`);
    })
    .on('error', (err) => {
      console.error(err);
      process.exit();
    });
};

startServer();
Deployment file of webbff-service:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webbff-service-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webbff-service
  template:
    metadata:
      labels:
        app: webbff-service
    spec:
      containers:
        - name: webbff-service
          image: ajhavery/webbff-service
---
apiVersion: v1
kind: Service
metadata:
  name: webbff-service-svc
spec:
  selector:
    app: webbff-service
  ports:
    - name: webbff-service
      protocol: TCP
      port: 8000
      targetPort: 8000
The identity service is a simple Node/Express app which accepts all URLs of the format /api/identity.
It has a test route, /api/identity/test, to test the routes; see the sketch below.
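The question doesn't show that service's code; a minimal sketch of what it might look like (hypothetical, assuming Express like the rest of the stack):
const express = require('express');
const app = express();

// The test route described above
app.get('/api/identity/test', (req, res) => {
  res.json({ message: 'identity service route working OK!' });
});

app.listen(8000, () => console.log('identity service listening on 8000'));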
Deployment file of Identity service:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: identity-service-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: identity-service
  template:
    metadata:
      labels:
        app: identity-service
    spec:
      containers:
        - name: identity-service
          image: ajhavery/identity-service
---
apiVersion: v1
kind: Service
metadata:
  name: identity-service-svc
spec:
  selector:
    app: identity-service
  ports:
    - name: identity-service
      protocol: TCP
      port: 8000
      targetPort: 8000
Both these services are deployed on meraretail.dev - a local domain I set up by modifying /etc/hosts:
127.0.0.1 meraretail.dev
skaffold.yaml file used for deployment on a local Kubernetes cluster with 1 node:
apiVersion: skaffold/v2beta28
kind: Config
deploy:
  kubectl:
    manifests:
      - ./kubernetes/*
build:
  local:
    push: false # don't push images to dockerhub
  artifacts:
    - image: ajhavery/webbff-service
      context: webbff-service
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - dest: .
            src: 'src/**/*.js'
    - image: ajhavery/identity-service
      context: identity-service
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - dest: .
            src: 'src/**/*.js'
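With this config, the whole stack is built, deployed, and hot-synced with a single command:
skaffold dev    # builds both images, applies ./kubernetes/*, re-syncs on src changes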
Routing between the services is handled using an ingress-nginx Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-public-svc
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
  rules:
    - host: meraretail.dev
      http:
        paths:
          - path: '/webbff/?(.*)'
            pathType: Prefix
            backend:
              service:
                name: webbff-service-svc
                port:
                  number: 8000
          - path: '/api/identity/?(.*)'
            pathType: Prefix
            backend:
              service:
                name: identity-service-svc
                port:
                  number: 8000
However, when I try to access the route https://meraretail.dev/webbff/api/identity/test in Postman, I receive an error - BAD gateway.
I have been working through Microservices with Node JS and React (a Udemy course), but unfortunately I'm stuck on a Kubernetes problem.
I have created a posts-srv service,
posts-srv.yaml
apiVersion: v1
kind: Service
metadata:
  name: posts-srv
spec:
  type: NodePort
  selector:
    app: posts
  ports:
    - name: posts
      protocol: TCP
      port: 4000
      targetPort: 4000
posts.js (index.js)
const { randomBytes } = require('crypto');
const express = require('express');
const cors = require('cors');
const axios = require('axios');

const app = express();
app.use(express.json());
app.use(cors());

const posts = {};

app.get('/test', (req, res) => {
  res.status(200).json({
    success: true
  });
});

app.get('/posts', (req, res) => {
  res.send(posts);
});

app.post('/posts', async (req, res) => {
  console.log("app.post('/posts',(req,res)=> ");
  const id = randomBytes(4).toString('hex');
  const { title } = req.body;
  posts[id] = {
    id, title
  };
  await axios.post('http://localhost:4005/events', {
    type: 'PostCreated',
    data: {
      id, title
    }
  });
  res.status(201).send(posts[id]);
});

app.post('/events', (req, res) => {
  console.log('received event', req.body.type);
  res.send({});
});

app.listen(4000, () => {
  console.log('server started for posts on 4000');
});
and the Dockerfile is
FROM node:alpine
WORKDIR /app
COPY package.json ./
RUN npm install
COPY ./ ./
CMD ["npm","start"]
Then I used the command kubectl apply -f posts-srv.yaml and the service was successfully created, but it is not accessible from my computer's browser.
service details
Name: posts-srv
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=posts
Type: NodePort
IP Families: <none>
IP: 10.108.229.174
IPs: <none>
Port: posts 4001/TCP
TargetPort: 4001/TCP
NodePort: posts 30095/TCP
Endpoints: <none>
External Traffic Policy: Cluster
I have accessed it through localhost:30095 but am getting the same error. Please suggest some solutions.
I'm trying to connect to a socket that is inside a container and deployed on Kubernetes.
Locally everything works fine, but when deployed it throws an error on connect. I tried different options, but with no success.
Client code
const ENDPOINT = "https://traveling.dev/api/chat"; // this will go to the endpoint of the service where the socket is running
const chatSocket = io(ENDPOINT, {
  rejectUnauthorized: false,
  forceNew: true,
  secure: false,
});

chatSocket.on("connect_error", (err) => {
  console.log(err);
  console.log(`connect_error due to ${err.message}`);
});

console.log("CS", chatSocket);
Server code
const app = express();
app.set("trust proxy", true);
app.use(cors());

const server = http.createServer(app);
const io = new Server(server, {
  cors: {
    origin: "*",
    methods: ["*"],
    allowedHeaders: ["*"],
  },
});

io.on("connection", (socket) => {
  console.log("Socket successfully connected with id: " + socket.id);
});

const start = async () => {
  server.listen(3000, () => {
    console.log("Started");
  });
};

start();
The thing is, the code is mostly irrelevant here because locally it all works fine, but I posted it anyway.
What can cause this while containerizing it and putting it on Kubernetes?
And the console log just says server error
Error: server error
at Socket.onPacket (socket.js:397)
at XHR.push../node_modules/component-emitter/index.js.Emitter.emit (index.js:145)
at XHR.onPacket (transport.js:107)
at callback (polling.js:98)
at Array.forEach (<anonymous>)
at XHR.onData (polling.js:102)
at Request.push../node_modules/component-emitter/index.js.Emitter.emit (index.js:145)
at Request.onData (polling-xhr.js:232)
at Request.onLoad (polling-xhr.js:283)
at XMLHttpRequest.xhr.onreadystatechange (polling-xhr.js:187)
Does anyone have any suggestions on what may cause this and how to fix it?
Also, any idea on how to get more information about the error would be appreciated.
This is the YAML file that creates the Deployment (which runs the Pod) and the Service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: chat-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: chat
  template:
    metadata:
      labels:
        app: chat
    spec:
      containers:
        - name: chat
          image: us.gcr.io/forward-emitter-321609/chat-service
---
apiVersion: v1
kind: Service
metadata:
  name: chat-srv
spec:
  selector:
    app: chat
  ports:
    - name: chat
      protocol: TCP
      port: 3000
      targetPort: 3000
I'm using a load balancer on GKE with nginx, whose IP address is mapped to traveling.dev.
This is how my ingress routing config looks:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, OPTIONS"
    nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "Access-Control-Allow-Origin: $http_origin";
spec:
  rules:
    - host: traveling.dev
      http:
        paths:
          - path: /api/chat/?(.*)
            backend:
              serviceName: chat-srv
              servicePort: 3000
          - path: /?(.*)
            backend:
              serviceName: client-srv
              servicePort: 3000
Thanks!
Nginx Ingress supports WebSocket proxying by default, but you need to configure it.
For this you need to add a custom configuration snippet annotation.
You can refer to this already-answered Stack Overflow question:
Nginx ingress controller websocket support
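For instance, the timeout annotations below are the standard ingress-nginx knobs usually raised for long-lived WebSocket connections (a sketch against the ingress above):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    # keep long-lived WebSocket connections open past nginx's 60s default
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"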
Apologies in advance for such a long question, I just want to make sure I cover everything...
I have a react application that is supposed to connect to a socket being run in a service that I have deployed to kubernetes. The service runs and works fine. I am able to make requests without any issue but I cannot connect to the websocket running in the same service.
I am able to connect to the websocket when I run the service locally and use the locahost uri.
My express service's server.ts file looks like:
import "dotenv/config";
import * as packageJson from "./package.json"
import service from "./lib/service";
const io = require("socket.io");
const PORT = process.env.PORT;
const server = service.listen(PORT, () => {
console.info(`Server up and running on ${PORT}...`);
console.info(`Environment = ${process.env.NODE_ENV}...`);
console.info(`Service Version = ${packageJson.version}...`);
});
export const socket = io(server, {
cors: {
origin: process.env.ACCESS_CONTROL_ALLOW_ORIGIN,
methods: ["GET", "POST"]
}
});
socket.on('connection', function(skt) {
console.log('User Socket Connected');
socket.on("disconnect", () => console.log(`${skt.id} User disconnected.`));
});
export default service;
When I run this, PORT is set to 8088 and Access-Control-Allow-Origin is set to *. Note that I'm using a RabbitMQ cluster deployed to Kubernetes; the Rabbit connection URI is the same one I use when running locally. RabbitMQ is NOT running on my local machine, so I know it's not an issue with my Rabbit deployment - it has to be something I'm doing wrong in connecting to the socket.
When I run the service locally, I'm able to connect in the react application with the following:
const io = require("socket.io-client");
const socket = io("ws://localhost:8088", { path: "/socket.io" });
And I see the "User Socket Connected" message and it all works as I expect.
When I deploy the service to Kubernetes though, I'm having some issues figuring out how to connect to the socket.
My Kubernetes Service:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 8088
  selector:
    app: my-service
My deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: project
          image: my-private-registry.com
          ports:
            - containerPort: 8088
      imagePullSecrets:
        - name: mySecret
And finally, my ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-service-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/enable-cors: "true" # Just added this to see if it helped
    nginx.ingress.kubernetes.io/cors-allow-origin: "*" # Just added this to see if it helped
    nginx.ingress.kubernetes.io/cors-allow-methods: PUT, GET, POST, OPTIONS, DELETE, PATCH # Just added this to see if it helped
spec:
  tls:
    - hosts:
        - my.host.com
      secretName: my-service-tls
  rules:
    - host: "my.host.com"
      http:
        paths:
          - pathType: Prefix
            path: "/project"
            backend:
              service:
                name: my-service
                port:
                  number: 80
I can connect to the service fine and get data, post data, etc., but I cannot connect to the websocket - I get either 404 or CORS errors.
Since the service is running on my.host.com/project, I assume that the socket is at the same uri. So I try to connect with:
const socket = io("ws://my.host.com", { path: "/project/socket.io" });
and also using wss://
const socket = io("wss://my.host.com", { path: "/project/socket.io" });
and I have an error being logged in the console:
socket.on("connect_error", (err) => {
console.log(`connect_error due to ${err.message}`);
});
both result in
polling-xhr.js?d33e:198 GET https://my.host.com/project/?EIO=4&transport=polling&t=NjWQ8Tc 404
websocket.ts?25e3:14 connect_error due to xhr poll error
I have tried all of the following and none of them work:
const socket = io("ws://my.host.com", { path: "/socket.io" });
const socket = io("wss://my.host.com", { path: "/socket.io" });
const socket = io("ws://my.host.com", { path: "/project" });
const socket = io("wss://my.host.com", { path: "/project" });
const socket = io("ws://my.host.com", { path: "/" });
const socket = io("wss://my.host.com", { path: "/" });
const socket = io("ws://my.host.com");
const socket = io("wss://my.host.com");
Again, this works when the service is run locally, so I must have missed something and any help would be extremely appreciated.
Is there a way to go on the Kubernetes pod and find where rabbit is being broadcast to?
In case somebody stumbles on this in the future and wants to know how to fix it: it turns out it was a really dumb mistake on my part.
In:
export const socket = io(server, {
  cors: {
    origin: process.env.ACCESS_CONTROL_ALLOW_ORIGIN,
    methods: ["GET", "POST"]
  },
});
I just needed to add path: "/project/socket.io" to the socket options, which makes sense.
And then, if anybody happens to run into the issue that followed: I was getting a 400 error on the POST to the websocket polling endpoint, so I set transports: [ "websocket" ] in my socket.io-client options and that seemed to fix it. The socket is now working and I can finally move on!
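Putting both fixes together, the relevant server and client pieces would look something like this (a sketch assembled from the description above):
// Server: serve the socket.io endpoint under the ingress path prefix
export const socket = io(server, {
  path: "/project/socket.io",
  cors: {
    origin: process.env.ACCESS_CONTROL_ALLOW_ORIGIN,
    methods: ["GET", "POST"]
  },
});

// Client: same path, and skip the HTTP long-polling handshake
const socket = io("wss://my.host.com", {
  path: "/project/socket.io",
  transports: ["websocket"],
});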
I have a Kubernetes cluster and an Express server serving a SPA. Currently, if I hit the http version of my website, it does not redirect to https, but I would like it to.
This is what I've tried -
import express from "express";

const PORT = 3000;
const path = require("path");
const app = express();
const router = express.Router();

const forceHttps = function(req, res, next) {
  const xfp =
    req.headers["X-Forwarded-Proto"] || req.headers["x-forwarded-proto"];
  if (xfp === "http") {
    console.log("host name");
    console.log(req.hostname);
    console.log(req.url);
    const redirectTo = `https://${req.hostname}${req.url}`;
    res.redirect(301, redirectTo);
  } else {
    next();
  }
};

app.get("/*", forceHttps);

// root (/) should always serve our server rendered page
// other static resources should be served as they are
const root = path.resolve(__dirname, "..", "build");
app.use(express.static(root, { maxAge: "30d" }));

app.get("/*", function(req, res, next) {
  if (
    req.method === "GET" &&
    req.accepts("html") &&
    !req.is("json") &&
    !req.path.includes(".")
  ) {
    res.sendFile("index.html", { root });
  } else {
    next();
  }
});

// tell the app to use the above rules
app.use(router);

app.listen(PORT, error => {
  console.log(`listening on ${PORT} from the server`);
  if (error) {
    console.log(error);
  }
});
This is what my Kubernetes config looks like:
apiVersion: v1
kind: Service
metadata:
  name: <NAME>
  labels:
    app: <APP>
  annotations:
    # Note that the backend talks over HTTP.
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: <CERT>
    # Only run SSL on the port named "https" below.
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
spec:
  type: LoadBalancer
  selector:
    app: <APP>
  ports:
    - port: 443
      targetPort: 3000
      protocol: TCP
      name: https
    - port: 80
      targetPort: 3000
      protocol: TCP
      name: http
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: <DEPLOYMENT_NAME>
  labels:
    app: <APP>
spec:
  replicas: 1
  selector:
    matchLabels:
      app: <APP>
  template:
    metadata:
      labels:
        app: <APP>
    spec:
      containers:
        - name: <CONTAINER_NAME>
          image: DOCKER_IMAGE_NAME
          imagePullPolicy: Always
          env:
            - name: VERSION_INFO
              value: "1.0"
            - name: BUILD_DATE
              value: "1.0"
          ports:
            - containerPort: 3000
I successfully hit the redirect...but the browser does not actually redirect. How do I get it to redirect from http to https?
Relatedly, from googling around I keep seeing that people are using an Ingress AND a Load Balancer - why would I need both?
When you create a LoadBalancer type Service in EKS, it will create either a classic or a network load balancer. Neither of these supports http to https redirection. You need an application load balancer (ALB), which does support it. In EKS, to use an ALB you need the AWS ALB ingress controller. Once you have the ingress controller set up, you can use annotations to redirect http to https:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: default
  name: ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-west-2:xxxx:certificate/xxxxxx
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
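The annotation only defines the redirect action; the ALB ingress controller pairs it with a rule that references the action by name, along the lines of (a sketch; your-app-service stands in for the real backend):
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: ssl-redirect
              servicePort: use-annotation
          - path: /*
            backend:
              serviceName: your-app-service
              servicePort: 80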