502 Bad Gateway on Node.js application deployed on Kubernetes cluster

I am deploying a Node.js application on Kubernetes. After deployment the pod is up and running, but when I try to access the application through the ingress it returns a 502 Bad Gateway error.
Dockerfile
FROM node:14
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
COPY package*.json ./
RUN npm install
# If you are building your code for production
# RUN npm ci --only=production
# Bundle app source
COPY . .
EXPOSE 3123
CMD [ "node", "index.js" ]
Deployment.yaml
---
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
  name: "node-development"
  namespace: "development"
spec:
  selector:
    matchLabels:
      app: "node-development"
  replicas: 1
  template:
    metadata:
      labels:
        app: "node-development"
    spec:
      containers:
        - name: "node-development"
          image: "xxx"
          imagePullPolicy: "Always"
          env:
            - name: "NODE_ENV"
              value: "development"
          ports:
            - containerPort: 47033
service.yaml
---
apiVersion: "v1"
kind: "Service"
metadata:
  name: "node-development-service"
  namespace: "development"
  labels:
    app: "node-development"
spec:
  ports:
    - port: 47033
      targetPort: 3123
  selector:
    app: "node-development"
ingress.yaml
---
apiVersion: "networking.k8s.io/v1"
kind: "Ingress"
metadata:
  name: "node-development-ingress"
  namespace: "development"
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: "/$1"
spec:
  rules:
    - host: "xxxx"
      http:
        paths:
          - path: "/node-development/(.*)"
            pathType: "ImplementationSpecific"
            backend:
              service:
                name: "node-development"
                port:
                  number: 47033
I am not able to access the application through the ingress, or even via the pod's cluster IP; nginx throws a 502 Bad Gateway.

The issue got resolved. I am using SSL in my application, and as a result the ingress was not forwarding requests to it at the given URL.
I needed to add the annotations below to the ingress.yaml file:
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
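For reference, a sketch of the ingress metadata with those annotations added (the spec rules stay exactly as posted). Note that ssl-passthrough is only honored when the ingress-nginx controller itself runs with the --enable-ssl-passthrough flag.
apiVersion: "networking.k8s.io/v1"
kind: "Ingress"
metadata:
  name: "node-development-ingress"
  namespace: "development"
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: "/$1"
    # Pass the TLS connection through to the pod instead of terminating it
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    # Proxy to the backend over HTTPS rather than plain HTTP
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"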

Related

I am trying to run my microservices with Kubernetes ingress, and nginx is responding with "Service Temporarily Unavailable"

This is my ingress yaml file
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
  namespace: ingress-nginx
spec:
  ingressClassName: nginx
  rules:
    - host: ticketing.dev
      http:
        paths:
          - pathType: Prefix
            backend:
              service:
                name: auth-srv
                port:
                  number: 5000
Whenever I go to ticketing.dev it shows that error, even though all of the services are working as expected and all of the pods are working just fine.
Following are my Service and Deployment YAML files:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
        - name: auth
          image: 9862672975/auth
          env:
            - name: JWT_KEY
              valueFrom:
                secretKeyRef:
                  name: jwt-secret
                  key: JWT_KEY
---
apiVersion: v1
kind: Service
metadata:
  name: auth-srv
spec:
  selector:
    app: auth
  ports:
    - name: auth
      protocol: TCP
      port: 5000
      targetPort: 5000
I am trying to build microservices with Node.js and Next.js. When I add both frontend and backend to the ingress it does not respond, and even after removing the frontend and running just the backend with the code above, it is not working.
You have not specified a path in your ingress file, only a pathType.
Below paths you want to add path: "/".
If you look at the Ingress reference, you may see that the path field is not marked as "required", but it carries this note:
Paths must begin with a '/' and must be present when using PathType with value "Exact" or "Prefix".
Since you have specified your pathType as Prefix, you need to include the path. In general, I would advise explicitly specifying both path and pathType whenever possible, rather than relying on defaults.
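A sketch of the corrected rule with the path added (everything else as posted):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
  namespace: ingress-nginx
spec:
  ingressClassName: nginx
  rules:
    - host: ticketing.dev
      http:
        paths:
          - path: /          # required when pathType is Prefix or Exact
            pathType: Prefix
            backend:
              service:
                name: auth-srv
                port:
                  number: 5000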

Path based routing with Nginx controller and gunicorn not working on AKS

I want to deploy my Flask app with Python 3.8, Gunicorn 20.1.0 and the NGINX ingress controller on AKS.
The ingress controller nginx-ingress 1.0.4 was installed on AKS using the official Azure tutorial: https://learn.microsoft.com/en-us/azure/aks/ingress-internal-ip?tabs=azure-cli
I used the following deployment yaml to deploy myapp
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: mycr.azurecr.io/myapp:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 80
          command: ["bash"]
          args: ["start_app.sh"]
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: ClusterIP
  ports:
    - port: 80
  selector:
    app: myapp
The start_app.sh starts my Flask app using
gunicorn --timeout 90 --bind 0.0.0.0:80 --workers 4 "app.main:create_app()"
The ingress route is configured as follows: (I tried several variants with rewrite-target using / or /$1)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: aks-stage-ingress
  namespace: stage
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
    - http:
        paths:
          - path: /myapp(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 80
The ingress route works fine if I use just the root path / instead of /myapp; then myapp finds all the necessary files.
With the NGINX ingress controller and the path /myapp, myapp loads, but it cannot find the static files under static, and the URLs generated by the app do not include the relative path /myapp.
Example
The Login link in my app is pointing to
myhost.com/login
instead of
myhost.com/myapp/login
This is similar to the problem described here: https://github.com/kubernetes/ingress-nginx/issues/7992.
Did any of you have the same problem and find a solution?
Thanks in advance.
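One avenue worth trying (a sketch, not a verified fix for this exact setup): keep the rewrite, but also tell the backend which prefix was stripped via the X-Forwarded-Prefix header, which ingress-nginx can set through an annotation. A proxy-aware app (for Flask, e.g. werkzeug's ProxyFix middleware with x_prefix=1) can then generate URLs under /myapp again.
annotations:
  kubernetes.io/ingress.class: nginx
  nginx.ingress.kubernetes.io/ssl-redirect: "false"
  nginx.ingress.kubernetes.io/use-regex: "true"
  nginx.ingress.kubernetes.io/rewrite-target: /$2
  # Sends X-Forwarded-Prefix: /myapp to the backend so the app can
  # prepend it when building absolute URLs and static file paths
  nginx.ingress.kubernetes.io/x-forwarded-prefix: "/myapp"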

Unable to access deployed Angular application on Google cloud

I have developed my project on Google Cloud using Node.js, Angular, MongoDB and Express. I have successfully built the authentication part with Express and Node.js. Now I am trying to integrate Angular. I have set up Ingress-NGINX on Google Cloud and am using the Google Cloud Shell to write the code.
I followed the steps below for setting up Ingress-NGINX on Google Cloud:
1. Create a project blog-dev
2. Create cluster blog-dev with 3 N1-g1 small instances in the us-central1-c zone
3. Navigate to https://kubernetes.github.io/ingress-nginx/deploy/#gce-gke
4. On the Google Cloud account, open the Cloud Shell and navigate to the BPB_MEAN_Framework directory in the terminal
5. Execute gcloud init, reinitialize the cluster, and select the account, project and region
6. Execute gcloud container clusters get-credentials blog-dev
7. Execute kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.3/deploy/static/provider/cloud/deploy.yaml to configure ingress-nginx
8. Go to Network Services -> Load Balancing and check that the Load Balancer has been created; note its IP
9. Open the hosts.ini file and update it as shown below:
   130.211.113.34 blog.dev
10. Create the JWT secret: kubectl create secret generic jwt-secret --from-literal=JWT_KEY=asdf
11. Run skaffold dev
12. Go to http://blog.dev/api/users/currentuser in a browser and get the 'Privacy Error' page; click 'Advanced' there
13. Type thisisunsafe on the keyboard
The various files are listed below
Listed below is the Kubernetes deployment yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: client
  template:
    metadata:
      labels:
        app: client
    spec:
      containers:
        - name: client
          image: us.gcr.io/blog-dev-326403/client:project-latest
---
apiVersion: v1
kind: Service
metadata:
  name: client-srv
spec:
  selector:
    app: client
  ports:
    - name: client
      protocol: TCP
      port: 4200
      targetPort: 4200
Listed below is the Ingress resource
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
  rules:
    - host: blog.dev
      http:
        paths:
          - path: /api/users/?(.*)
            backend:
              serviceName: auth-srv
              servicePort: 3100
          - path: /?(.*)
            backend:
              serviceName: client-srv
              servicePort: 4200
Listed below is the Skaffold yaml
apiVersion: skaffold/v2alpha3
kind: Config
deploy:
  kubectl:
    manifests:
      - ./infra/k8s/*
build:
  # local:
  #   push: false
  googleCloudBuild:
    projectId: blog-dev-326403
  artifacts:
    - image: us.gcr.io/blog-dev-326403/auth
      context: auth
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - src: 'src/**/*.ts'
            dest: .
    - image: us.gcr.io/blog-dev-326403/client
      context: client
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - src: 'src/**/*.ts'
            dest: .
The Angular folder structure is shown below.
[Screenshot: Angular project folder structure]
I added the Google Cloud Load Balancer IP followed by blog.dev to the hosts.ini file.
When I run skaffold dev, there are no errors, but when I try to access blog.dev, I get a 502 Bad Gateway.
When I navigate to the client directory, run npm start, and access the preview in Google Cloud Shell, I get my website as shown:
[Screenshot: application in preview mode in Google Cloud]
Please help... this is a showstopper for me.
Try opening the container port in the deployment:
ports:
  - containerPort: 80
The full deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: client
  template:
    metadata:
      labels:
        app: client
    spec:
      containers:
        - name: client
          image: us.gcr.io/blog-dev-326403/client:project-latest
          ports:
            - containerPort: 4200
I figured out the answer myself. Here is my solution.
Package.json
The package.json entry is given below
"prod": "ng build --prod",
Angular Dockerfile
# Stage 1: build the Angular app
FROM node:16.4.0 as build
WORKDIR /app
COPY package.json ./
RUN npm install
COPY . .
RUN npm run prod
# Stage 2: serve the compiled app with nginx
FROM nginx:1.19
COPY ./nginx/nginx.conf /etc/nginx/nginx.conf
COPY --from=build /app/dist/Angular-Mean/ /usr/share/nginx/html
I added an nginx configuration file, shown below:
worker_processes 1;

events {
  worker_connections 1024;
}

http {
  server {
    listen 80;
    server_name localhost;
    root /usr/share/nginx/html;
    index index.html index.htm;
    include /etc/nginx/mime.types;
    gzip on;
    gzip_min_length 1000;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types text/plain text/css application/json application/javascript
               application/x-javascript text/xml application/xml application/xml+rss
               text/javascript;
    # Route all paths to index.html so Angular's client-side router works
    location / {
      try_files $uri $uri/ /index.html;
    }
  }
}
The deployment file is shown below
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: client
  template:
    metadata:
      labels:
        app: client
    spec:
      containers:
        - name: client
          image: us.gcr.io/blog-dev-326403/client:project-latest
          ports:
            - containerPort: 4200
---
apiVersion: v1
kind: Service
metadata:
  name: client-srv
spec:
  selector:
    app: client
  type: LoadBalancer
  ports:
    - name: client
      protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 31000
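One detail worth noting about this solution: the final image serves the app with nginx on port 80 (see the nginx.conf above and the Service's targetPort of 80), so the containerPort: 4200 left over from the dev server is misleading, even though containerPort is informational and does not affect routing. A consistent sketch:
      containers:
        - name: client
          image: us.gcr.io/blog-dev-326403/client:project-latest
          ports:
            # nginx in the final image listens on 80, not the dev server's 4200
            - containerPort: 80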
The final artifact is the Ingress resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
  rules:
    - host: blog.dev
      http:
        paths:
          - path: /api/users/?(.*)
            backend:
              serviceName: auth-srv
              servicePort: 3100
          - path: /?(.*)
            backend:
              serviceName: client-srv
              servicePort: 80
The rest of the configuration remains the same. Use this configuration and navigate to https://blog.dev, and you get the output as shown: [Screenshot: blog website]

I am trying to deploy my Docker image from ACR to AKS. The pods are created properly, but I am getting ERR_CONNECTION_TIMED_OUT through the external IP

The same deployment and service YAML files work properly when I use a standard Docker image like nginx with its containerPort set to nginx's default port 80, but when I change the container port to 8080 I get the same timeout issue with it as well.
My deployment.yaml file -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-test-deployment
  labels:
    app: my-test-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-test-app
  template:
    metadata:
      labels:
        app: my-test-app
    spec:
      containers:
        - name: my-test-container
          image: javapoccr.azurecr.io/sushant-saurav/my-nest-app-with-docker
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
      imagePullSecrets:
        - name: acr-details
My service.yaml -
apiVersion: v1
kind: Service
metadata:
  name: my-test-service
  labels:
    app: my-test-app
spec:
  selector:
    app: my-test-app
  type: LoadBalancer
  ports:
    - port: 80
      protocol: TCP
      targetPort: 8080
There are two quick things that I would check/verify:
1. Is the test app configured to listen on 8080? The containerPort/targetPort should match what the app is configured to listen on.
2. Ensure that you have the most recent image. Without a tag you are using :latest, and with that the imagePullPolicy of IfNotPresent will not pull an updated image if a node already has an older one cached. I'd recommend changing the imagePullPolicy to Always.
-Dave
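A sketch of the container spec with both suggestions applied (the v1 tag is a placeholder for whatever tag you actually push):
containers:
  - name: my-test-container
    # Pin an explicit tag instead of relying on the implicit :latest
    image: javapoccr.azurecr.io/sushant-saurav/my-nest-app-with-docker:v1
    # Always pull, so a node's cached older image is never used
    imagePullPolicy: Always
    ports:
      - containerPort: 8080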

Pods do not resolve the domain names of a service through ingress

I have a problem: the pods in my minikube cluster are not able to reach the service through its domain name.
To run minikube I use the following commands (running on Windows 10):
minikube start --vm-driver hyperv;
minikube addons enable kube-dns;
minikube addons enable ingress;
This is my deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: hello-world
  name: hello-world
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      run: hello-world
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: hello-world
    spec:
      containers:
        - image: karthequian/helloworld:latest
          imagePullPolicy: Always
          name: hello-world
          ports:
            - containerPort: 80
              protocol: TCP
          resources: {}
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
This is the service.yaml:
apiVersion: v1
kind: Service
metadata:
  labels:
    run: hello-world
  name: hello-world
  namespace: default
  selfLink: /api/v1/namespaces/default/services/hello-world
spec:
  ports:
    - nodePort: 31595
      port: 80
      protocol: TCP
      targetPort: 80
  selector:
    run: hello-world
  sessionAffinity: None
  type: ExternalName
  externalName: minikube.local.com
status:
  loadBalancer: {}
This is my ingress.yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: minikube-local-ingress
spec:
  rules:
    - host: minikube.local.com
      http:
        paths:
          - path: /
            backend:
              serviceName: hello-world
              servicePort: 80
So, if I go inside the hello-world pod and from /bin/bash run curl minikube.local.com or nslookup minikube.local.com, the name does not resolve.
How can I make sure that the pods can resolve the DNS name of the service?
I know I can specify hostAliases in the deployment definition, but is there an automatic way that will update the DNS of Kubernetes?
So, you want to expose your app on Minikube? I've just tried it using the default ClusterIP service type (essentially, removing the ExternalName stuff you had) and with this YAML file I can see your service on https://192.168.99.100 where the Ingress controller lives:
The service now looks like so:
apiVersion: v1
kind: Service
metadata:
  labels:
    run: hello-world
  name: hello-world
spec:
  ports:
    - port: 80
      targetPort: 80
  selector:
    run: hello-world
And the ingress is:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: minikube-local-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host:
      http:
        paths:
          - path: /
            backend:
              serviceName: hello-world
              servicePort: 80
Note: Within the cluster your service is now available via hello-world.default (that's the DNS name assigned by Kubernetes within the cluster) and from the outside you'd need to map, say hello-world.local to 192.168.99.100 in your /etc/hosts file on your host machine.
Alternatively, if you change the Ingress resource to - host: hello-world.local then you can (from the host) reach your service using this FQDN like so: curl -H "Host: hello-world.local" 192.168.99.100.
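For completeness, since the question mentions hostAliases: pinning the name inside the pod spec would look roughly like the sketch below (the IP is a placeholder for wherever your ingress controller lives). For pod-to-service traffic, though, the built-in hello-world.default DNS name makes this unnecessary.
spec:
  hostAliases:
    - ip: "192.168.99.100"   # placeholder: the Minikube ingress IP
      hostnames:
        - "minikube.local.com"
  containers:
    - name: hello-world
      image: karthequian/helloworld:latest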
