Kubernetes Ingress 502 Bad Gateway Connection Refused - node.js

I'm trying to access my Angular front-end, deployed to an AKS cluster at path / with the service lawyerlyui-service. The cluster is using ingress-nginx deployed via Helm with the official chart (https://kubernetes.github.io/ingress-nginx).
I have other backend .NET Core services deployed and I can access those via the ingress.
However, when I try to access the Angular application at https://uat.redactedapp.co.za I get the following error (taken from the nginx pod logs).
Below are the configuration and logs for nginx, the Dockerfile, deployment.yml and ingress.yml.
NGINX Log
2021/01/29 20:31:59 [error] 1304#1304: *7634340 connect() failed (111: Connection refused) while connecting to upstream, client: 10.244.0.1,
server: uat.redactedapp.co.za, request: "GET / HTTP/2.0", upstream: "http://10.244.0.88:80/", host: "uat.redactedapp.co.za"
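The log shows nginx reaching the pod IP directly and being refused, i.e. the Service routing worked but nothing was listening on port 80 inside the pod. As a quick sketch (my addition, not from the original post), the upstream address can be pulled out of the error line and compared with the pod IP that `kubectl get pods -o wide` reports:

```shell
# Extract the upstream IP:port that nginx tried (and failed) to connect to.
log='2021/01/29 20:31:59 [error] 1304#1304: *7634340 connect() failed (111: Connection refused) while connecting to upstream, client: 10.244.0.1, server: uat.redactedapp.co.za, request: "GET / HTTP/2.0", upstream: "http://10.244.0.88:80/", host: "uat.redactedapp.co.za"'
upstream=$(printf '%s' "$log" | grep -oE '[0-9]+(\.[0-9]+){3}:[0-9]+' | tail -n 1)
echo "$upstream"   # 10.244.0.88:80 -- compare with the pod IP column
```

If the IP matches a running pod, the 502 means the container itself is not serving on that port.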
Dockerfile
FROM node:8.12.0-alpine
EXPOSE 80
RUN npm -v
RUN mkdir -p /usr/src
WORKDIR /usr/src
# To handle 'not get uid/gid'
RUN npm config set unsafe-perm true
RUN npm install -g \
    typescript@2.8.3 \
    @angular/compiler-cli \
    @angular-devkit/core
RUN npm install -g @angular/cli@7.0.3
RUN ln -s /usr/src/node_modules/@angular/cli/bin/ng /bin/ng
COPY package.json /usr/src/
RUN npm install
COPY . /usr/src
CMD ["ng", "build", "--configuration", "uat"]
Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: redacted-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-methods: "GET, PUT, POST, DELETE, PATCH, OPTIONS"
    nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: 50m
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  tls:
  - hosts:
    - uat.redactedapp.co.za
    secretName: secret-tls
  rules:
  - host: uat.redactedapp.co.za
    http:
      paths:
      - path: /otp-api(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: otpapi-service
            port:
              number: 80
      - path: /search-api(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: searchapi-service
            port:
              number: 80
      - path: /notifications-api(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: notificationsapi-service
            port:
              number: 80
      - path: /user-api(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: userapi-service
            port:
              number: 80
      - path: /insurance-api(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: insuranceapi-service
            port:
              number: 80
      - path: /client-api(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: clientsapi-service
            port:
              number: 80
      - path: /
        pathType: Prefix
        backend:
          service:
            name: lawyerlyui-service
            port:
              number: 80
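As a side note on how the rewrite-target: /$2 annotation interacts with these paths: for a path like /otp-api(/|$)(.*), the second capture group becomes the upstream path. A local simulation of that regex (a sketch I added, not part of the original post):

```shell
# /otp-api(/|$)(.*) with rewrite-target /$2:
# a request for /otp-api/send is proxied upstream as /send.
path="/otp-api/send"
rewritten=$(printf '%s' "$path" | sed -E 's#^/otp-api(/|$)(.*)#/\2#')
echo "$rewritten"   # /send
```

The catch-all path / has no capture groups, so requests to the Angular app are forwarded unrewritten.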
Deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
name: lawyerlyui
spec:
selector:
matchLabels:
app: lawyerlyui
replicas: 1
template:
metadata:
labels:
app: lawyerlyui
spec:
containers:
- name: lawyerlyui
image: redacted.azurecr.io/lawyerly:latest
ports:
- containerPort: 80
imagePullSecrets:
- name: uat-acr-auth
---
apiVersion: v1
kind: Service
metadata:
name: lawyerlyui-service
spec:
type: ClusterIP
selector:
app: lawyerlyui
ports:
- name: http
protocol: TCP
# Port accessible inside cluster
port: 80
# Port to forward to inside the pod
targetPort: 80

So after doing some more reading and looking at @mdaniel's comments, I've created a new Dockerfile.uat. The previous Dockerfile only built the source code but never served it.
As I understand it, you need to serve the built Angular code; with the Dockerfile below I no longer get a 502 Bad Gateway.
# Stage 0, "build-stage", based on Node.js, to build and compile the frontend
FROM node:10.8.0 as build-stage
WORKDIR /app
COPY package*.json /app/
RUN npm install
COPY ./ /app/
ARG configuration=uat
EXPOSE 80
RUN npm run build --configuration $configuration
# Stage 1, based on Nginx, to have only the compiled app, ready for production with Nginx
FROM nginx:1.15
#Copy ci-dashboard-dist
COPY --from=build-stage /app/dist/lfa/ /usr/share/nginx/html
#Copy default nginx configuration
COPY ./nginx/nginx-custom.conf /etc/nginx/conf.d/default.conf
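The referenced ./nginx/nginx-custom.conf is not shown in the post; a minimal configuration for serving a single-page Angular app (an assumption on my part, not the original file) could look like:

```nginx
# Hypothetical nginx-custom.conf: serve the built bundle on port 80 and
# fall back to index.html so Angular's client-side routes resolve.
server {
    listen 80;
    server_name _;
    root /usr/share/nginx/html;
    index index.html;

    location / {
        try_files $uri $uri/ /index.html;
    }
}
```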

Related

Containerized application not accessible from browser

I am using an Azure Kubernetes cluster with the Dockerfile below. The container is deployed successfully in a Pod.
FROM node:12 as build-stage
WORKDIR /app
COPY package.json /app/
COPY package-lock.json /app/
RUN npm install
COPY ./ /app/
ARG URI
ENV REACT_APP_URI=$URI
RUN npm run build
EXPOSE 80
CMD exec /bin/bash -c "trap : TERM INT; sleep infinity & wait"
deployment yml file:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: m-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: m-app
  template:
    metadata:
      labels:
        app: m-app
    spec:
      containers:
      - name: metadata-app
        image: >-
          <url>
        imagePullPolicy: Always
        env:
        - name: SECRET_USERNAME
          valueFrom:
            secretKeyRef:
              name: dockersecret
              key: username
        - name: SECRET_PASSWORD
          valueFrom:
            secretKeyRef:
              name: dockersecret
              key: password
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: m-app
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: m-app
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
I use the above yml for the deployment and want to access the app via a private IP address. Running it gives me the service m-app with an external private IP, but it is not accessible.
I then tried NodePort, replacing the LoadBalancer snippet above with the following:
kind: Service
apiVersion: v1
metadata:
  name: m-app
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    nodePort: 31000
  selector:
    app: m-app
Again I cannot access the app from my browser via <node-IP>:31000.
Could someone please assist? I suspected an issue with the Dockerfile as well and tried a different one, but no luck. (Please ignore yml indentation if any.)
Finally the issue got fixed. I added the below snippet to the Dockerfile:
FROM httpd:alpine
WORKDIR /var/www/html
COPY ./httpd.conf /usr/local/apache2/conf/httpd.conf
COPY --from=build-stage /app/build/ .
along with:
FROM node:12 as build-stage
WORKDIR /app
COPY package.json /app/
COPY package-lock.json /app/
RUN npm install
COPY ./ /app/
ARG URI
ENV REACT_APP_URI=$URI
RUN npm run build
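Taken together, the two fragments above form a single multi-stage Dockerfile (stage names exactly as in the snippets; the referenced httpd.conf must exist next to the Dockerfile):

```dockerfile
# Stage 0: build the React bundle
FROM node:12 as build-stage
WORKDIR /app
COPY package.json /app/
COPY package-lock.json /app/
RUN npm install
COPY ./ /app/
ARG URI
ENV REACT_APP_URI=$URI
RUN npm run build

# Stage 1: serve the static build with Apache httpd
FROM httpd:alpine
WORKDIR /var/www/html
COPY ./httpd.conf /usr/local/apache2/conf/httpd.conf
COPY --from=build-stage /app/build/ .
```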

Unable to access deployed Angular application on Google cloud

I have developed my project on Google Cloud using Node.js, Angular, MongoDB and Express. I have successfully built the authentication part with Express and Node.js. Now I am trying to integrate Angular. I have set up Ingress-NGINX on Google Cloud and am using Google Cloud Shell to write the code.
I followed the steps below for the setup.
Steps for setting up Ingress-NGINX on Google Cloud:
1. Create a project blog-dev
2. Create cluster blog-dev with 3 N1-g1 small instances in the us-central1-c zone
3. Navigate to https://kubernetes.github.io/ingress-nginx/deploy/#gce-gke
4. On the Google Cloud account, open the cloud shell and navigate to the BPB_MEAN_Framework directory in the terminal
5. Execute the command gcloud init, reinitialize the cluster, select the account, project and the region
6. Execute the command gcloud container clusters get-credentials blog-dev
7. Execute the command kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.3/deploy/static/provider/cloud/deploy.yaml to configure ingress-nginx
8. Go to Network Services -> Load Balancing and check that the Load Balancer has been created. Note the IP of the Load Balancer
9. Open the hosts.ini file and update it as shown below:
   130.211.113.34 blog.dev
10. kubectl create secret generic jwt-secret --from-literal=JWT_KEY=asdf
11. Run skaffold dev
12. Go to http://blog.dev/api/users/currentuser in a browser and get the 'Privacy Error' page. Click 'Advanced' here
13. Type thisisunsafe on the keyboard
The various files are listed below
Listed below is the Kubernetes deployment yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: client
  template:
    metadata:
      labels:
        app: client
    spec:
      containers:
      - name: client
        image: us.gcr.io/blog-dev-326403/client:project-latest
---
apiVersion: v1
kind: Service
metadata:
  name: client-srv
spec:
  selector:
    app: client
  ports:
  - name: client
    protocol: TCP
    port: 4200
    targetPort: 4200
Listed below is the Ingress service
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
  rules:
  - host: blog.dev
    http:
      paths:
      - path: /api/users/?(.*)
        backend:
          serviceName: auth-srv
          servicePort: 3100
      - path: /?(.*)
        backend:
          serviceName: client-srv
          servicePort: 4200
Listed below is the Skaffold yaml
apiVersion: skaffold/v2alpha3
kind: Config
deploy:
  kubectl:
    manifests:
    - ./infra/k8s/*
build:
  #local:
  #  push: false
  googleCloudBuild:
    projectId: blog-dev-326403
  artifacts:
  - image: us.gcr.io/blog-dev-326403/auth
    context: auth
    docker:
      dockerfile: Dockerfile
    sync:
      manual:
      - src: 'src/**/*.ts'
        dest: .
  - image: us.gcr.io/blog-dev-326403/client
    context: client
    docker:
      dockerfile: Dockerfile
    sync:
      manual:
      - src: 'src/**/*.ts'
        dest: .
The Angular project folder structure is shown below:
[screenshot: Angular project folder structure]
I added the Google Cloud Load Balancer IP followed by blog.dev in the hosts.ini file.
When I run skaffold dev, there are no errors. When I try to access blog.dev, I get a 502 Bad Gateway.
When I navigate to the client directory, run npm start and open the preview in Google Cloud Shell, I see my website as expected:
[screenshot: Application in preview mode in Google Cloud]
Please help... this is a showstopper for me.
Try opening the container port in the deployment (the client listens on 4200):
ports:
- containerPort: 4200
deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: client
  template:
    metadata:
      labels:
        app: client
    spec:
      containers:
      - name: client
        image: us.gcr.io/blog-dev-326403/client:project-latest
        ports:
        - containerPort: 4200
I found out the answer myself. Here is my solution
Package.json
The package.json entry is given below
"prod": "ng build --prod",
Angular Dockerfile
FROM node:16.4.0 as build
WORKDIR /app
COPY package.json ./
RUN npm install
COPY . .
RUN npm run prod
FROM nginx:1.19
COPY ./nginx/nginx.conf /etc/nginx/nginx.conf
COPY --from=build /app/dist/Angular-Mean/ /usr/share/nginx/html
I added the nginx configuration file below:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    server {
        listen 80;
        server_name localhost;
        root /usr/share/nginx/html;
        index index.html index.htm;
        include /etc/nginx/mime.types;
        gzip on;
        gzip_min_length 1000;
        gzip_proxied expired no-cache no-store private auth;
        gzip_types text/plain text/css application/json application/javascript
                   application/x-javascript text/xml application/xml application/xml+rss
                   text/javascript;
        location / {
            try_files $uri $uri/ /index.html;
        }
    }
}
The deployment file is shown below
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: client
  template:
    metadata:
      labels:
        app: client
    spec:
      containers:
      - name: client
        image: us.gcr.io/blog-dev-326403/client:project-latest
        ports:
        - containerPort: 4200
---
apiVersion: v1
kind: Service
metadata:
  name: client-srv
spec:
  selector:
    app: client
  type: LoadBalancer
  ports:
  - name: client
    protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 31000
The final artifact is the ingress service:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
  rules:
  - host: blog.dev
    http:
      paths:
      - path: /api/users/?(.*)
        backend:
          serviceName: auth-srv
          servicePort: 3100
      - path: /?(.*)
        backend:
          serviceName: client-srv
          servicePort: 80
The rest of the configuration remains the same. With this configuration, navigate to https://blog.dev and you get the output as shown:
[screenshot: Blog Website]

502 Bad gateway on Nodejs application deployed on Kubernetes cluster

I am deploying a Node.js application on Kubernetes. After deployment the pod is up and running, but when I try to access the application through the ingress it gives a 502 Bad Gateway error.
Dockerfile
FROM node:14
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
COPY package*.json ./
RUN npm install
# If you are building your code for production
# RUN npm ci --only=production
# Bundle app source
COPY . .
EXPOSE 3123
CMD [ "node", "index.js" ]
Deployment.yaml
---
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
  name: "node-development"
  namespace: "development"
spec:
  selector:
    matchLabels:
      app: "node-development"
  replicas: 1
  template:
    metadata:
      labels:
        app: "node-development"
    spec:
      containers:
      - name: "node-development"
        image: "xxx"
        imagePullPolicy: "Always"
        env:
        - name: "NODE_ENV"
          value: "development"
        ports:
        - containerPort: 47033
service.yaml
---
apiVersion: "v1"
kind: "Service"
metadata:
  name: "node-development-service"
  namespace: "development"
  labels:
    app: "node-development"
spec:
  ports:
  - port: 47033
    targetPort: 3123
  selector:
    app: "node-development"
ingress.yaml
---
apiVersion: "networking.k8s.io/v1"
kind: "Ingress"
metadata:
  name: "node-development-ingress"
  namespace: "development"
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: "/$1"
spec:
  rules:
  - host: "xxxx"
    http:
      paths:
      - backend:
          service:
            name: "node-development"
            port:
              number: 47033
        path: "/node-development/(.*)"
        pathType: "ImplementationSpecific"
With the ingress, or even with the pod/cluster IP, I am not able to access the application; it throws a 502 Bad Gateway from nginx.
The issue got resolved: I am using SSL in my application, so the ingress could not proxy to it over plain HTTP at the given URL.
I needed to add the below annotations in the ingress.yaml file:
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
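For clarity, a sketch of where those annotations sit in the ingress.yaml metadata (combining only what is shown above):

```yaml
metadata:
  name: "node-development-ingress"
  namespace: "development"
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: "/$1"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
```

Note that in ingress-nginx the ssl-passthrough annotation only takes effect when the controller is started with the --enable-ssl-passthrough flag.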

React App in Kubernetes is not connecting properly

I'm new to Kubernetes and I'm trying to deploy a React app to my cluster. Here's the basic info:
Docker Desktop, single-node Kubernetes cluster
React development frontend, exposing port 3000
Node.js/Express backend, exposing port 8080
NGINX Ingress Controller, serving my React frontend on "localhost:3000" and routing my Fetch API requests (fetch("localhost:3000/api/...", OPTIONS)) to the backend (which works)
I am having an issue when opening the React app. The Ingress Controller correctly routes to the app, but the 3 bundles (bundle.js, main.chunk.js, and a third one whose name I don't remember) aren't loaded. I get the following error (one example):
GET http://localhost/static/js/main.chunk.js net::ERR_ABORTED 404 (Not Found)
I understand why this error happens: the Ingress Controller routes the traffic, but only the index.html file is served. That file references the 3 bundle scripts, which never reach the browser, so index.html can't load them. I just don't know how to fix it.
Does anyone have any suggestions? The image is built and then pulled from Docker Hub. For example, would deploying the build/ folder (built with "npm run build") fix this? Do I have to use nginx inside my Dockerfile to build the container?
Ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: titanic-ingress
  #labels:
  #  name: titanic-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: localhost
    http:
      paths:
      - path: /
        pathType: Exact
        backend:
          service:
            name: titanicfrontendservice
            port:
              number: 3000
      - path: /api
        pathType: Exact
        backend:
          service:
            name: titanicbackendservice
            port:
              number: 8080
Ingress controller deployment yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
      #annotations:
      #  prometheus.io/scrape: "true"
      #  prometheus.io/port: "9113"
    spec:
      serviceAccountName: nginx-ingress
      containers:
      - image: nginx/nginx-ingress:1.10.0
        imagePullPolicy: IfNotPresent
        name: nginx-ingress
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        - name: readiness-port
          containerPort: 8081
        #- name: prometheus
        #  containerPort: 9113
        readinessProbe:
          httpGet:
            path: /nginx-ready
            port: readiness-port
          periodSeconds: 1
        securityContext:
          allowPrivilegeEscalation: true
          runAsUser: 101 #nginx
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        args:
        - -nginx-configmaps=$(POD_NAMESPACE)/nginx-config
        - -default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret
        #- -v=3 # Enables extensive logging. Useful for troubleshooting.
        - -report-ingress-status
        - -external-service=nginx-ingress
        #- -enable-prometheus-metrics
        #- -global-configuration=$(POD_NAMESPACE)/nginx-configuration
Ingress controller service yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  ports:
  - port: 3000
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    targetPort: 443
    protocol: TCP
    name: https
  selector:
    app: nginx-ingress
TL;DR
Switch your pathType for both the /api and / paths to Prefix.
I've included some explanation with a fixed Ingress resource below.
For reproduction purposes I used the titanic manifests that you provided in another question:
Github.com: Strobosco: Titanicfullstack
The issue with your configuration is the pathType.
Using your Ingress resource with pathType: Exact showed me a blank page.
Modifying your Ingress resource with pathType: Prefix solved the issue.
Side note: the message "Would you have survived the sinking of the Titanic?" showed.
The exact Ingress configuration should be the following:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: titanic-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: localhost
    http:
      paths:
      - path: /
        pathType: Prefix # <-- IMPORTANT
        backend:
          service:
            name: titanicfrontendservice
            port:
              number: 3000
      - path: /api
        pathType: Prefix # <-- IMPORTANT
        backend:
          service:
            name: titanicbackendservice
            port:
              number: 8080
Why do I think this happened?
Citing the official documentation:
Path types
Each path in an Ingress is required to have a corresponding path type. Paths that do not include an explicit pathType will fail validation. There are three supported path types:
ImplementationSpecific: With this path type, matching is up to the IngressClass. Implementations can treat this as a separate pathType or treat it identically to Prefix or Exact path types.
Exact: Matches the URL path exactly and with case sensitivity.
Prefix: Matches based on a URL path prefix split by /. Matching is case sensitive and done on a path element by element basis. A path element refers to the list of labels in the path split by the / separator. A request is a match for path p if every p is an element-wise prefix of p of the request path.
-- Kubernetes.io: Docs: Concepts: Services networking: Ingress: Path types (there are some examples on how the path matching is handled)
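The element-wise matching described above can be sketched in shell (my illustration, not from the docs): /api is a prefix of /api/users element by element, but not of /apiv2/users.

```shell
# Element-wise prefix check: the rule path must match whole path segments.
matches() {  # usage: matches <rule-path> <request-path>
  case "$2" in
    "$1"|"$1"/*) echo yes ;;
    *) echo no ;;
  esac
}
a=$(matches /api /api/users)    # yes
b=$(matches /api /apiv2/users)  # no
echo "$a $b"
```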
The Ingress controller is forced to match only the / path, leaving the rest of the dependencies (apart from index.html) on other paths like /super.jpg and /folder/awesome.jpg to fail with a 404 code.
Side note: you can test this behavior yourself by spawning an nginx Pod and placing example files in it. After applying an Ingress resource with / and pathType: Exact you won't be able to request them through the Ingress controller, but you could still access them within the cluster.
I encourage you to check the additional resources:
Kubernetes.io: Docs: Concepts: Services networking: Ingress
Kubernetes.github.io: Ingress nginx: User guide: Ingress path matching
The issue with your ingress.yaml is that the route for your UI should be /* and placed below the backend routing. Also, check your routing for the APIs:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: titanic-ingress
  #labels:
  #  name: titanic-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: localhost
    http:
      paths:
      - path: /api/*
        pathType: Exact
        backend:
          service:
            name: titanicbackendservice
            port:
              number: 8080
      - path: /*
        pathType: Exact
        backend:
          service:
            name: titanicfrontendservice
            port:
              number: 3000

Getting 502 Bad Gateway nginx/1.13.9 for my angular app in k8's

I am getting 502 Bad Gateway nginx/1.13.9 for my Angular app when accessing it in the browser in k8s. My service and ingress are below.
The Angular app pod logs show everything successful, and in fact port-forwarding works fine. The same image also works fine on my local machine with Docker.
From the k8s logs, I can see this:
[error] 1534#1534: *32272457 SSL_do_handshake() failed (SSL: error:1408F10B:SSL routines:ssl3_get_record:wrong version number) while SSL handshaking to upstream
Service:
Name:              test-portal
Namespace:         testproject
Labels:            app=test-portal
                   chart=test-portal-1.0.0
                   environment=dev
                   heritage=Tiller
                   release=test-portal
                   version=dev
Annotations:       <none>
Selector:          app=test-portal,release=test-portal
Type:              ClusterIP
IP:                x.x.x.x
Port:              <unset>  80/TCP
TargetPort:        4200/TCP
Endpoints:         x.x.x.x:4200
Session Affinity:  None
Events:            <none>

Ingress:
Name:             test-portal
Namespace:        testproject
Address:
Default backend:  default-http-backend:80 (<none>)
TLS:
  SNI routes test-portal.us-west-2.xxxxx.xxxxxx.delivery
Rules:
  Host                                         Path  Backends
  ----                                         ----  --------
  test-portal.us-west-2.xxxxx.xxxxxx.delivery
                                               /     test-portal:80 (<none>)
Annotations:
  secure-backends:  true
  ssl-redirect:     true
Events:             <none>
Ingress manifest (as stored in the cluster)
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/secure-backends: "false"
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"ingress.kubernetes.io/secure-backends":"false","kubernetes.io/ingress.class":"nginx"},"labels":{"app":"test-portal","chart":"test-portal-1.0.0","environment":"dev","heritage":"test-portal","release":"Helm","version":"1.0.0"},"name":"test-portal","namespace":"testproject"},"spec":{"rules":[{"host":"test-portal.us-west-2.xxxx.xxxxxxxxx.delivery","http":{"paths":[{"backend":{"serviceName":"test-portal","servicePort":80},"path":"/"}]}}],"tls":[{"hosts":["test-portal.us-west-2.xxxx.xxxxxxxxx.delivery"]}]}}
    kubernetes.io/ingress.class: nginx
  creationTimestamp: 2020-01-08T11:24:18Z
  generation: 1
  labels:
    app: test-portal
    chart: test-portal-1.0.0
    environment: dev
    heritage: test-portal
    release: Helm
    version: 1.0.0
  name: test-portal
  namespace: testproject
  resourceVersion: "2379156945"
  selfLink: /apis/extensions/v1beta1/namespaces/testproject/ingresses/test-portal
  uid: 6925819b-3209-11ea-80fb-02fb0c9060d8
spec:
  rules:
  - host: test-portal.us-west-2.xxxx.xxxxxxxxx.delivery
    http:
      paths:
      - backend:
          serviceName: test-portal
          servicePort: 80
        path: /
  tls:
  - hosts:
    - test-portal.us-west-2.xxxx.xxxxxxxxx.delivery
status:
  loadBalancer:
    ingress:
    - {}
You seem to be running the Angular dev server on port 4200 in your pod.
The Angular dev server serves plain HTTP, not HTTPS, so you must configure the ingress not to use HTTPS for backend communication (secure-backends: false).
Besides, the Angular dev server should not be used for production serving. Build a container image with the Angular prod build to benefit from much better performance.
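A multi-stage build along the lines already used elsewhere in this thread would do that; the Node/nginx versions and the dist folder name (dist/test-portal) are my assumptions and depend on the project:

```dockerfile
# Stage 1: compile the Angular production bundle
FROM node:16 as build
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build -- --prod

# Stage 2: serve the static files with nginx on port 80 (plain HTTP)
FROM nginx:1.19
# dist/test-portal is a guess -- check the outputPath in angular.json
COPY --from=build /app/dist/test-portal /usr/share/nginx/html
```

The service would then target port 80 instead of 4200.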
