Nginx ingress: 502 when adding a param to the URL - node.js

I have a nodejs REST service deployed in k8s and exposed with an nginx ingress. It responds to a basic GET, but when I pass a URL parameter I get a 502.
import express from "express";

const app = express();

// Return the full collection of invoices loaded from MongoDB
app.get("/service-invoice", async (req, res) => {
  res.send(allInvoices);
});

app.listen(80);
Where allInvoices is just a collection of invoice objects loaded from MongoDB.
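For completeness, this is roughly how allInvoices could be populated with the official mongodb driver; the connection string, database and collection names below are made up for illustration, not taken from the actual service:
import { MongoClient } from "mongodb";

// Hypothetical setup: connect once at startup and load every invoice into memory
const client = await MongoClient.connect(process.env.MONGO_URL);
const allInvoices = await client.db("invoices").collection("invoices").find({}).toArray();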
I deploy this to k8s with the following ingress config:
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: service-invoice-read
  namespace: ctx-service-invoice
  annotations:
    kubernetes.io/ingress.class: addon-http-application-routing
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - http:
      paths:
      - path: /service-invoice-read(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: service-invoice-read
            port:
              number: 80
Calling this with curl:
curl localhost:30000/service-invoice-read/service-invoice
I get back a valid JSON response. So far, so good.
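For reference, with that path regex and rewrite-target the /service-invoice-read prefix is stripped before the request reaches the pod, so the mapping is roughly this (illustrative, not output from the cluster):
/service-invoice-read/service-invoice        ->  /service-invoice
/service-invoice-read/service-invoice/<id>   ->  /service-invoice/<id>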
But I also want to access these objects by ID. To do that I have the following code:
app.get("/service-invoice/:id", async (req, res) => {
try {
const id = req.params.id;
const invoice = // code to load invoice by id from mongo
res.send(invoice);
} catch (e) {
res.status(sc.NOT_FOUND).send(error);
}
});
Calling this with curl:
curl localhost:30000/service-invoice-read/service-invoice/e98e03b8-b590-4ca4-978d-270986b7d26e
Results in a 502 - Bad Gateway error.
I can't see any errors in my pod's logs, so I'm pretty sure this is coming from nginx, but I don't understand why. I've also tried without the try/catch to see if it blows up in the logs, and still no joy.
Here are my ingress controller logs, as requested in the comments:
2022/03/03 18:45:21 [error] 847#847: *4524 upstream prematurely closed connection while reading response header from upstream, client: 10.42.1.1, server: _, request: "GET /service-invoice-read/service-invoice/6220d042a95986f58c46356f HTTP/1.1", upstream: "http://10.42.1.100:80/service-invoice/6220d042a95986f58c46356f", host: "localhost:30000"
2022/03/03 18:45:21 [error] 847#847: *4524 connect() failed (111: Connection refused) while connecting to upstream, client: 10.42.1.1, server: _, request: "GET /service-invoice-read/service-invoice/6220d042a95986f58c46356f HTTP/1.1", upstream: "http://10.42.1.100:80/service-invoice/6220d042a95986f58c46356f", host: "localhost:30000"
2022/03/03 18:45:21 [error] 847#847: *4524 connect() failed (111: Connection refused) while connecting to upstream, client: 10.42.1.1, server: _, request: "GET /service-invoice-read/service-invoice/6220d042a95986f58c46356f HTTP/1.1", upstream: "http://10.42.1.100:80/service-invoice/6220d042a95986f58c46356f", host: "localhost:30000"
10.42.1.1 - - [03/Mar/2022:18:45:21 +0000] "GET /service-invoice-read/service-invoice/6220d042a95986f58c46356f HTTP/1.1" 502 150 "-" "curl/7.68.0" 140 0.006 [ctx-service-invoice-service-invoice-read-80] [] 10.42.1.100:80, 10.42.1.100:80, 10.42.1.100:80 0, 0, 0 0.004, 0.004, 0.000 502, 502, 502 b78e6879fabe2d5947525a2b694b4b9f
W0303 18:45:21.529749 7 controller.go:1076] Service "ctx-service-invoice/service-invoice-read" does not have any active Endpoint.
Does anyone know what I'm doing wrong here?

The problem wasn't what it seemed. In this case the ingress configuration is working fine. The real problem is that there was an error in the code that was being suppressed by a global exception handler without being logged. This is what produced the 502, though I still don't understand why it surfaced as that exact response, and I'm not especially interested.
The aim of the global exception handler was to keep the service running when it would otherwise die. Given that a service dying in k8s is perfectly acceptable, I've removed the handler and allowed the pod to die, which gives me a lot more information about what is going on.
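For anyone who hits the same thing, the troublesome pattern looked roughly like this; this is a minimal sketch of the idea, not the actual service code:
// Sketch of the problem (assumption, not the real code): a global handler like
//   process.on("uncaughtException", (err) => { /* swallowed, nothing logged */ });
// keeps the process alive but never logs or responds, so the in-flight request is
// dropped and the ingress reports a 502.
//
// Removing it, or logging and exiting instead, lets Kubernetes restart the pod
// and puts the real error in the pod logs:
process.on("uncaughtException", (err) => {
  console.error("uncaught exception:", err);
  process.exit(1); // the pod dies and k8s restarts it
});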

Related

Nodejs connect() failed (111: Connection refused) while connecting to upstream, client: localhost, server: , request: "GET / HTTP/1.1", upstream

I deployed my Node.js app to the Elastic Beanstalk service. I am able to sign up and log in to my app, but after I log in it just keeps buffering and doesn't pull any data. I checked my AWS logs and found this error:
2021/10/22 23:43:09 [error] 3610#3610: *1 connect() failed (111: Connection refused) while connecting to upstream, client: ip-address, server: , request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8080/", host: "ip-address"
I have changed the settings on my security groups in the configuration with no luck. I would be glad to get some assistance here. Thank you!

sockjs-node net::ERR_CONNECTION_REFUSED [duplicate]

This question already has an answer here:
GET: ERR_SSL_PROTOCOL_ERROR nginx + vue.js
(1 answer)
Closed 3 years ago.
I created a brand-new tiny webapp with Vue CLI, without adding anything apart from what the empty vue-cli scaffolding brings:
(base) marco#pc:~/vueMatters/testproject$ npm run serve
> testproject#0.1.0 serve /home/marco/vueMatters/testproject
> vue-cli-service serve
INFO Starting development server...
98% after emitting CopyPlugin
DONE Compiled successfully in 1409ms 8:14:46 PM
App running at:
- Local: localhost:8080
- Network: 192.168.1.7:8080
Note that the development build is not optimized.
To create a production build, run npm run build.
And got this error message:
GET https://localhost/sockjs-node/info?t=1580228998416 net::ERR_CONNECTION_REFUSED
node --version
v12.10.0
npm -v
6.13.6
webpack-cli#3.3.10
Ubuntu 18.04.03 Server Edition
last lines of /var/log/nginx/error.log:
2020/01/28 18:10:57 [error] 980#980: *34 connect() failed (111: Connection refused) while connecting to upstream, client: 66.249.79.119, server: ggc.world, request: "GET /robots.txt HTTP/1.1", upstream: "http://127.0.0.1:8080/robots.txt", host: "ggc.world"
2020/01/28 18:11:37 [error] 980#980: *36 connect() failed (111: Connection refused) while connecting to upstream, client: 66.249.79.70, server: ggc.world, request: "GET /robots.txt HTTP/1.1", upstream: "http://127.0.0.1:8080/robots.txt", host: "www.ggc.world"
How to solve the problem?
You need to specify SSL certificates in your socket connection. Here is an example:
var fs = require('fs');
var https = require('https');

var options = {
  key: fs.readFileSync('./file.pem'),
  cert: fs.readFileSync('./file.crt')
};
var server = https.createServer(options, app); // wrap the Express app in an HTTPS server
var io = require('socket.io')(server);
server.listen(8080); // listen on the HTTPS server, not on the Express app

nginx-ingress controller for Azure Kubernetes Service 502 Bad Gateway

I'm having trouble getting an nginx-ingress controller to work on an Azure Kubernetes Service; it's currently returning 502 Bad Gateway each time I try to hit some Web APIs exposed as Services.
Because I must use an existing certificate, I followed https://learn.microsoft.com/en-us/azure/aks/ingress-own-tls to set up the controller and followed https://www.markbrilman.nl/2011/08/howto-convert-a-pfx-to-a-seperate-key-crt-file/ to generate a cert and key from a PFX (how the certificate was exported from an Azure Key Vault). I created the secret "aks-ingress-tls" using the certificate (including the intermediate and root certificates) and the decrypted key file.
I have a YAML file that creates a deployment, a service to expose it, and an ingress to route to it. Applying this YAML, I can access the services via their IP addresses over HTTP, but using HTTPS against the ingress controller's EXTERNAL_IP always gives the 502 error.
My YAML File (redacted):
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: my-api
  replicas: 3
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
      - name: my-api
        image: [REDACTED]/my-api:1.0
        ports:
        - containerPort: 443
        - containerPort: 80
      imagePullSecrets:
      - name: data-creds
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    app: my-service
spec:
  ports:
  - name: https
    port: 443
    targetPort: 443
    protocol: TCP
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
  selector:
    app: my-api
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  tls:
  - hosts:
    - [REDACTED].co.uk
    secretName: aks-ingress-tls
  rules:
  - host: [REDACTED].co.uk
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service
          servicePort: 443
I added a record to my hosts file (I'm on Windows so can't use curl's --resolve) to map [REDACTED].co.uk to the ingress controller's EXTERNAL_IP so I can try accessing it. That's when I get the errors.
Running curl -v https://[REDACTED].co.uk (which in PowerShell is an alias for Invoke-WebRequest) gives this:
VERBOSE: GET https://[REDACTED].co.uk/ with 0-byte payload
curl : The request was aborted: Could not create SSL/TLS secure channel.
At line:1 char:1
+ curl -v https://[REDACTED].co.uk
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidOperation: (System.Net.HttpWebRequest:HttpWebRequest) [Invoke-WebRequest], WebException
+ FullyQualifiedErrorId : WebCmdletWebResponseException,Microsoft.PowerShell.Commands.InvokeWebRequestCommand
Looking at logs for one of the ingress controller's pods:
10.244.1.1 - [10.244.1.1] - - [25/Apr/2019:13:39:20 +0000] "GET / HTTP/2.0" 502 559 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.103 Safari/537.36" 10 0.001 [default-sub360-auth-service-443] 10.244.1.254:443, 10.244.1.3:443, 10.244.1.4:443 0, 0, 0 0.000, 0.000, 0.000 502, 502, 502 e44e21c8a2f61f5137c9afdfc64c6584
2019/04/25 13:39:20 [error] 1622#1622: *1127096 connect() failed (111: Connection refused) while connecting to upstream, client: 10.244.1.1, server: [REDACTED].co.uk, request: "GET /favicon.ico HTTP/2.0", upstream: "https://10.244.1.254:443/favicon.ico", host: "[REDACTED].co.uk", referrer: "https://[REDACTED].co.uk/"
2019/04/25 13:39:20 [error] 1622#1622: *1127096 connect() failed (111: Connection refused) while connecting to upstream, client: 10.244.1.1, server: [REDACTED].co.uk, request: "GET /favicon.ico HTTP/2.0", upstream: "https://10.244.1.3:443/favicon.ico", host: "[REDACTED].co.uk", referrer: "https://[REDACTED].co.uk/"
2019/04/25 13:39:20 [error] 1622#1622: *1127096 connect() failed (111: Connection refused) while connecting to upstream, client: 10.244.1.1, server: [REDACTED].co.uk, request: "GET /favicon.ico HTTP/2.0", upstream: "https://10.244.1.4:443/favicon.ico", host: "[REDACTED].co.uk", referrer: "https://[REDACTED].co.uk/"
10.244.1.1 - [10.244.1.1] - - [25/Apr/2019:13:39:20 +0000] "GET /favicon.ico HTTP/2.0" 502 559 "https://[REDACTED].co.uk/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.103 Safari/537.36" 26 0.000 [default-sub360-auth-service-443] 10.244.1.254:443, 10.244.1.3:443, 10.244.1.4:443 0, 0, 0 0.000, 0.000, 0.004 502, 502, 502 63b6ed4414bf32694de3d136f7f277aa
Can anyone point me to what I need to look at or do to get this working now?
For your issue: the ingress already handles HTTPS on port 443, so you do not need to expose port 443 from your container. Just expose the port your application actually listens on.
In your case, that means exposing only port 80 on the container and the service. You also need to remove the annotation nginx.ingress.kubernetes.io/backend-protocol: "HTTPS" and change the servicePort value to 80. A sketch of the resulting manifests follows.
Note: adding the DNS name to the certificate is also important.
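Applied to the manifests above, the relevant parts would look roughly like this (a sketch of the suggested changes, keeping the redacted names; drop containerPort 443 from the Deployment as well):
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80      # the port the app actually listens on
    protocol: TCP
  selector:
    app: my-api
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    # backend-protocol "HTTPS" removed: nginx terminates TLS and talks plain HTTP to the pods
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  tls:
  - hosts:
    - [REDACTED].co.uk
    secretName: aks-ingress-tls
  rules:
  - host: [REDACTED].co.uk
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service
          servicePort: 80   # was 443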

AWS Elastic Beanstalk / nginx: connect() failed (111: Connection refused)

I got this message:
connect() failed (111: Connection refused)
Here is my log:
-------------------------------------
/var/log/nginx/error.log
-------------------------------------
2018/10/21 06:16:33 [error] 4282#0: *2 connect() failed (111: Connection refused) while connecting to upstream, client: 172.31.4.119, server: , request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8081/", host: "hackingdeal-env.qnyexn72ga.ap-northeast-2.elasticbeanstalk.com"
2018/10/21 06:16:33 [error] 4282#0: *2 connect() failed (111: Connection refused) while connecting to upstream, client: 172.31.4.119, server: , request: "GET /favicon.ico HTTP/1.1", upstream: "http://127.0.0.1:8081/favicon.ico", host: "hackingdeal-env.qnyexn72ga.ap-northeast-2.elasticbeanstalk.com", referrer: "http://hackingdeal-env.qnyexn72ga.ap-northeast-2.elasticbeanstalk.com/"
I am using nodejs/express Elastic Beanstalk env.
I have one nginx related file in
.ebextensions/nginx/conf.d/proxy.conf
That file contains:
client_max_body_size 50M;
Whenever I try to load my webpage I get a 502 Bad Gateway.
What's wrong with my app?
Just recording my incident here in case it helps someone, or my future self. I had a Django application that had SECURE_SSL_REDIRECT set to True. Since I had no load balancer configured to handle HTTPS traffic, I was getting a timeout. Setting it to False fixed the issue. A couple of days wasted on that one.
A 111 Connection refused most likely means your app isn't actually listening on that server/port combination. Also check that the security group for your app instance (or load balancer) has an inbound rule set to allow traffic from the nginx instance.
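Two quick checks you can run on the instance itself to confirm whether anything is listening on the upstream port (8081 here is just the port from the log above):
sudo ss -tlnp | grep 8081          # is anything bound to the port nginx proxies to?
curl -sS http://127.0.0.1:8081/    # does the app answer locally, bypassing nginx?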
I was dealing with this error on my NodeJS application (Next.js). Posting this here just in case it is useful for someone.
My error was that the deploy command failed at the build step (next build), which meant the Node server never restarted. For that reason nginx could not find the server. You can find this kind of error in web.stdout.log.
I tested my build command locally, fixed the errors, and it worked!

nginx node upstream prematurely closed connection while reading response header

I have a nodejs server running behind nginx on Elastic Beanstalk. I think the following error is caused by a promise that never resolves.
Does anyone have a better idea of what it is?
2017/03/20 12:18:02 [error] 3503#0: *7363 upstream prematurely closed connection while reading response header from upstream, client: 111.11.11.111, server: , request: "POST /api/v1/some/url HTTP/1.1", upstream: "http://127.0.0.1:8081/api/v1/some/url", host: "some.website.com"
We are also looking into that issue. From what I found, it looks like it happens because your Node.js app does not send a response back fast enough. Hope it helps...
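If the cause is an async handler that hangs or rejects without ever answering, a pattern like the sketch below makes sure nginx always gets a response header back; someAsyncWork and the route are placeholders, not the poster's actual code:
// Generic sketch: always answer the request, even when the async work fails.
app.post("/api/v1/some/url", async (req, res) => {
  try {
    const result = await someAsyncWork(req.body); // placeholder for the real work
    res.json(result);
  } catch (err) {
    console.error("handler failed:", err);
    res.status(500).json({ error: "internal error" }); // respond instead of dropping the connection
  }
});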
