AWS Elastic Beanstalk 502 Bad Gateway on GET method - node.js

I'm trying to implement Facebook authentication on my web application, but I'm getting a 502 Bad Gateway error when my Node.js server tries to send a response for the facebook/auth/callback request.
My Node.js server is deployed on Elastic Beanstalk behind its nginx proxy.
I read that this error can occur when the response is too big, so I tried to increase the proxy buffer sizes with the following config:
01buffer_proxy.config:
files:
  "/etc/nginx/conf.d/app_proxy_buffer.conf":
    mode: "000644"
    content: |
      server {
        location / {
          proxy_buffering on;
          proxy_buffer_size 16k;
          proxy_buffers 32 16k;
          client_body_buffer_size 128k;
          proxy_busy_buffers_size 64k;
        }
      }

container_commands:
  01_reload_nginx:
    command: "service nginx reload"
But I still get the error.
Do you think it's a response-size problem? If so, how do I edit the proxy buffer size on Elastic Beanstalk?
Thanks.
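Note: on the newer Amazon Linux 2 Elastic Beanstalk platforms, nginx overrides are usually shipped as drop-in files under .platform/nginx/conf.d/ rather than written out through .ebextensions. The following is only a minimal sketch under that assumption (the file name is made up; the directives are included at the http level, so no server/location block is needed):

.platform/nginx/conf.d/proxy_buffers.conf:
# Assumption: Amazon Linux 2 platform, where Elastic Beanstalk includes
# every *.conf file from this folder into the http block of nginx.conf.
proxy_buffering on;
proxy_buffer_size 16k;
proxy_buffers 32 16k;
proxy_busy_buffers_size 64k;
client_body_buffer_size 128k;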

Related

504 Gateway Timeout in Web App using Docker

I am deploying a web application. When I execute the app URL it shows this error, and the logs display "Waiting for response to warmup request for container". Please help me with this.
I am building a Python-based app and tried using a Docker container in the Azure Portal.
Try to increase the timeout on your web server (e.g. nginx) and/or at the Python level, to allow your page to execute for longer.
You will receive a 504 error when the web server waits too long for a page to be generated; the web server closes the HTTP request because it has reached its timeout.
Here are some Nginx configurations to increase the timeouts:
server {
    server_name _;
    root /var/www/html/public/;

    ###
    # Increase timeouts to avoid 504 gateway timeouts (values are seconds)
    # and allow big file uploads.
    ###
    client_max_body_size 0;
    proxy_send_timeout 3600;
    proxy_read_timeout 3600;
    fastcgi_send_timeout 3600;
    fastcgi_read_timeout 3600;

    # ... your config
}
I'm not a Python developer, but you may find some timeout configuration at the Python level too (PHP, for example, has max_execution_time for this).
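As an illustration only, if the container serves the app with gunicorn (an assumption; the question does not say which Python server is used), the worker timeout can be raised in the Dockerfile start command, since gunicorn's default of 30 seconds would still cut the request off long before the nginx limits above:

# Hypothetical Dockerfile CMD; --timeout is gunicorn's worker timeout in seconds.
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "--timeout", "3600", "app:app"]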

How to solve AWS Elastic Beanstalk 504 Timeout Error?

I am using AWS Elastic Beanstalk to host an Express/Node.js API server.
It works well with normal APIs, but I am getting this 504 Timeout error with just one API, which can take up to 20 minutes at most.
So I thought I needed to increase the max request time of Nginx and the Node.js server, and I did that by configuring AWS EB .ebextensions and .platform files.
Here is what I did.
.platform/nginx/conf.d/timeout.conf
client_header_timeout 3000s;
client_body_timeout 3000s;
send_timeout 3000s;
proxy_connect_timeout 3000s;
proxy_read_timeout 3000s;
proxy_send_timeout 3000s;
.ebextensions/network.config
option_settings:
  - namespace: aws:elasticbeanstalk:command
    option_name: Timeout
    value: 3000
But I am still getting this error and I can't understand why it is happening.
Note: the Elastic Beanstalk environment sits behind CloudFront and AWS Route 53, which give it its public domain and the HTTPS connection.
If somebody knows how to fix this, it would be appreciated a lot.
In case you are using a "Load balanced" environment type, check the "Connection idle timeout" setting of the load balancer.
To validate whether your environment uses an ELB, go to "Elastic Beanstalk" -> "<your_environment>" -> "Configuration" and check if the "Load balancer" category is present; there you can also find the type of ELB you are using. Then change the Connection idle timeout setting in the EC2 console to a suitable value.
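If you prefer to keep that setting in the application repository instead of changing it in the console, it can also be set through an .ebextensions option. A minimal sketch, assuming the environment uses an Application Load Balancer (a Classic ELB would use the aws:elb:policies namespace with ConnectionSettingIdleTimeout instead); the file name is just an example:

# Hypothetical .ebextensions/alb-idle-timeout.config
option_settings:
  aws:elbv2:loadbalancer:
    IdleTimeout: 3000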

Nginx causes CORS error 1 minute after starting to upload a file

I am using NestJS as a backend behind Nginx, and I am getting a CORS error one minute after I start uploading a file. I was getting an error as soon as the upload started, but I solved that by editing the nginx config and increasing client_max_body_size; now the error occurs one minute into the upload. I tried to increase the timeouts by adding
server {
    ...
    proxy_read_timeout 300;
    proxy_connect_timeout 300;
    proxy_send_timeout 300;
    ...
}
but this did not solve my problem.
I found that the problem was on the client side: the React app was setting a 60-second timeout in axios.
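For illustration, a minimal sketch of that client-side fix, assuming the upload goes through a shared axios instance (the instance name and base URL are made up): in axios, timeout: 0 disables the client-side timeout entirely, or it can be raised to match the nginx values above.

import axios from 'axios';

// Hypothetical upload client: timeout 0 means axios will not abort the
// request on its own, so long uploads are no longer cut off after 60 s.
const api = axios.create({
  baseURL: 'https://example.com/api', // placeholder
  timeout: 0,
});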

How can I track upload progress from app behind Nginx reverse proxy?

I have a node.js server behind an Nginx reverse proxy. The node.js app has an endpoint that receives a file upload using busboy. As the file is uploaded, I would like to track progress. However, I believe Nginx buffers the upload, so my app receives the file all at once. How can I make my node app receive the packets as soon as possible? I have tried setting the following in my nginx.conf file:
http {
    ....
    proxy_busy_buffers_size 0;
}
and
http {
    ....
    proxy_buffering off;
}
The Nginx documentation covers this: set proxy_request_buffering off. In my case, I set it as follows:
location / {
    ...
    proxy_request_buffering off;
    ...
}
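For context, once nginx streams the request body, progress can be tracked on the Node side by counting the bytes busboy hands over. A rough sketch, assuming busboy v1 and a plain http server (the /upload route is made up), using Content-Length only as an approximate total since it also includes the multipart overhead:

const http = require('http');
const busboy = require('busboy');

http.createServer((req, res) => {
  if (req.method === 'POST' && req.url === '/upload') { // hypothetical route
    const bb = busboy({ headers: req.headers });
    const total = Number(req.headers['content-length']) || 0;
    let received = 0;

    bb.on('file', (name, file) => {
      file.on('data', (chunk) => {
        received += chunk.length;
        // With proxy_request_buffering off, chunks arrive while the client
        // is still uploading, so this logs real progress.
        if (total) console.log(`${name}: ${((received / total) * 100).toFixed(1)}%`);
      });
    });

    bb.on('close', () => res.end('done'));
    req.pipe(bb);
  } else {
    res.end();
  }
}).listen(3000);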

Nginx 502 Bad Gateway when a Jenkins pipeline runs Docker for a React app

I have an Ubuntu 18.04 server running in a Droplet (DigitalOcean), secured with SSL and using an Nginx reverse proxy. Jenkins is also running on the server (not in any Docker container) and is configured to be accessed under the domain I created for it: jenkins.testdomain.com (all these steps followed the DO docs).
The goal is to manage the deployment of a Node.js-React application to my testdomain.com later; for now, I just want to create the dist folder generated after the 'npm build' within /var/lib/jenkins/workspace/, just that.
Right now, I'm able to access my jenkins.testdomain.com site fine, trigger the pipeline after pushing to my repo, and start running the stages; but this is where nginx starts to fail, when the pipeline reaches the Deliver phase (read: the 'npm build' phase), sometimes in the Build phase ('npm install').
It's at this point, reading the Jenkins console output, that I see it get stuck and eventually show a 502 Bad Gateway error. I then need to run systemctl restart jenkins on the server console to get access again. After restarting, the pipeline resumes the work and seems to get the job done :/
In /var/log/nginx/error.log I can read:
*1 connect() failed (111: Connection refused) while connecting to upstream, client: 85.146.85.194, server: jenkins.testdomain.com, request: "GET /job/Basic%20NodeJS-React%20app/8/console HTTP/1.1", upstream: "https://127.0.0.1:8080/job/Basic%20NodeJS-React%20app/8/console", host: "jenkins.testdomain.com", referrer: "https://jenkins.testdomain.com/job/Basic%20NodeJS-React%20app/8/"

*1 SSL_do_handshake() failed (SSL: error:1408F10B:SSL routines:ssl3_get_record:wrong version number) while SSL handshaking to upstream, client: 85.146.85.194, server: jenkins.testdomain.com, request: "GET /favicon.ico HTTP/1.1", upstream: "https://127.0.0.1:8080/favicon.ico", host: "jenkins.testdomain.com", referrer: "https://jenkins.testdomain.com/job/Basic%20NodeJS-React%20app/8/console" ...
In the Jenkinsfile of my node-js-react app (from jenkins repo), the agent looks like this:
pipeline {
    agent {
        docker {
            image 'node:6-alpine'
            args '-p 3000:80'
        }
    }
    environment {
        CI = 'true'
    }
    stages {
        // Build, Test, and Deliver stages
    }
}
And my jenkins.testdomain.com configuration (/etc/nginx/sites-available/jenkins.testdomain.com) looks like this (it passes nginx -t):
server {
    listen 80;
    root /var/www/jenkins.testdomain.com/html;
    server_name jenkins.testdomain.com www.jenkins.testdomain.com;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # Fix the "It appears that your reverse proxy set up is broken" error.
        proxy_pass http://localhost:8080;

        # High timeouts for testing
        proxy_connect_timeout 1200s;
        proxy_send_timeout 1200s;
        proxy_read_timeout 1200s;

        proxy_redirect http://localhost:8080 https://jenkins.testdomain.com;

        # Required for new HTTP-based CLI
        proxy_http_version 1.1;
        proxy_request_buffering off;
        # Required for HTTP-based CLI to work over SSL
        proxy_buffering off;
    }

    # Certbot auto-generated lines...
}
Any help would be very welcome; I've been struggling with this for 3 days, playing around with the different proxy_* directives in nginx.
Thanks in advance!
OK, just to add an update: some days after my latest post, I realized that the main and only reason the server was going down was a lack of resources in the Droplet.
I was using a Droplet with 1 GB of RAM, 25 GB disk, etc. (the most basic one), so I chose to upgrade it to at least 2 GB of RAM, and indeed that made it work as I expected. Everything has worked fine since then and the issue didn't happen again.
Hope it helps if someone experiences the same issue.
