Nginx causes CORS error one minute after starting a file upload - node.js

I am using NestJS as the backend behind Nginx, and I get a CORS error one minute after I start uploading files. I was also getting an error as soon as the upload started, but I fixed that by editing the Nginx config and increasing client_max_body_size. The remaining error still occurs one minute into every upload. I tried to increase the timeouts by adding
server {
    ...
    proxy_read_timeout 300;
    proxy_connect_timeout 300;
    proxy_send_timeout 300;
    ...
}
but this did not solve my problem.

I found the problem was on the client side: the React app's axios instance was configured with a 60-second timeout.
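A minimal sketch of the client-side fix, assuming an axios-style config object (the names here are hypothetical; the key point is that the client timeout, which axios expresses in milliseconds, must be at least as long as the server-side proxy timeouts):

```javascript
// Hypothetical upload client config. axios timeouts are in milliseconds;
// setting timeout to 0 would disable the client-side timeout entirely.
const uploadClientConfig = {
  baseURL: 'https://api.example.com', // hypothetical backend URL
  timeout: 300 * 1000,                // 5 minutes, matching proxy_read_timeout 300
};

// Usage (sketch): const client = axios.create(uploadClientConfig);
```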

Related

504 Gateway Timeout in Web App using Docker

I am deploying a web application. When I open the app URL it shows this error, and the logs display: "Waiting for response to warmup request for container". Please help me with this.
I am building a Python-based app and tried using a Docker container in the Azure Portal.
Try to increase the timeout on your web server (e.g. Nginx) and/or at the Python level, to allow your page to execute for longer.
You will receive a 504 error when the web server waits too long for a page to be generated: the web server closes the HTTP request because it has reached a timeout.
Here are some Nginx configurations to increase the timeouts:
server {
    server_name _;
    root /var/www/html/public/;

    ###
    # Increase timeouts to avoid 504 gateway timeouts on big file
    # uploads; values are in seconds.
    ###
    client_max_body_size 0;
    proxy_send_timeout 3600;
    proxy_read_timeout 3600;
    fastcgi_send_timeout 3600;
    fastcgi_read_timeout 3600;

    # ... your config
}
I'm not a Python developer, but you may find timeout settings at the Python level too (PHP, for comparison, has max_execution_time for this).

How can I track upload progress from app behind Nginx reverse proxy?

I have a node.js server behind an Nginx reverse proxy. The node.js app has an endpoint that receives a file upload using busboy. I would like to track progress as the file is uploaded; however, I believe Nginx buffers the request, so my app receives the file all at once. How can I make my node app receive the packets as soon as possible? I have tried setting the following in my nginx.conf file:
http {
    ....
    proxy_busy_buffers_size 0;
}
and
http {
    ....
    proxy_buffering off;
}
The documentation covers this: set proxy_request_buffering off. In my case, I set it as follows:
location / {
    ...
    proxy_request_buffering off;
    ...
}

HTTP 413 Request Entity Too Large in Node JS Project in GAE

I have my backend app deployed on GAE. It includes an API that uploads a file to a GCS bucket.
Recently I tried uploading a file of more than 50 MB and got 413 Request Entity Too Large.
After some research I found that the issue is with nginx: the API returns 413 for any file larger than 32 MB.
One solution suggested including an nginx.conf file and adding client_max_body_size 80M to it.
I did so, but I am still getting the same error.
This is my nginx-app.conf file:
server {
    location / {
        client_max_body_size 80m;
        client_body_buffer_size 512k;
    }
}
Is there anything obvious that I am missing here?
You can raise the request body limit in your Node.js application using:
app.use(bodyParser.json({limit: '50mb'}));
app.use(bodyParser.urlencoded({limit: '50mb', extended: true}));
You can also increase the limits in your nginx configuration:
server {
    location / {
        client_max_body_size xxm;
        client_body_buffer_size xxm;
    }
}
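Note that both layers enforce a limit and the smaller one wins: a request must pass Nginx's client_max_body_size and the app-level limit. A minimal sketch of the app-side check (the function name is hypothetical; the 80 MB figure mirrors client_max_body_size 80m from the question):

```javascript
// Hypothetical app-side body-size guard, mirroring nginx's client_max_body_size.
const LIMIT_BYTES = 80 * 1024 * 1024; // 80m, matching the nginx config above

function bodySizeStatus(contentLength, limit = LIMIT_BYTES) {
  // 413 Request Entity Too Large when the declared body exceeds the limit.
  return contentLength > limit ? 413 : 200;
}
```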
Just modify the nginx configuration file:
sudo nano /etc/nginx/nginx.conf
Search for the client_max_body_size directive. If you find it, increase its value to 100M; if you can't find it, add this line inside the http block:
client_max_body_size 100M;
To apply the changes, restart nginx:
sudo service nginx restart

AWS Elastic Beanstalk 502 bad gateway on Get method

I am trying to implement Facebook authentication in my web application, but I get a 502 Bad Gateway error when my nodejs server tries to send a response to the facebook/auth/callback request.
Here are my request logs:
My nodejs server is deployed on Elastic Beanstalk and uses an Nginx proxy.
I read that this error can occur when the response is too big, so I tried to increase the buffer sizes using the following code:
01buffer_proxy.con :
files:
  "/etc/nginx/conf.d/app_proxy_buffer.conf":
    mode: "000644"
    content: |
      server {
        location / {
          proxy_buffering on;
          proxy_buffer_size 16k;
          proxy_buffers 32 16k;
          client_body_buffer_size 128k;
          proxy_busy_buffers_size 64k;
        }
      }

container_commands:
  01_reload_nginx:
    command: "service nginx reload"
But I still get the error.
Do you think it's a response-size problem? If so, how do I edit the proxy buffer sizes on Elastic Beanstalk?
Thanks.

Nginx as proxy for node.js - failing requests

I have nginx set up as proxy to node.js for long polling like this:
location /node1 {
    access_log on;
    log_not_found on;
    proxy_pass http://127.0.0.1:3001/node;
    proxy_buffering off;
    proxy_read_timeout 60;
    break;
}
Unfortunately, about half of the long-poll requests return with an error and an empty response. My version of nginx is the one DreamHost offers, v0.8.53, and a long-poll request should be queued on the server for about 30 seconds.
The case is that:
querying node.js directly like:
curl --connect-timeout 60 --max-time 60 --form "username=User" http://127.0.0.1:3001/node/poll/2/1373730895/0/0
works fine, but going through nginx:
curl --connect-timeout 60 --max-time 60 --form "username=User" http://www.mydomain.com/node1/poll/2/1373730895/0/0
fails in about half the cases; the failed requests do not appear in the nginx access_log (the successful ones are there) and curl returns:
curl: (52) Empty reply from server
It might be connected with higher traffic volume, as I don't see this yet on another site that has lower traffic and pretty much the same settings.
I would be very grateful for any help with this issue, or for hints on how to debug it further.
